Renders and Dragons Rule Creative Kingdoms This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

Content creator Grant Abbitt embodies selflessness, one of the best qualities that a creative can possess. Passionate about giving back to the creative community, Abbitt offers inspiration, guidance and free education for others in his field through YouTube tutorials.

He designed Dragon, the 3D scene featured this week In the NVIDIA Studio, specifically to help new Blender users easily understand the steps in the creative process of using the software.

“Dragons can be extremely tough to make,” said Abbitt. While he could have spent more time refining the details, he said, “That wasn’t the point of the project. It’s all about the learning journey for the student.”

Abbitt understands the importance of early education. Providing actionable, straightforward instructions enables prospective 3D modelers to make gradual progress, he said. Over his 30+ years of industry experience, Abbitt has noticed that this kind of encouragement keeps 3D artists’ morale high while they gain confidence and learn more advanced skills.

His own early days of learning 3D workflows presented unique obstacles: software that cost as much as the hardware, and internet so slow that Abbitt had to learn 3D through instructional VHS tapes.

Learning 3D modeling and animation on VHS tapes.

Undeterred by such challenges, Abbitt earned a media studies degree and populated films with his own 3D content.

Now a full-time 3D artist and content creator, Abbitt does what he loves while helping aspiring content creators realize their creative ambitions. In this tutorial, for example, Abbitt teaches viewers how to create a video game character in just 20 minutes.

Dragon Wheel

Abbitt also broke down how he created his Dragon piece.

“Reference images are a must,” stressed Abbitt. “Deviation from the intended vision is part of the creative process, but without a direction or foundation, things can quickly go off track.” This is especially important with freelance work and creative briefs provided by clients, he added.

Abbitt looked to Pinterest and ArtStation for creative inspiration and reference material, and sketched in the Krita app on his tablet. The remainder of the project was completed in Blender — the popular 3D creation suite — which is free and open source.

Reference imagery set a solid foundation for the project.

He began with the initial blockout, a rough draft of the scene built from simple 3D shapes without details or polished art assets. The goal of the blockout was to prototype, test and adjust the foundational shapes of the dragon. Abbitt then combined the block shapes into a single mesh, the polygonal structure that forms the basis of a 3D model.

More sculpting was followed by retopologizing the mesh, the process of simplifying a mesh’s topology to make it cleaner and easier to work with. This is a necessary step for models that will undergo more advanced editing and deformation.

Adding Blender’s multiresolution modifier enabled Abbitt to subdivide a mesh, especially useful for re-projecting details from another sculpt with a Shrinkwrap modifier, which allows an object to “shrink” to the surface of another object. It can be applied to meshes, lattices, curves, surfaces and texts.
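
As a rough illustration of this step, the bpy sketch below adds a Multiresolution modifier to a clean mesh and shrinkwraps it to a high-detail sculpt. The object names are hypothetical, and exact operator options can vary between Blender versions:

    import bpy

    # Hypothetical object names: a clean retopologized mesh and a detailed sculpt.
    retopo = bpy.data.objects["DragonRetopo"]
    sculpt = bpy.data.objects["DragonSculpt"]

    # Multiresolution modifier: subdivide the clean mesh so it can hold fine detail.
    retopo.modifiers.new(name="Multires", type="MULTIRES")
    bpy.context.view_layer.objects.active = retopo
    for _ in range(3):
        bpy.ops.object.multires_subdivide(modifier="Multires")

    # Shrinkwrap modifier: pull the subdivided surface onto the detailed sculpt,
    # re-projecting its details onto the clean topology.
    shrink = retopo.modifiers.new(name="Shrinkwrap", type="SHRINKWRAP")
    shrink.target = sculpt
    shrink.wrap_method = "NEAREST_SURFACEPOINT"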

At this stage, the power of Abbitt’s GeForce RTX 4090 GPU really started to shine. He sculpted fine details faster with RTX-accelerated OptiX ray tracing in the Blender Cycles viewport, enabling fluid, interactive modeling with photorealistic detail. Baking and applying textures was buttery smooth.
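
For readers following along, here’s a minimal sketch of switching Cycles to the OptiX backend from Blender’s Python console; the property names come from the bpy API, though they can shift slightly between versions:

    import bpy

    # Select OptiX as the Cycles compute backend (requires an NVIDIA RTX GPU).
    prefs = bpy.context.preferences.addons["cycles"].preferences
    prefs.compute_device_type = "OPTIX"
    prefs.get_devices()  # refresh the detected device list
    for device in prefs.devices:
        device.use = device.type == "OPTIX"

    # Render on the GPU with Cycles for both the viewport and final frames.
    bpy.context.scene.render.engine = "CYCLES"
    bpy.context.scene.cycles.device = "GPU"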

Astonishing details for a single 3D model.

The RTX 4090 GPU also accelerated the animation phase, where the artist rigged and posed his model. “Modern content creators require GPU technology to see their creative visions fully realized at an efficient pace,” Abbitt said.

For the texturing, painting and rendering process, Abbitt said he found it “extremely useful to be able to see the finished results without a huge render time, thanks to NVIDIA OptiX.”

Rendering final files in popular 3D creative apps — like Blender, Autodesk Maya with Autodesk Arnold, OTOY’s OctaneRender and Maxon’s Redshift — is made 70-200% faster with an RTX 4090 GPU, compared to previous-generation cards. This results in invaluable time saved for a freelancer with a deadline or a student working on a group project.

Abbitt’s RTX GPU enabled OptiX ray tracing in Blender Cycles for the fastest final frame render.

That’s one scary dragon.

“NVIDIA GeForce RTX graphics cards are really the only choice at the moment for Blender users, because they offer so much more speed during render times,” said Abbitt. “You can quickly see results and make the necessary changes.”

Content creator Grant Abbitt.

Check out Abbitt’s YouTube channel with livestreams every Friday at 9 a.m. PT.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.

Now Shipping: DGX H100 Systems Bring Advanced AI Capabilities to Industries Worldwide

Customers from Japan to Ecuador and Sweden are using NVIDIA DGX H100 systems like AI factories to manufacture intelligence.

They’re creating services that offer AI-driven insights in finance, healthcare, law, IT and telecom — and working to transform their industries in the process.

Among the dozens of use cases, one aims to predict how factory equipment will age, so tomorrow’s plants can be more efficient.

Called Green Physics AI, it adds information like an object’s CO2 footprint, age and energy consumption to SORDI.ai, which claims to be the largest synthetic dataset in manufacturing.

Green Physics AI, demoed on a DGX H100, lets users model how objects age.

The dataset lets manufacturers develop powerful AI models and create digital twins that optimize the efficiency of factories and warehouses. With Green Physics AI, they also can optimize energy and CO2 savings for the factory’s products and the components that go into them.

Meet Your Smart Valet

Imagine a robot that could watch you wash dishes or change the oil in your car, then do it for you.

Boston Dynamics AI Institute (The AI Institute), a research organization that traces its roots to Boston Dynamics, the well-known pioneer in robotics, will use a DGX H100 to pursue that vision. Researchers imagine dexterous mobile robots helping people in factories, warehouses, disaster sites and eventually homes.

“One thing I’ve dreamed about since I was in grad school is a robot valet who can follow me and do useful tasks — everyone should have one,” said Al Rizzi, CTO of The AI Institute.

That will require breakthroughs in AI and robotics, something Rizzi has seen firsthand. As chief scientist at Boston Dynamics, he helped create robots like Spot, a quadruped that can navigate stairs and even open doors for itself.

Initially, the DGX H100 will tackle tasks in reinforcement learning, a key technique in robotics. Later, it will run AI inference jobs while connected directly to prototype bots in the lab.

“It’s an extremely high-performance computer in a relatively compact footprint, so it provides an easy way for us to develop and deploy AI models,” said Rizzi.

Born to Run Gen AI

You don’t have to be a world-class research outfit or Fortune 500 company to use a DGX H100. Startups are unboxing some of the first systems to ride the wave of generative AI.

For example, Scissero, with offices in London and New York, employs a GPT-powered chatbot to make legal processes more efficient. Its Scissero GPT can draft legal documents, generate reports and conduct legal research.

In Germany, DeepL will use several DGX H100 systems to expand services like translation between dozens of languages it provides for customers, including Nikkei, Japan’s largest publishing company. DeepL recently released an AI writing assistant called DeepL Write.

Here’s to Your Health

Many of the DGX H100 systems will advance healthcare and improve patient outcomes.

In Tokyo, DGX H100s will run simulations and AI to speed the drug discovery process as part of the Tokyo-1 supercomputer. Xeureka — a startup launched in November 2021 by Mitsui & Co. Ltd., one of Japan’s largest conglomerates — will manage the system.

Separately, hospitals and academic healthcare organizations in Germany, Israel and the U.S. will be among the first users of DGX H100 systems.

Lighting Up Around the Globe

Universities from Singapore to Sweden are plugging in DGX H100 systems for research across a range of fields.

A DGX H100 will train large language models for Johns Hopkins University Applied Physics Laboratory. Sweden’s KTH Royal Institute of Technology will use one to expand its supercomputing capabilities.

Among other use cases, Japan’s CyberAgent, an internet services company, is creating smart digital ads and celebrity avatars. Telconet, a leading telecommunications provider in Ecuador, is building intelligent video analytics for safe cities and language services to support customers across Spanish dialects.

An Engine of AI Innovation

Each NVIDIA H100 Tensor Core GPU in a DGX H100 system provides on average about 6x more performance than prior GPUs. A DGX H100 packs eight of them, each with a Transformer Engine designed to accelerate generative AI models.

The eight H100 GPUs connect over NVIDIA NVLink to create one giant GPU. Scaling doesn’t stop there: organizations can connect hundreds of DGX H100 nodes into an AI supercomputer using the 400 Gbps ultra-low latency NVIDIA Quantum InfiniBand, twice the speed of prior networks.

Fueled by a Full Software Stack

DGX H100 systems run on NVIDIA Base Command, a suite for accelerating compute, storage and network infrastructure and optimizing AI workloads.

They also include NVIDIA AI Enterprise, software to accelerate data science pipelines and streamline development and deployment of generative AI, computer vision and more.

The DGX platform offers both high performance and efficiency. DGX H100 delivers a 2x improvement in kilowatts per petaflop over the DGX A100 generation.

NVIDIA DGX H100 systems, DGX PODs and DGX SuperPODs are available from NVIDIA’s global partners.

Manuvir Das, NVIDIA’s vice president of enterprise computing, announced DGX H100 systems are shipping in a talk at MIT Technology Review’s Future Compute event today. A link to his talk will be available here soon.

Rock ‘n’ Robotics: The White Stripes’ AI-Assisted Visual Symphony

Playfully blending art and technology, underground animator Michael Wartella has teamed up with artificial intelligence to breathe new life into The White Stripes’ fan-favorite song, “Black Math.”

The video was released earlier this month to celebrate the 20th anniversary of the groundbreaking “Elephant” album.

Wartella is known for his genre-bending work as a cartoonist and animator.

His Brooklyn-based Dream Factory Animation studio produced the “Black Math” video, which combines digital and practical animation techniques with AI-generated imagery.

“This track is 20 years old, so we wanted to give it a fresh look, but we wanted it to look like it was cut from the same cloth as classic White Stripes videos,” Wartella said.

For the “Black Math” video, Wartella turned to Automatic1111, an open-source web interface for the Stable Diffusion generative AI model. To create the video, Wartella and his team started off with the actual album cover, using AI to “bore” into the image.

They then used those AI outputs to train the model further and build more images in a similar style. “That was really crazy and interesting and everything built from there,” Wartella said.

The image-to-image deep learning model caused a sensation on its release last year and is part of a new generation of AI tools that are transforming the arts.

“We used several different AI tools and animation tools,” Wartella said. “For every shot, I wanted this to look like an AI video in a way those classic CGI videos look very CGI now.”

Wartella and his team relied heavily on archived images and video of the musician duo as well as motion-capture techniques to create a video replicating the feel of late-1990s and early-2000s music videos.

Wartella has long relied on NVIDIA GPUs to run a full complement of digital animation tools on workstations from Austin, Texas-based BOXX Technologies.

“We’ve used BOXX workstations with NVIDIA cards for almost 20 years now,” he said. “That combination is just really powerful — it’s fast, it’s stable.”

Wartella describes his work on the “Black Math” video as a “collaboration” with the AI tool, using it to generate images, tweaking the results and then returning to the technology for more.

“I see this as a collaboration, not just pressing a button. It’s an incredibly creative tool,” Wartella said of generative AI.

The results were sometimes “kind of strange,” a quality that Wartella prizes.

He took the output from the AI, ran it through conventional composition and editing tools, and then processed the results through AI again.

Wartella felt that working with AI in this way made the video stronger and more abstract.

Wartella and his team used generative AI to create something that feels both different and familiar to White Stripes fans.

The video presents Jack and Meg White in their 2003 personas, emerging from a whimsical, dark cyber fantasy.

The video parallels the look and feel of the band’s videos from the early 2000s, even as it leans into the otherworldly, almost kaleidoscopic qualities of modern generative AI.

“The lyrics are anti-authoritarian and punkish, so the sound steered this one in that direction,” Wartella said. “The song itself has a scientific theme that is already a perfect fit for the AI.”

When “Black Math” was first released as part of The White Stripes’ critically acclaimed “Elephant” album, it grabbed attention for its high-energy, powerful guitar riffs and Jack White’s unmistakable vocals.

The song played a role in cementing the band’s reputation as a critical player in the garage rock revival of the early 2000s.

Wartella’s inventive approach with “Black Math” highlights the growing use of AI — as well as lively discussion of its implications — among creatives.

Over the past few months, AI-generated art has been increasingly prevalent across various social media platforms, thanks to tools like Midjourney, OpenAI’s DALL·E, DreamStudio and Stable Diffusion.

As AI advances, Wartella said, we can expect to see more artists exploring the potential of these tools in their work.

“I’m in full favor of people having the opportunity to play around with the technology,” Wartella said. “We’ll definitely use AI again if the song or the project calls for it.”

The release of the “Black Math” music video coincides with the launch of “The White Stripes Elephant (20th Anniversary)” deluxe vinyl reissue package, available now through Jack White’s Third Man Records and Sony Legacy Recordings.

Watch the “Black Math” music video:

What Is Agent Assist?

“Please hold” may be the two words that customers hate most — and that contact center agents take pains to avoid saying.

Providing fast, accurate, helpful responses based on contextually relevant information is key to effective customer service. It’s even better if answers are personalized and take into account how a customer might be feeling.

All of this is made easier and quicker for human agents by what the industry calls agent assists.

Agent assist technology uses AI and machine learning to provide facts and make real-time suggestions that help human agents across telecom, retail and other industries conduct conversations with customers.

It can integrate with contact centers’ existing applications, provide faster onboarding for agents, improve the accuracy and efficiency of their responses, and increase customer satisfaction and loyalty.

How Agent Assist Technology Works

Agent assist technology gives human agents AI-powered information and real-time recommendations that can enhance their customer conversations.

Taking conversations as input, agent assist technology outputs accurate, timely suggestions on how to best respond to queries — using a combination of automatic speech recognition (ASR), natural language processing (NLP), machine learning and data analytics.

While a customer speaks to a human agent, ASR tools — like the NVIDIA Riva software development kit — transcribe speech into text, in real time. The text can then be run through NLP, AI and machine learning models that offer recommendations to the human agent by analyzing different aspects of the conversation.
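
As a rough sketch of that transcription step, the snippet below uses the nvidia-riva-client Python package to transcribe a recorded call. It assumes a Riva server already running at localhost:50051 and uses the simpler offline call; a live agent assist would use Riva’s streaming API instead, and details may differ across Riva versions:

    import riva.client

    # Assumes a Riva speech server is already running at this address.
    auth = riva.client.Auth(uri="localhost:50051")
    asr_service = riva.client.ASRService(auth)

    config = riva.client.RecognitionConfig(
        encoding=riva.client.AudioEncoding.LINEAR_PCM,
        language_code="en-US",
        max_alternatives=1,
        enable_automatic_punctuation=True,
    )

    # "call_audio.wav" is a hypothetical recording of the customer's speech.
    with open("call_audio.wav", "rb") as audio_file:
        audio_bytes = audio_file.read()

    response = asr_service.offline_recognize(audio_bytes, config)
    print(response.results[0].alternatives[0].transcript)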

First, AI models can evaluate the context of the conversation, identify topics and bring up relevant information for the human agent — like the customer’s account data, a record of their previous inquiries, documents with recommended products and additional information to help resolve issues.

Say a customer is looking to switch to a new phone plan. The agent assist could, for example, immediately display a chart on the human agent’s screen comparing the company’s offerings, which can be used as a reference throughout the conversation.

Another AI model can perform sentiment analysis based on the words a customer is using.

For example, if a customer says, “I’m extremely frustrated with my cellular reception,” the agent assist would advise the human agent to approach the customer differently from a situation where the customer says, “I am happy with my phone plan but am looking for something less expensive.”
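
The article doesn’t name a specific sentiment model, but as an illustrative sketch, an off-the-shelf classifier such as the Hugging Face transformers pipeline can make this distinction:

    from transformers import pipeline

    # Downloads a default English sentiment model on first use.
    classify = pipeline("sentiment-analysis")

    calls = [
        "I'm extremely frustrated with my cellular reception",
        "I am happy with my phone plan but am looking for something less expensive",
    ]
    for utterance in calls:
        result = classify(utterance)[0]
        # e.g. {'label': 'NEGATIVE', 'score': 0.99} -> suggest a de-escalation script.
        print(result["label"], round(result["score"], 2), "-", utterance)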

It can even present a human agent with verbiage to consider using when soothing, encouraging, informing or otherwise guiding a customer toward conflict resolution.

And, at a conversation’s conclusion, agent assist technology can provide personalized, best next steps for the human agent to give the customer. It can also offer the human agent a summary of the interaction overall, along with feedback to inform future conversations and employee training.
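
As a sketch of that summary step, the same transformers library exposes a summarization pipeline; a production agent assist would likely use a domain-tuned model, and the transcript below is hypothetical:

    from transformers import pipeline

    summarize = pipeline("summarization")

    # Hypothetical transcript assembled from the ASR output earlier in the call.
    transcript = (
        "Customer reported dropped calls at home. Agent checked tower status, "
        "found a local outage, and offered a billing credit. Customer accepted "
        "the credit and asked to be notified when the outage is resolved."
    )
    summary = summarize(transcript, max_length=40, min_length=10, do_sample=False)
    print(summary[0]["summary_text"])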

All such ASR, NLP and AI-powered capabilities come together in agent assist technology, which is becoming increasingly integral to businesses across industries.

How Agent Assist Technology Helps Businesses, Customers

By tapping into agent assist technology, businesses can improve productivity, employee retention and customer satisfaction, among other benefits.

For one, agent assist technology reduces contact center call times. Through NLP and intelligent routing algorithms, it can identify customer needs in real time, so human agents don’t need to hunt for basic customer information or search databases for answers.

Leading telecom provider T-Mobile — which offers award-winning service across its Customer Experience Centers — uses agent assist technology to help tackle millions of daily customer care calls. The NVIDIA NeMo framework helped the company achieve 10% higher accuracy for its ASR-generated transcripts across noisy environments, and Riva reduced latency for its agent assist by 10x. (Dive deeper into speech AI by watching T-Mobile’s on-demand NVIDIA GTC session.)

Agent assist technology also speeds up the onboarding process for human agents, helping them quickly become familiar with the products and services offered by their organization. In addition, it empowers contact center employees to provide high levels of service while maintaining low levels of stress — which means higher employee retention for enterprises.

Quicker, more accurate conflict resolution enabled by agent assist also leads to more positive contact center experiences, happier customers and increased loyalty for businesses.

Use Cases Across Industries

Agent assist technology can be used across industries, including:

  • Telecom — Agent assist can provide automated troubleshooting, technical tips and other helpful information for agents to relay to customers.
  • Retail — Agent assist can suggest products, features, pricing, inventory information and more in real time, as well as translate languages according to customer preferences.
  • Financial services — Agent assist can help detect fraud attempts by providing real-time alerts, so that human agents are aware of any suspicious activity throughout an inquiry.

Minerva CQ, a member of the NVIDIA Inception program for cutting-edge startups, provides agent assist technology that brings together real-time, adaptive workflows with behavioral cues, dialogue suggestions and knowledge surfacing to drive faster, better outcomes. Its technology — based on Riva, NeMo and NVIDIA Triton Inference Server — focuses on helping human agents in the energy, healthcare and telecom sectors.

History and Future of Agent Assist

Predecessors of agent assist technology can be traced back to the 1950s, when computer-based systems first replaced manual call routing.

More recently came intelligent virtual assistants, which are usually automated systems or bots that don’t have a human working behind them.

Smart devices and mobile technology have led to a rise in the popularity of these intelligent virtual assistants, which can answer questions, set reminders, play music, control home devices and handle other simple tasks.

But complex tasks and inquiries — especially for enterprises with customer service at their core — can be solved most efficiently when human agents are augmented by AI-powered suggestions. This is where agent assist technology has stepped in.

The technology has much potential for further advancement, with challenges including:

  • Developing methods for agent assists to adapt to changing customer expectations and preferences.
  • Further ensuring data privacy and security through encryption and other methods to strip conversations of confidential or sensitive information before running them through agent assist AI models.
  • Integrating agent assist with other emerging technologies like interactive digital avatars, which can see, hear, understand and communicate with end users to help customers while boosting their sentiment.

Learn more about NVIDIA speech AI technologies.

Welcome to the Family: GeForce NOW, Capcom Bring ‘Resident Evil’ Titles to the Cloud

Horror descends from the cloud this GFN Thursday with the arrival of publisher Capcom’s iconic Resident Evil series.

They’re part of nine new games expanding the GeForce NOW library of over 1,600 titles.

More RTX 4080 SuperPODs just in time to play “Resident Evil” titles.

RTX 4080 SuperPODs are now live in Miami, Portland, Ore., and Stockholm. Follow along with the server rollout process, and make the Ultimate upgrade for unbeatable cloud gaming performance.

Survive in the Cloud

“Resident Evil” now resides in the cloud.

The Resident Evil series makes its debut on GeForce NOW with Resident Evil 2, Resident Evil 3 and Resident Evil 7 Biohazard.

Survive against hordes of flesh-eating zombies and other bio-organic creatures created by the sinister Umbrella Corporation in these celebrated — and terrifying — Resident Evil games. The survival horror games feature memorable casts of characters and gripping storylines to keep members glued to their seats.

With RTX ON and high dynamic range, Ultimate and Priority members will also experience the most realistic lighting and deepest shadows. Bonus points for streaming with the lights off for an even more immersive experience.

The Newness

The Resident Evil titles lead nine new games joining the GeForce NOW library:

  • Shadows of Doubt (New release on Steam, April 24)
  • Afterimage (New release on Steam, April 24)
  • Roots of Pacha (New release on Steam, April 25)
  • Bramble: The Mountain King (New release on Steam, April 27)
  • The Swordsmen X: Survival (New release on Steam, April 27)
  • Poker Club (Free on Epic Games Store, April 27)
  • Resident Evil 2 (Steam)
  • Resident Evil 3 (Steam)
  • Resident Evil 7 Biohazard (Steam)

And check out the question of the week. Let us know your answer in the comments below, or on the GeForce NOW Facebook and Twitter channels.

Viral NVIDIA Broadcast Demo Drops Hammer on Imperfect Audio This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows.

Content creators in all fields can benefit from free, AI-powered technology available from NVIDIA Studio.

The Studio platform delivers RTX acceleration in over 110 popular creative apps plus an exclusive suite of AI-powered Studio software. NVIDIA Omniverse interconnects 3D workflows, Canvas turns simple brushstrokes into realistic landscape images and RTX Remix helps modders create stunning RTX remasters of classic PC games.

Spotlighted by this week’s In the NVIDIA Studio featured artist Unmesh Dinda, NVIDIA Broadcast transforms the homes, apartments and dorm rooms of content creators, livestreamers and people working from home through the power of AI — all without the need for specialized equipment.

Host of the widely watched YouTube channel PiXimperfect, Dinda takes the noise-canceling and echo-removal AI features in Broadcast to extremes. He turned the perfect demo into a viral hit, editing faster with RTX acceleration in his go-to video-editing software, Adobe Premiere Pro.

It’s Hammer Time

NVIDIA Broadcast has several popular features, including virtual background, auto frame, video noise removal, eye contact and vignette effects.

Two of the most frequently used features, noise and echo removal, caught the attention of Dinda, who saw Broadcast’s potential and wanted to show creators how to instantly improve their content.

The foundation of Dinda’s tutorial style came from his childhood. “My father would sit with me every day to help me with schoolwork,” he said. “He always used to explain with examples which were crystal clear to me, so now I do the same with my channel.”

Dinda contemplated how to demonstrate this incredible technology in a quick, relatable way.

“Think of a crazy idea that grabs attention instantly,” said Dinda. “Concepts like holding a drill in both hands or having a friend play drums right next to me.”

Dinda took the advice of famed British novelist William Golding, who once said, “The greatest ideas are the simplest.” Dinda’s final concept ended up as a scene of a hammer hitting a helmet on his head.

It turns out that seeing — and hearing — is believing.

Even with an electric fan whirring directly into his microphone and intense hammering on his helmet, Dinda can be heard crystal clear with Broadcast’s noise-removal feature turned on. To help emphasize the sorcery, Dinda briefly turns the feature off in the demo to reveal the painful sound his viewers would hear without it.

The demo launched on Instagram a few months ago and went viral overnight. Across social media platforms, the video now has over 12 million views and counting.

Dinda wasn’t harmed in the making of this video.

Views are fantastic, but the real gratification of Dinda’s work comes from a genuine desire to improve his followers’ skillsets, he said.

“The biggest inspiration comes from viewers,” said Dinda. “When they comment, message or meet me at an event to say how much the content has helped their career, it inspires me to create more and reach more creatives.”

Learn more and download Broadcast, free for all GeForce RTX GPU owners.

Hammer Out the Details

Dinda uses Adobe Premiere Pro to edit his videos, and his GeForce RTX 3080 Ti plays a major part in accelerating his creative workflow.

“I work with and render high-resolution videos on a daily basis, especially with Adobe Premiere Pro. Having a GPU like the GeForce RTX 3080 Ti helps me render and publish in time.” — Unmesh Dinda

He uses the GPU-accelerated decoder, called NVDEC, to unlock smooth playback and scrubbing of the high-resolution video footage he often works with.

As his hammer-filled Broadcast demo launched on several social media platforms, Dinda had the option to deploy the AI-powered, RTX-accelerated auto reframe feature. It automatically and intelligently tracks objects, and crops landscape video to social-media-friendly aspect ratios, saving even more time.

Dinda also used Adobe Photoshop to add graphical overlays to the video. With more than 30 GPU-accelerated features at his disposal — such as super resolution, blur gallery, object selection, smart sharpen and perspective warp — he can improve and adjust footage, quickly and easily.

Dinda used the GPU-accelerated NVIDIA encoder, aka NVENC, to speed up video exports by up to 5x with his RTX GPU, leading to more time saved on the project.

Though he’s a full-time, successful video creator, Dinda stressed, “I have a normal life outside Adobe Photoshop, I promise!”

Streamer Unmesh Dinda.

Check out Dinda’s PiXimperfect channel, a free resource for learning Adobe Photoshop — another RTX-accelerated Studio app.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.

The Future of Intelligent Vehicle Interiors: Building Trust with HMI & AI

Imagine a future where your vehicle’s interior offers personalized experiences and builds trust through human-machine interfaces (HMI) and AI. In this episode of the NVIDIA AI Podcast, Andreas Binner, chief technology officer at Rightware, delves into this fascinating topic with host Katie Burke Washabaugh.

Rightware is a Helsinki-based company at the forefront of developing in-vehicle HMI. Its platform, Kanzi, works in tandem with NVIDIA DRIVE IX to provide a complete toolchain for designing personalized vehicle interiors for the next generation of transportation, including detailed visualizations of the car’s AI.

Binner touches on his journey into automotive technology and HMI, the evolution of infotainment in the automotive industry over the past decade, and surprising trends in HMI. They explore the influence of AI on HMI, novel AI-enabled features and the importance of trust in new technologies.

Other topics include the role of HMI in fostering trust between vehicle occupants and the vehicle, the implications of autonomous vehicle visualization, balancing larger in-vehicle screens with driver distraction risks, additional features for trust-building between autonomous vehicles and passengers, and predictions for intelligent cockpits in the next decade.

Tune in to learn about the innovations that Rightware’s Kanzi platform and NVIDIA DRIVE IX bring to the automotive industry and how they contribute to developing intelligent vehicle interiors.

Read more on the NVIDIA Blog: NVIDIA DRIVE Ecosystem Creates Pioneering In-Cabin Features With NVIDIA DRIVE IX

You Might Also Like

Driver’s Ed: How Waabi Uses AI, Simulation to Teach Autonomous Vehicles to Drive

Teaching the AI brains of autonomous vehicles to understand the world as humans do requires billions of miles of driving experience. The road to achieving this astronomical level of driving leads to the virtual world. Learn how Waabi uses powerful high-fidelity simulations to train and develop production-level autonomous vehicles.

Polestar’s Dennis Nobelius on the Sustainable Performance Brand’s Plans

Driving enjoyment and autonomous driving capabilities can complement one another in intelligent, sustainable vehicles. Learn about the automaker’s plans to unveil its third vehicle, the Polestar 3, the tech inside it, and what the company’s racing heritage brings to the intersection of smarts and sustainability.

GANTheftAuto: Harrison Kinsley on AI-Generated Gaming Environments

Humans playing games against machines is nothing new, but now computers can develop their own games for people to play. Programming enthusiast and social media influencer Harrison Kinsley created GANTheftAuto, an AI-based neural network that generates a playable chunk of the classic video game Grand Theft Auto V.

Subscribe to the AI Podcast: Now Available on Amazon Music

The AI Podcast is now available through Amazon Music.

In addition, get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better: Have a few minutes to spare? Fill out this listener survey.

Right on Track: NVIDIA Open-Source Software Helps Developers Add Guardrails to AI Chatbots

Newly released open-source software can help developers guide generative AI applications to create impressive text responses that stay on track.

NeMo Guardrails will help ensure smart applications powered by large language models (LLMs) are accurate, appropriate, on topic and secure. The software includes all the code, examples and documentation businesses need to add safety to AI apps that generate text.

Today’s release comes as many industries are adopting LLMs, the powerful engines behind these AI apps. They’re answering customers’ questions, summarizing lengthy documents, even writing software and accelerating drug design.

NeMo Guardrails is designed to help users keep this new class of AI-powered applications safe.

Powerful Models, Strong Rails

Safety in generative AI is an industry-wide concern. NVIDIA designed NeMo Guardrails to work with all LLMs, such as OpenAI’s ChatGPT.

The software lets developers align LLM-powered apps so they’re safe and stay within the domains of a company’s expertise.

NeMo Guardrails enables developers to set up three kinds of boundaries:

  • Topical guardrails prevent apps from veering off into undesired areas. For example, they keep customer service assistants from answering questions about the weather.
  • Safety guardrails ensure apps respond with accurate, appropriate information. They can filter out unwanted language and enforce that references are made only to credible sources.
  • Security guardrails restrict apps to making connections only to external third-party applications known to be safe.

Virtually every software developer can use NeMo Guardrails — no need to be a machine learning expert or data scientist. They can create new rules quickly with a few lines of code.
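
As a minimal sketch of what those few lines can look like, the snippet below defines a topical rail in Colang and loads it with the nemoguardrails package; the rule text and model settings are illustrative, and the API reflects the initial open-source release:

    from nemoguardrails import LLMRails, RailsConfig

    # A topical rail in Colang: example user utterances, a canned bot response,
    # and a flow that ties them together.
    colang_content = """
    define user ask about weather
      "What's the weather like today?"
      "Will it rain tomorrow?"

    define bot deflect weather
      "I can only help with questions about your account and our products."

    define flow weather
      user ask about weather
      bot deflect weather
    """

    yaml_content = """
    models:
      - type: main
        engine: openai
        model: text-davinci-003
    """

    config = RailsConfig.from_content(colang_content=colang_content, yaml_content=yaml_content)
    rails = LLMRails(config)

    response = rails.generate(messages=[{"role": "user", "content": "Will it rain tomorrow?"}])
    print(response["content"])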

Riding Familiar Tools

Since NeMo Guardrails is open source, it can work with all the tools that enterprise app developers use.

For example, it can run on top of LangChain, an open-source toolkit that developers are rapidly adopting to plug third-party applications into the power of LLMs.
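
As a hedged sketch of that combination, LLMRails can take a LangChain LLM object directly, so an existing chain’s model runs behind the rails; the imports reflect the langchain package layout at the time of this release, and the config folder is hypothetical:

    from langchain.chat_models import ChatOpenAI
    from nemoguardrails import LLMRails, RailsConfig

    # Reuse a LangChain model inside the guardrails runtime. The folder
    # "./guardrails_config" is hypothetical and would hold the Colang and
    # YAML rail definitions shown above.
    llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.0)
    config = RailsConfig.from_path("./guardrails_config")
    rails = LLMRails(config, llm=llm)

    print(rails.generate(messages=[{"role": "user", "content": "Hi there!"}]))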

“Users can easily add NeMo Guardrails to LangChain workflows to quickly put safe boundaries around their AI-powered apps,” said Harrison Chase, who created the LangChain toolkit and founded a startup that bears its name.

In addition, NeMo Guardrails is designed to work with a broad range of LLM-enabled applications, such as Zapier. Zapier is an automation platform used by over 2 million businesses, and it’s seen firsthand how users are integrating AI into their work.

“Safety, security, and trust are the cornerstones of responsible AI development, and we’re excited about NVIDIA’s proactive approach to embed these guardrails into AI systems,” said Reid Robinson, lead product manager of AI at Zapier.

“We look forward to the good that will come from making AI a dependable and trusted part of the future.”

Available as Open Source and From NVIDIA

NVIDIA is incorporating NeMo Guardrails into the NVIDIA NeMo framework, which includes everything users need to train and tune language models using a company’s proprietary data.

Much of the NeMo framework is already available as open source code on GitHub. Enterprises also can get it as a complete and supported package, part of the NVIDIA AI Enterprise software platform.

NeMo is also available as a service. It’s part of NVIDIA AI Foundations, a family of cloud services for businesses that want to create and run custom generative AI models based on their own datasets and domain knowledge.

Using NeMo, South Korea’s leading mobile operator built an intelligent assistant that’s had 8 million conversations with its customers. A research team in Sweden employed NeMo to create LLMs that can automate text functions for the country’s hospitals, government and business offices.

An Ongoing Community Effort

Building good guardrails for generative AI is a hard problem that will require lots of ongoing research as AI evolves.

NVIDIA made NeMo Guardrails — the product of several years’ research — open source to contribute to the developer community’s tremendous energy and work on AI safety.

Together, our efforts on guardrails will help companies keep their smart services aligned with safety, privacy and security requirements so these engines of innovation stay on track.

For more details on NeMo Guardrails and to get started, see our technical blog.

On Earth Day, 5 Ways AI, Accelerated Computing Are Protecting the Planet

From climate modeling to endangered species conservation, developers, researchers and companies are keeping an AI on the environment with the help of NVIDIA technology.

They’re using NVIDIA GPUs and software to track endangered African black rhinos, forecast the availability of solar energy in the U.K., build detailed climate models and monitor environmental disasters from satellite imagery.

This Earth Day, discover five key ways AI and accelerated computing are advancing sustainability, climate science and energy efficiency.

1. Applying AI to Biodiversity Conservation, Sustainable Agriculture

To protect endangered species, camera-enabled edge AI devices embedded in the environment or on drones can help scientists observe animals in the wild, monitoring their populations and detecting threats from predators and poachers.

Conservation AI, a U.K.-based nonprofit, has deployed 70+ cameras around the world powered by NVIDIA Jetson modules for edge AI. Together with the NVIDIA Triton Inference Server, the Conservation AI platform can identify species of interest from footage in just four seconds — and help conservationists detect poachers and rapidly intervene. Another research team developed an NVIDIA Jetson-based solution to monitor endangered black rhinos in Namibia using drone-based AI.

An aerial view of a rhino, observed via drone. Image credit: WildTrack.

And artist Sofia Crespo raised awareness for critically endangered plants and animals through a generative AI art display at Times Square, using generative adversarial networks trained on NVIDIA GPUs to create high-resolution visuals representing relatively unknown species.

In the field of agriculture, Bay Area startup Verdant and smart tractor company Monarch Tractor are developing AI to support sustainable farming practices, including precision spraying to reduce the use of herbicides.

2. Powering Renewable Energy Research

NVIDIA AI and high performance computing are advancing nearly every field of renewable energy research.

Open Climate Fix, a nonprofit product lab and member of the NVIDIA Inception program for startups, is developing AI models that can help predict cloud cover over solar panels — helping electric grid operators determine how much solar energy can be generated that day to help meet customers’ power needs. Startups Utilidata and Anuranet are developing AI-enabled electric meters using NVIDIA Jetson to enable a more energy efficient, resilient grid.

Siemens Gamesa Renewable Energy is working with NVIDIA to create physics-informed digital twins of wind farms using NVIDIA Omniverse and NVIDIA Modulus. U.K. company Zenotech used cloud-based GPUs to accurately simulate the likely energy output of a wind farm’s 140 turbines. And Gigastack, a consortium-led project, is using Omniverse to build a proof of concept for a wind farm that will turn water into hydrogen fuel.

Researchers at Lawrence Livermore National Laboratory achieved a breakthrough in fusion energy using HPC simulations running on Sierra, the world’s sixth-fastest HPC system, which has 17,280 NVIDIA GPUs. And the U.K.’s Atomic Energy Authority is testing the NVIDIA Omniverse simulation platform to design a fusion energy power plant.

3. Accelerating Climate Models, Weather Visualizations

Accurately modeling the atmosphere is critical to predicting climate change in the coming decades.

To better predict extreme weather events, NVIDIA created FourCastNet, a physics-ML model that can forecast the precise path of catastrophic atmospheric rivers a full week in advance.

Using Omniverse, NVIDIA and Lockheed Martin are building an AI-powered digital twin for the U.S. National Oceanic and Atmospheric Administration that could significantly reduce the amount of time necessary to generate complex weather visualizations.

An initiative from Northwestern University and Argonne National Laboratory researchers is instead taking a hyper-local approach, using NVIDIA Jetson-powered devices to better understand wildfires, urban heat islands and the effect of climate on crops.

4. Managing Environmental Disasters With Satellite Data

When it’s difficult to gauge a situation from the ground, satellite data provides a powerful vantage point to monitor and manage climate disasters.

NVIDIA is working with the United Nations Satellite Centre to apply AI to the organization’s satellite imagery technology infrastructure, an initiative that will provide humanitarian teams with near-real-time insights about floods, wildfires and other climate-related disasters.

A methane leak detected by Orbital Sidekick technology.

NVIDIA Inception member Masterful AI has developed machine learning tools that can detect climate risks from satellite and drone feeds. The model has been used to identify rusted transformers that could spark a wildfire and improve damage assessments after hurricanes.

San Francisco-based Inception startup Orbital Sidekick operates satellites that collect hyperspectral intelligence — information from across the electromagnetic spectrum. Its NVIDIA Jetson-powered AI solution can detect hydrocarbon or gas leaks from this data, helping reduce the risk of leaks becoming serious crises.

5. Advancing Energy-Efficient Computing 

On its own, adopting NVIDIA tech is already a green choice: If every CPU-only server running AI and HPC worldwide switched to a GPU-accelerated system, the world could save around 20 trillion watt-hours of energy a year, equivalent to the electricity requirements of nearly 2 million U.S. homes.

NVIDIA Grace CPU Superchip

Semiconductor leaders are integrating the NVIDIA cuLitho software library to accelerate the time to market and boost the energy efficiency of computational lithography, the process of designing and manufacturing next-generation chips. And the NVIDIA Grace CPU Superchip — which scored 2x performance gains over comparable x86 processors in tests — can help data centers slash their power bills by up to half.

In the most recent MLPerf inference benchmark for AI performance, the NVIDIA Jetson AGX Orin system-on-module achieved gains of up to 63% in energy efficiency, supplying AI inference at low power levels, including on battery-powered systems.

NVIDIA last year introduced a liquid-cooled NVIDIA A100 Tensor Core GPU, which Equinix evaluated for use in its data centers. Both companies found that a data center using liquid cooling could run the same workloads as an air-cooled facility while using around 30% less energy.

Bonus: Robot-Assisted Recycling on the AI Podcast

Startup EverestLabs developed RecycleOS, an AI software and robotics solution that helps recycling facilities around the world recover an average of 25-40% more waste, ensuring fewer recyclable materials end up in landfills. The company’s founder and CEO talked about its tech on the NVIDIA AI Podcast:

Learn more about green computing, and about NVIDIA-accelerated applications in climate and energy.

Epic Benefits: Omniverse Connector for Unreal Engine Saves Content Creators Time and Effort

Content creators using Epic Games’ open, advanced real-time 3D creation tool, Unreal Engine, are now equipped with more features to bring their work to life with NVIDIA Omniverse, a platform for creating and operating metaverse applications.

The Omniverse Connector for Unreal Engine’s 201.0 update brings significant enhancements to creative workflows using both open platforms.

Streamlining Import, Export and Live Workflows

The Unreal Engine Omniverse Connector 201.0 release delivers improvements in import, export and live workflows, as well as updated software development kits.

New features include:

  • Alignment with Epic’s USD libraries and USDImporter plug-in: Improved compatibility between Omniverse and Epic’s Universal Scene Description (USD) libraries and USDImporter plug-in makes it easier to transfer assets between the two platforms.
  • Python 3.9 scripts with Omniverse URLs: Unreal Engine developers and technical artists can access Epic’s built-in Python libraries by running Python 3.9 scripts with Omniverse URLs, which link to files on Omniverse Nucleus servers, helping automate tasks.
  • Skeletal mesh blendshape import to morph targets: The Unreal Engine Connector 201.0 now allows users to import skeletal mesh blendshapes into morph targets, or stored geometry shapes that can be used for animation. This eases development and material work on characters that use NVIDIA Material Definition Language (MDL), reducing the time it takes to share character assets with other artists.
  • UsdLuxLight schema compatibility: Improved compatibility of Unreal Engine with the UsdLuxLight schema — the blueprint used to define data that describes lighting in USD — makes it easier for content creators to work with lighting in Omniverse. (A short USD Python sketch follows this list.)
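
For a feel of what the UsdLuxLight schema describes, here’s a minimal sketch using the pxr USD Python bindings (independent of the connector itself) to author a distant light; the file name and values are hypothetical:

    from pxr import Gf, Usd, UsdLux

    # Create a new USD stage and define a distant light with the UsdLux schema.
    stage = Usd.Stage.CreateNew("dragon_lighting.usda")
    sun = UsdLux.DistantLight.Define(stage, "/World/Sun")

    # Standard UsdLux attributes that both Unreal Engine and Omniverse can read
    # once the file lands on a Nucleus server.
    sun.CreateIntensityAttr(3000.0)
    sun.CreateColorAttr(Gf.Vec3f(1.0, 0.95, 0.9))
    sun.CreateAngleAttr(0.53)  # angular size of the light source in degrees

    stage.GetRootLayer().Save()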

Transforming Workflows One Update at a Time

Artists and game content creators are seeing notable improvements to their workflows thanks to this connector update.

Developer and creator Abdelrazik Maghata, aka MR GFX on YouTube, recently joined an Omniverse livestream to demonstrate his workflow using Unreal Engine and Omniverse. Maghata explained how to animate a character in real time by connecting the Omniverse Audio2Face generative AI-powered application to Epic’s MetaHuman framework in Unreal Engine.

Maghata, who’s been a content creator on YouTube for 15 years, uses his platform to teach others about the benefits of Unreal Engine for their 3D workflows. He’s recently added Omniverse into his repertoire to build connections between his favorite content creation tools.

“Omniverse will transform the world of 3D,” he said.

Omniverse ambassador and short-film phenom Jae Solina often uses the Unreal Engine Connector in his creative process, as well. The connector has greatly improved his workflow efficiency and increased productivity by providing interoperability between his favorite tools, Solina said.

Getting connected is simple. Learn how to accelerate creative workflows with the Unreal Engine Omniverse Connector by watching this video:

Get Plugged Into the Omniverse 

At the recent NVIDIA GTC conference, the Omniverse team hosted many sessions spotlighting how creators can enhance their workflows with generative AI, 3D SimReady assets and more. Watch for free on demand.

Plus, join the latest Omniverse community challenge, running through the end of the month. Use the Unreal Engine Omniverse Connector and share your creation — whether it’s fan art, a video-game character or even an original game — on social media using the hashtag #GameArtChallenge for a chance to be featured on channels for NVIDIA Omniverse (Twitter, LinkedIn, Instagram) and NVIDIA Studio (Twitter, Facebook, Instagram).

Get started with NVIDIA Omniverse by downloading the standard license free, or learn how Omniverse Enterprise can connect teams. Developers can get started with these Omniverse resources.

To stay up to date on the platform, subscribe to the newsletter and follow NVIDIA Omniverse on Instagram, Medium and Twitter. Check out the Omniverse forums, Discord server, Twitch and YouTube channels.
