GeForce NOW Makes May-hem With 16 New Games, Including ‘The Lord of the Rings: Gollum’

What has it got in its pocketses? More games coming in May, that’s what.

GFN Thursday gets the summer started early with two newly supported games this week and 16 more coming later this month — including The Lord of the Rings: Gollum.

Don’t forget to take advantage of the limited-time discount on six-month Priority memberships. Priority members get faster access to cloud gaming servers, as well as support for RTX ON in supported games — all for 40% off the normal price. But hurry, this offer ends Sunday, May 21.

And the fun in May won’t stop there.

Stay tuned for more news on Xbox games joining the GeForce NOW library soon.

How Precious

No need to be sneaky about it — The Lord of the Rings: Gollum from Daedalic Entertainment comes to GeForce NOW when it releases on Thursday, May 25.

The action-adventure game and epic interactive experience takes place in parallel to the events described in The Fellowship of the Ring. Play as the enigmatic Gollum on his perilous journey and find out how he outwitted the most powerful characters in Middle-earth.

Climb the mountains of Mordor, sneak around Mirkwood and make difficult choices. Who will gain the upper hand: the cunning Gollum or the innocent Sméagol? Priority and Ultimate members can experience the epic story with support for RTX ray tracing and DLSS technology for AI-powered high-quality graphics, streaming across nearly any device with up to eight-hour sessions. Go Ultimate today with the one cloud gaming membership that rules them all.

May-Day Game-Day

It’s gonna be May, and that means more of the best games joining the GeForce NOW library.

Age of Wonders on GeForce NOW
Welcome to a new Age of Wonders.

Age of Wonders 4 is the long-awaited sequel from Paradox Interactive. Blending 4X strategy and turn-based combat, it lets members explore new magical realms and rule over a faction of their own design that grows alongside their expanding empire. Battle through each chapter and guide your empire to greatness.

It leads two new games joining the cloud this week:

  • Age of Wonders 4 (New release on Steam)
  • Showgunners (New release on Steam)

Then check out the rest of the titles on their way in May:

  • Occupy Mars: The Game (New release on Steam, May 10)
  • TT Isle of Man: Ride on the Edge 3 (New release on Steam, May 11)
  • Far Cry 6 (New release on Steam, May 11)
  • Tin Hearts (New release on Steam, May 16)
  • The Outlast Trials (New release on Steam, May 18)
  • Warhammer 40,000: Boltgun (New release on Steam, May 23)
  • Blooming Business: Casino (New release on Steam, May 23)
  • Railway Empire 2 (New release on Steam, May 25)
  • The Lord of the Rings: Gollum (New release on Steam, May 25)
  • Above Snakes (New release on Steam, May 25)
  • System Shock (New release on Steam, May 30)
  • Patch Quest (Steam)
  • The Ascent (Steam)
  • Lawn Mowing Simulator (Steam)
  • Conqueror’s Blade (Steam)

April Additions

In addition to the 23 games announced in April, another eight joined the GeForce NOW library of over 1,600 games.

Poker Club unfortunately couldn’t be added in April due to technical issues. Tin Hearts also didn’t make it in April, but is included in the May list due to a shift in its release date.

With so many titles streaming from the cloud, what device will you be streaming on? Let us know in the comments below, or on Twitter or Facebook.

Picture Perfect: AV1 Streaming Dazzles on GeForce RTX 40 Series GPUs With OBS Studio 29.1 Launch and YouTube Support

AV1, the next-generation video codec, is expanding its reach with today’s release of OBS Studio 29.1. This latest software update adds support for AV1 streaming to YouTube over Enhanced RTMP.

All GeForce RTX 40 Series GPUs — including laptop GPUs and the recently launched GeForce RTX 4070 — support real-time AV1 hardware encoding, providing 40% more efficient encoding on average than H.264 and delivering higher quality than competing GPUs.

AV1 vs. H.264 encode efficiency based on BD-SNR.

This reduces the upload bandwidth needed to stream, a common constraint imposed by streaming services and internet service providers. At higher resolutions, AV1 encoding is even more efficient. For example, AV1 enables streaming 4K at 60 frames per second with 10 Mbps of upload bandwidth — down from 20 Mbps with H.264 — making 4K60 streaming available to a wider audience.
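
To put those figures in perspective, here is a quick back-of-the-envelope calculation in Python that uses only the numbers cited in this post (the roughly 40% average efficiency gain and the 20 Mbps vs. 10 Mbps 4K60 upload rates); it is an illustration of the arithmetic, not a new measurement.

```python
# Rough bandwidth estimates based on the figures cited in this post:
# AV1 averages ~40% better encoding efficiency than H.264, and at 4K60
# the cited upload rates are 20 Mbps (H.264) vs. 10 Mbps (AV1).
h264_upload_mbps = {"1080p60": 8, "4K60": 20}  # 8 Mbps is the upper end of the 6-8 Mbps range noted in this post
average_av1_savings = 0.40

for resolution, bitrate in h264_upload_mbps.items():
    estimate = bitrate * (1 - average_av1_savings)
    print(f"{resolution}: H.264 {bitrate} Mbps -> AV1 ~{estimate:.0f} Mbps at the 40% average savings")

# At 4K60 the post cites an even larger gain, since AV1 grows more efficient at higher resolutions:
print(f"4K60 cited savings: {1 - 10 / 20:.0%}")  # 20 Mbps down to 10 Mbps
```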

AV1 — The New Standard

As a founding member of the Alliance for Open Media, NVIDIA has worked closely with industry titans to develop the AV1 codec. The work was driven by gamers and online content creators, who pushed the boundaries of formats defined roughly 20 years ago. The previous livestreaming standard, H.264, typically maxed out at 1080p and 60 fps at the commonly used bitrates of 6-8 Mbps, and often produced blocky, grainy images.

AV1’s increased efficiency enables streaming higher-quality images, allowing creators to stream at higher resolutions with smoother frame rates. Even in network-limited environments, streamers can now reap the benefits of high-quality video shared with their audience.

Support for AV1 on YouTube comes through the recent update to RTMP. The enhanced protocol also adds support for HEVC streaming, bringing new formats to users on the existing low-latency protocol they use for H.264 streaming. Enhanced RTMP ingestion has been released as a beta feature on YouTube.

Learn how to configure OBS Studio for streaming AV1 with GeForce RTX 40 Series GPUs in the OBS setup guide.

Better Streams With NVENC, NVIDIA Broadcast

GeForce RTX 40 Series GPUs usher in a new era of high-quality streaming with AV1 encoding support on the eighth-generation NVENC. A boon to streamers, NVENC offloads compute-intensive encoding tasks from the CPU to dedicated hardware on the GPU.

Comparison of 4K image quality in an AV1 livestream.¹

Designed to support the rigors of professional content creation, NVENC preserves video quality with higher accuracy than competing encoders. GeForce RTX users can stream higher-quality images at the same bitrate as competing products, or encode at a lower bitrate while maintaining similar picture quality.

NVIDIA Broadcast, part of the exclusive NVIDIA Studio suite of software, transforms any room into a home studio. Livestreams, voice chats and video calls look and sound better with powerful AI effects like eye contact, noise and room echo removal, virtual background and more.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.

 

¹ Source: 4K60 AV1-encoded video with AMD 7900 XT, GeForce RTX 4080 and Intel Arc 770 using OBS Studio default settings at 12 Mbps.

Latest NVIDIA Graphics Research Advances Generative AI’s Next Frontier

NVIDIA today introduced a wave of cutting-edge AI research that will enable developers and artists to bring their ideas to life — whether still or moving, in 2D or 3D, hyperrealistic or fantastical.

Around 20 NVIDIA Research papers advancing generative AI and neural graphics — including collaborations with over a dozen universities in the U.S., Europe and Israel — are headed to SIGGRAPH 2023, the premier computer graphics conference, taking place Aug. 6-10 in Los Angeles.

The papers include generative AI models that turn text into personalized images; inverse rendering tools that transform still images into 3D objects; neural physics models that use AI to simulate complex 3D elements with stunning realism; and neural rendering models that unlock new capabilities for generating real-time, AI-powered visual details.

Innovations by NVIDIA researchers are regularly shared with developers on GitHub and incorporated into products, including the NVIDIA Omniverse platform for building and operating metaverse applications and NVIDIA Picasso, a recently announced foundry for custom generative AI models for visual design. Years of NVIDIA graphics research helped bring film-style rendering to games, like the recently released Cyberpunk 2077 Ray Tracing: Overdrive Mode, the world’s first path-traced AAA title.

The research advancements presented this year at SIGGRAPH will help developers and enterprises rapidly generate synthetic data to populate virtual worlds for robotics and autonomous vehicle training. They’ll also enable creators in art, architecture, graphic design, game development and film to more quickly produce high-quality visuals for storyboarding, previsualization and even production.

AI With a Personal Touch: Customized Text-to-Image Models

Generative AI models that transform text into images are powerful tools to create concept art or storyboards for films, video games and 3D virtual worlds. Text-to-image AI tools can turn a prompt like “children’s toys” into nearly infinite visuals a creator can use for inspiration — generating images of stuffed animals, blocks or puzzles.

However, artists may have a particular subject in mind. A creative director for a toy brand, for example, could be planning an ad campaign around a new teddy bear and want to visualize the toy in different situations, such as a teddy bear tea party. To enable this level of specificity in the output of a generative AI model, researchers from Tel Aviv University and NVIDIA have two SIGGRAPH papers that enable users to provide image examples that the model quickly learns from.

One paper describes a technique that needs a single example image to customize its output, accelerating the personalization process from minutes to roughly 11 seconds on a single NVIDIA A100 Tensor Core GPU, more than 60x faster than previous personalization approaches.

A second paper introduces a highly compact model called Perfusion, which takes a handful of concept images to allow users to combine multiple personalized elements — such as a specific teddy bear and teapot — into a single AI-generated visual:

Examples of generative AI model personalizing text-to-image output based on user-provided images

Serving in 3D: Advances in Inverse Rendering and Character Creation 

Once a creator comes up with concept art for a virtual world, the next step is to render the environment and populate it with 3D objects and characters. NVIDIA Research is inventing AI techniques to accelerate this time-consuming process by automatically transforming 2D images and videos into 3D representations that creators can import into graphics applications for further editing.

A third paper created with researchers at the University of California, San Diego, discusses tech that can generate and render a photorealistic 3D head-and-shoulders model based on a single 2D portrait — a major breakthrough that makes 3D avatar creation and 3D video conferencing accessible with AI. The method runs in real time on a consumer desktop, and can generate a photorealistic or stylized 3D telepresence using only conventional webcams or smartphone cameras.

A fourth project, a collaboration with Stanford University, brings lifelike motion to 3D characters. The researchers created an AI system that can learn a range of tennis skills from 2D video recordings of real tennis matches and apply this motion to 3D characters. The simulated tennis players can accurately hit the ball to target positions on a virtual court, and even play extended rallies with other characters.

Beyond the test case of tennis, this SIGGRAPH paper addresses the difficult challenge of producing 3D characters that can perform diverse skills with realistic movement — without the use of expensive motion-capture data.

 

Not a Hair Out of Place: Neural Physics Enables Realistic Simulations

Once a 3D character is generated, artists can layer in realistic details such as hair — a complex, computationally expensive challenge for animators.

Humans have an average of 100,000 hairs on their heads, with each reacting dynamically to an individual’s motion and the surrounding environment. Traditionally, creators have used physics formulas to calculate hair movement, simplifying or approximating its motion based on the resources available. That’s why virtual characters in a big-budget film sport much more detailed heads of hair than real-time video game avatars.

A fifth paper showcases a method that can simulate tens of thousands of hairs in high resolution and in real time using neural physics, an AI technique that teaches a neural network to predict how an object would move in the real world.

The team’s novel approach for accurate simulation of full-scale hair is specifically optimized for modern GPUs. It offers significant performance leaps compared to state-of-the-art, CPU-based solvers, reducing simulation times from multiple days to merely hours — while also boosting the quality of hair simulations possible in real time. This technique finally enables both accurate and interactive physically based hair grooming.

Neural Rendering Brings Film-Quality Detail to Real-Time Graphics 

After an environment is filled with animated 3D objects and characters, real-time rendering simulates the physics of light reflecting through the virtual scene. Recent NVIDIA research shows how AI models for textures, materials and volumes can deliver film-quality, photorealistic visuals in real time for video games and digital twins.

NVIDIA invented programmable shading over two decades ago, enabling developers to customize the graphics pipeline. In these latest neural rendering inventions, researchers extend programmable shading code with AI models that run deep inside NVIDIA’s real-time graphics pipelines.

In a sixth SIGGRAPH paper, NVIDIA will present neural texture compression that delivers up to 16x more texture detail without taking additional GPU memory. Neural texture compression can substantially increase the realism of 3D scenes, as seen in the image below, which demonstrates how neural-compressed textures (right) capture sharper detail than previous formats, where the text remains blurry (center).

Three-pane image showing a page of text, a zoomed-in version with blurred text, and a zoomed-in version with clear text.
Neural texture compression (right) provides up to 16x more texture detail than previous texture formats without using additional GPU memory.

A related paper announced last year is now available in early access as NeuralVDB, an AI-enabled data compression technique that decreases the memory needed to represent volumetric data, like smoke, fire, clouds and water, by 100x.

NVIDIA also released today more details about neural materials research that was shown in the most recent NVIDIA GTC keynote. The paper describes an AI system that learns how light reflects from photoreal, many-layered materials, reducing the complexity of these assets down to small neural networks that run in real time, enabling up to 10x faster shading.

The level of realism can be seen in this neural-rendered teapot, which accurately represents the ceramic, the imperfect clear-coat glaze, fingerprints, smudges and even dust.

Rendered close-up images of a ceramic blue teapot with gold handle
The neural material model learns how light reflects from the many-layered, photoreal reference materials.

More Generative AI and Graphics Research

These are just the highlights — read more about all the NVIDIA papers at SIGGRAPH. NVIDIA will also present six courses, four talks and two Emerging Technology demos at the conference, with topics including path tracing, telepresence and diffusion models for generative AI.

NVIDIA Research has hundreds of scientists and engineers worldwide, with teams focused on topics including AI, computer graphics, computer vision, self-driving cars and robotics.

Renders and Dragons Rule Creative Kingdoms This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

Content creator Grant Abbitt embodies selflessness, one of the best qualities that a creative can possess. Passionate about giving back to the creative community, Abbitt offers inspiration, guidance and free education for others in his field through YouTube tutorials.

He designed Dragon, the 3D scene featured this week In the NVIDIA Studio, specifically to help new Blender users easily understand the steps in the creative process of using the software.

“Dragons can be extremely tough to make,” said Abbitt. While he could have spent more time refining the details, he said, “That wasn’t the point of the project. It’s all about the learning journey for the student.”

Abbitt understands the importance of early education. Providing actionable, straightforward instructions enables prospective 3D modelers to make gradual progress, he said. When encouraged, 3D artists keep morale high while gaining confidence and learning more advanced skills, Abbitt has noticed over his 30+ years of industry experience.

His own early days of learning 3D workflows presented unique obstacles, like software programs costing as much as the hardware, or super-slow internet, which required Abbitt to learn 3D through instructional VHS tapes.

Learning 3D modeling and animation on VHS tapes.

Undeterred by such challenges, Abbitt earned a media studies degree and populated films with his own 3D content.

Now a full-time 3D artist and content creator, Abbitt does what he loves while helping aspiring content creators realize their creative ambitions. In this tutorial, for example, Abbitt teaches viewers how to create a video game character in just 20 minutes.

Dragon Wheel

Abbitt also walked through how he created his Dragon piece.

“Reference images are a must,” stressed Abbitt. “Deviation from the intended vision is part of the creative process, but without a direction or foundation, things can quickly go off track.” This is especially important with freelance work and creative briefs provided by clients, he added.

Abbitt looked to Pinterest and ArtStation for creative inspiration and reference material, and sketched in the Krita app on his tablet. The remainder of the project was completed in Blender — the popular 3D creation suite — which is free and open source.

Reference imagery set a solid foundation for the project.

He began with the initial blockout, a 3D rough-draft level built using simple 3D shapes without details or polished art assets. The goal of the blockout was to prototype, test and adjust the foundational shapes of the dragon. Abbitt then combined block shapes into a single mesh model, the structural build of a 3D model, consisting of polygons.

 

More sculpting was followed by retopologizing the mesh, the process of simplifying the topology of a mesh to make it cleaner and easier to work with. This is a necessary step for models that will undergo more advanced editing and distortions.

Adding Blender’s multiresolution modifier enabled Abbitt to subdivide a mesh, especially useful for re-projecting details from another sculpt with a Shrinkwrap modifier, which allows an object to “shrink” to the surface of another object. It can be applied to meshes, lattices, curves, surfaces and texts.
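
As an aside for readers who want to try this step themselves, below is a minimal Blender Python (bpy) sketch of the same idea: a Multiresolution modifier subdivides a retopologized mesh, and a Shrinkwrap modifier snaps it to the original sculpt to re-project detail. The object names are placeholders, and the script is a generic illustration of the technique rather than Abbitt’s actual setup.

```python
import bpy

# Placeholder object names for illustration; substitute your own
# retopologized mesh and high-resolution sculpt.
retopo = bpy.data.objects["Dragon_Retopo"]
sculpt = bpy.data.objects["Dragon_Sculpt"]

# Make the retopologized mesh the active object so operators act on it.
bpy.context.view_layer.objects.active = retopo

# Multiresolution modifier: subdivide the mesh so it can hold re-projected detail.
multires = retopo.modifiers.new(name="Multires", type='MULTIRES')
for _ in range(3):
    bpy.ops.object.multires_subdivide(modifier=multires.name)

# Shrinkwrap modifier: snap ("shrink") the subdivided surface onto the sculpt,
# transferring its fine detail onto the clean topology.
shrinkwrap = retopo.modifiers.new(name="Shrinkwrap", type='SHRINKWRAP')
shrinkwrap.target = sculpt
shrinkwrap.wrap_method = 'NEAREST_SURFACEPOINT'

# Applying the Shrinkwrap bakes the projected detail into the mesh;
# here it is applied once at the current subdivision level.
bpy.ops.object.modifier_apply(modifier=shrinkwrap.name)
```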

At this stage, the power of Abbitt’s GeForce RTX 4090 GPU really started to shine. He sculpted fine details faster with Blender Cycles RTX-accelerated OptiX ray tracing in the viewport for fluid, interactive modeling with photorealistic detail. Baking and applying textures were done with buttery smooth ease.

Astonishing details for a single 3D model.

The RTX 4090 GPU also accelerated the animation phase, where the artist rigged and posed his model. “Modern content creators require GPU technology to see their creative visions fully realized at an efficient pace,” Abbitt said.

 

For the texturing, painting and rendering process, Abbitt said he found it “extremely useful to be able to see the finished results without a huge render time, thanks to NVIDIA OptiX.”

Rendering final files in popular 3D creative apps — like Blender, Autodesk Maya with Autodesk Arnold, OTOY’s OctaneRender and Maxon’s Redshift — is made 70-200% faster with an RTX 4090 GPU, compared to previous-generation cards. This results in invaluable time saved for a freelancer with a deadline or a student working on a group project.

Abbitt’s RTX GPU enabled OptiX ray tracing in Blender Cycles for the fastest final frame render.

That’s one scary dragon.

“NVIDIA GeForce RTX graphics cards are really the only choice at the moment for Blender users, because they offer so much more speed during render times,” said Abbitt. “You can quickly see results and make the necessary changes.”

Content creator Grant Abbitt.

Check out Abbitt’s YouTube channel with livestreams every Friday at 9 a.m. PT.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.

Now Shipping: DGX H100 Systems Bring Advanced AI Capabilities to Industries Worldwide

Customers from Japan to Ecuador and Sweden are using NVIDIA DGX H100 systems like AI factories to manufacture intelligence.

They’re creating services that offer AI-driven insights in finance, healthcare, law, IT and telecom — and working to transform their industries in the process.

Among the dozens of use cases, one aims to predict how factory equipment will age, so tomorrow’s plants can be more efficient.

Called Green Physics AI, it adds information like an object’s CO2 footprint, age and energy consumption to SORDI.ai, which claims to be the largest synthetic dataset in manufacturing.

Green Physics AI demo accelerated by DGX H100
Green Physics AI lets users model how objects age.

The dataset lets manufacturers develop powerful AI models and create digital twins that optimize the efficiency of factories and warehouses.  With Green Physics AI, they also can optimize energy and CO2 savings for the factory’s products and the components that go into them.

Meet Your Smart Valet

Imagine a robot that could watch you wash dishes or change the oil in your car, then do it for you.

Boston Dynamics AI Institute (The AI Institute), a research organization which traces its roots to Boston Dynamics, the well-known pioneer in robotics, will use a DGX H100 to pursue that vision. Researchers imagine dexterous mobile robots helping people in factories, warehouses, disaster sites and eventually homes.

“One thing I’ve dreamed about since I was in grad school is a robot valet who can follow me and do useful tasks — everyone should have one,” said Al Rizzi, CTO of The AI Institute.

That will require breakthroughs in AI and robotics, something Rizzi has seen firsthand. As chief scientist at Boston Dynamics, he helped create robots like Spot, a quadruped that can navigate stairs and even open doors for itself.

Initially, the DGX H100 will tackle tasks in reinforcement learning, a key technique in robotics. Later, it will run AI inference jobs while connected directly to prototype bots in the lab.

“It’s an extremely high-performance computer in a relatively compact footprint, so it provides an easy way for us to develop and deploy AI models,” said Rizzi.

Born to Run Gen AI

You don’t have to be a world-class research outfit or Fortune 500 company to use a DGX H100. Startups are unboxing some of the first systems to ride the wave of generative AI.

For example, Scissero, with offices in London and New York, employs a GPT-powered chatbot to make legal processes more efficient. Its Scissero GPT can draft legal documents, generate reports and conduct legal research.

In Germany, DeepL will use several DGX H100 systems to expand services like translation between dozens of languages for customers including Nikkei, Japan’s largest publishing company. DeepL recently released an AI writing assistant called DeepL Write.

Here’s to Your Health

Many of the DGX H100 systems will advance healthcare and improve patient outcomes.

In Tokyo, DGX H100s will run simulations and AI to speed the drug discovery process as part of the Tokyo-1 supercomputer. Xeureka — a startup launched in November 2021 by Mitsui & Co. Ltd., one of Japan’s largest conglomerates —  will manage the system.

Separately, hospitals and academic healthcare organizations in Germany, Israel and the U.S. will be among the first users of DGX H100 systems.

Lighting Up Around the Globe

Universities from Singapore to Sweden are plugging in DGX H100 systems for research across a range of fields.

A DGX H100 will train large language models for Johns Hopkins University Applied Physics Laboratory. Sweden’s KTH Royal Institute of Technology will use one to expand its supercomputing capabilities.

Among other use cases, Japan’s CyberAgent, an internet services company, is creating smart digital ads and celebrity avatars. Telconet, a leading telecommunications provider in Ecuador, is building intelligent video analytics for safe cities and language services to support customers across Spanish dialects.

An Engine of AI Innovation

Each NVIDIA H100 Tensor Core GPU in a DGX H100 system provides on average about 6x more performance than prior GPUs. A DGX H100 packs eight of them, each with a Transformer Engine designed to accelerate generative AI models.

The eight H100 GPUs connect over NVIDIA NVLink to create one giant GPU. Scaling doesn’t stop there: organizations can connect hundreds of DGX H100 nodes into an AI supercomputer using the 400 Gbps ultra-low latency NVIDIA Quantum InfiniBand, twice the speed of prior networks.

Fueled by a Full Software Stack

DGX H100 systems run on NVIDIA Base Command, a suite for accelerating compute, storage, and network infrastructure and optimizing AI workloads.

They also include NVIDIA AI Enterprise, software to accelerate data science pipelines and streamline development and deployment of generative AI, computer vision and more.

The DGX platform offers both high performance and efficiency. DGX H100 delivers a 2x improvement in kilowatts per petaflop over the DGX A100 generation.

NVIDIA DGX H100 systems, DGX PODs and DGX SuperPODs are available from NVIDIA’s global partners.

Manuvir Das, NVIDIA’s vice president of enterprise computing, announced DGX H100 systems are shipping in a talk at MIT Technology Review’s Future Compute event today. A link to his talk will be available here soon.

Rock ‘n’ Robotics: The White Stripes’ AI-Assisted Visual Symphony

Playfully blending art and technology, underground animator Michael Wartella has teamed up with artificial intelligence to breathe new life into The White Stripes’ fan-favorite song, “Black Math.”

The video was released earlier this month to celebrate the 20th anniversary of the groundbreaking “Elephant” album.

Wartella is known for his genre-bending work as a cartoonist and animator.

His Brooklyn-based Dream Factory Animation studio produced the “Black Math” video, which combines digital and practical animation techniques with AI-generated imagery.

“This track is 20 years old, so we wanted to give it a fresh look, but we wanted it to look like it was cut from the same cloth as classic White Stripes videos,” Wartella said.

For the “Black Math” video, Wartella turned to Automatic1111, an open-source web interface for the Stable Diffusion generative AI model. To create the video, Wartella and his team started off with the actual album cover, using AI to “bore” into the image.

They then used AI to train the AI and build more images in a similar style. “That was really crazy and interesting and everything built from there,” Wartella said.

This image-to-image deep learning model caused a sensation on its release last year, and is part of a new generation of AI tools that are transforming the arts.

“We used several different AI tools and animation tools,” Wartella said. “For every shot, I wanted this to look like an AI video in a way those classic CGI videos look very CGI now.”

Wartella and his team relied heavily on archived images and video of the musician duo as well as motion-capture techniques to create a video replicating the feel of late-1990s and early-2000s music videos.

Wartella has long relied on NVIDIA GPUs to run a full complement of digital animation tools on workstations from Austin, Texas-based BOXX Technologies.

“We’ve used BOXX workstations with NVIDIA cards for almost 20 years now,” he said. “That combination is just really powerful — it’s fast, it’s stable.”

Wartella describes his work on the “Black Math” video as a “collaboration” with the AI tool, using it to generate images, tweaking the results and then returning to the technology for more.

“I see this as a collaboration, not just pressing a button. It’s an incredibly creative tool,” Wartella said of generative AI.

The results were sometimes “kind of strange,” a quality that Wartella prizes.

He took the output from the AI, ran it through conventional composition and editing tools, and then processed the results through AI again.

Wartella felt that working with AI in this way made the video stronger and more abstract.

Wartella and his team used generative AI to create something that feels both different and familiar to White Stripes fans.

The video presents Jack and Meg White in their 2003 personas, emerging from a whimsical, dark cyber fantasy.

The video parallels the look and feel of the band’s videos from the early 2000s, even as it leans into the otherworldly, almost kaleidoscopic qualities of modern generative AI.

“The lyrics are anti-authoritarian and punkish, so the sound steered this one in that direction,” Wartella said. “The song itself has a scientific theme that is already a perfect fit for the AI.”

When “Black Math” was first released as part of The White Stripes’ critically acclaimed “Elephant” album, it grabbed attention for its high-energy, powerful guitar riffs and Jack White’s unmistakable vocals.

The song played a role in cementing the band’s reputation as a critical player in the garage rock revival of the early 2000s.

Wartella’s inventive approach with “Black Math” highlights the growing use of AI — as well as lively discussion of its implications — among creatives.

Over the past few months, AI-generated art has become increasingly prevalent across social media platforms, thanks to tools like Midjourney, OpenAI’s DALL·E, DreamStudio and Stable Diffusion.

As AI advances, Wartella said, we can expect to see more artists exploring the potential of these tools in their work.

“I’m in full favor of people having the opportunity to play around with the technology,” Wartella said. “We’ll definitely use AI again if the song or the project calls for it.”

The release of the “Black Math” music video coincides with the launch of “The White Stripes Elephant (20th Anniversary)” deluxe vinyl reissue package, available now through Jack White’s Third Man Records and Sony Legacy Recordings.

Watch the “Black Math” music video:

What Is Agent Assist?

“Please hold” may be the two words that customers hate most — and that contact center agents take pains to avoid saying.

Providing fast, accurate, helpful responses based on contextually relevant information is key to effective customer service. It’s even better if answers are personalized and take into account how a customer might be feeling.

All of this is made easier and quicker for human agents by what the industry calls agent assists.

Agent assist technology uses AI and machine learning to provide facts and make real-time suggestions that help human agents across telecom, retail and other industries conduct conversations with customers.

It can integrate with contact centers’ existing applications, provide faster onboarding for agents, improve the accuracy and efficiency of their responses, and increase customer satisfaction and loyalty.

How Agent Assist Technology Works

Agent assist technology gives human agents AI-powered information and real-time recommendations that can enhance their customer conversations.

Taking conversations as input, agent assist technology outputs accurate, timely suggestions on how to best respond to queries — using a combination of automatic speech recognition (ASR), natural language processing (NLP), machine learning and data analytics.

While a customer speaks to a human agent, ASR tools — like the NVIDIA Riva software development kit — transcribe speech into text, in real time. The text can then be run through NLP, AI and machine learning models that offer recommendations to the human agent by analyzing different aspects of the conversation.

First, AI models can evaluate the context of the conversation, identify topics and bring up relevant information for the human agent — like the customer’s account data, a record of their previous inquiries, documents with recommended products and additional information to help resolve issues.

Say a customer is looking to switch to a new phone plan. The agent assist could, for example, immediately display a chart on the human agent’s screen comparing the company’s offerings, which can be used as a reference throughout the conversation.

Another AI model can perform sentiment analysis based on the words a customer is using.

For example, if a customer says, “I’m extremely frustrated with my cellular reception,” the agent assist would advise the human agent to approach the customer differently from a situation where the customer says, “I am happy with my phone plan but am looking for something less expensive.”

It can even present a human agent with verbiage to consider using when soothing, encouraging, informing or otherwise guiding a customer toward conflict resolution.

And, at a conversation’s conclusion, agent assist technology can provide personalized, best next steps for the human agent to give the customer. It can also offer the human agent a summary of the interaction overall, along with feedback to inform future conversations and employee training.
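
To make that flow concrete, here is a minimal, self-contained Python sketch of a single agent-assist step: a transcribed customer utterance runs through simple keyword-based intent and sentiment checks, and the human agent receives a suggested opener plus relevant reference material. The account data, plan chart and rules are hypothetical placeholders for illustration; a production system would use ASR plus trained NLP models rather than keyword matching, and this is not the Riva or NeMo API.

```python
# Illustrative sketch of one agent-assist step. All data and rules below are
# made-up placeholders; real systems would rely on ASR transcripts plus
# trained NLP models instead of keyword lookups.

ACCOUNTS = {  # stand-in for a customer database
    "C-1001": {"plan": "Unlimited 5G", "open_tickets": ["slow data at home"]},
}

PLAN_CHART = ["Basic: $30/mo", "Plus: $45/mo", "Unlimited 5G: $60/mo"]

def classify_intent(text: str) -> str:
    text = text.lower()
    if "plan" in text:
        return "plan_change"
    if "reception" in text or "signal" in text:
        return "network_issue"
    return "general"

def score_sentiment(text: str) -> str:
    negative_words = ("frustrated", "angry", "upset")
    return "negative" if any(w in text.lower() for w in negative_words) else "neutral"

def assist(transcribed_text: str, customer_id: str) -> dict:
    intent = classify_intent(transcribed_text)
    sentiment = score_sentiment(transcribed_text)
    account = ACCOUNTS.get(customer_id, {})
    if sentiment == "negative":
        opener = "Acknowledge the frustration and apologize before troubleshooting."
    else:
        opener = "Confirm the request and summarize the available options."
    reference = PLAN_CHART if intent == "plan_change" else account.get("open_tickets", [])
    return {"intent": intent, "sentiment": sentiment, "opener": opener, "reference": reference}

# Example utterances from the article, already transcribed by an ASR step (not shown):
print(assist("I'm extremely frustrated with my cellular reception", "C-1001"))
print(assist("I am happy with my phone plan but am looking for something less expensive", "C-1001"))
```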

All such ASR, NLP and AI-powered capabilities come together in agent assist technology, which is becoming increasingly integral to businesses across industries.

How Agent Assist Technology Helps Businesses, Customers

By tapping into agent assist technology, businesses can improve productivity, employee retention and customer satisfaction, among other benefits.

For one, agent assist technology reduces contact center call times. Through NLP and intelligent routing algorithms, it can identify customer needs in real time, so human agents don’t need to hunt for basic customer information or search databases for answers.

Leading telecom provider T-Mobile — which offers award-winning service across its Customer Experience Centers — uses agent assist technology to help tackle millions of daily customer care calls. The NVIDIA NeMo framework helped the company achieve 10% higher accuracy for its ASR-generated transcripts across noisy environments, and Riva reduced latency for its agent assist by 10x. (Dive deeper into speech AI by watching T-Mobile’s on-demand NVIDIA GTC session.)

Agent assist technology also speeds up the onboarding process for human agents, helping them quickly become familiar with the products and services offered by their organization. In addition, it empowers contact center employees to provide high levels of service while maintaining low levels of stress — which means higher employee retention for enterprises.

Quicker, more accurate conflict resolution enabled by agent assist also leads to more positive contact center experiences, happier customers and increased loyalty for businesses.

Use Cases Across Industries

Agent assist technology can be used across industries, including:

  • Telecom — Agent assist can provide automated troubleshooting, technical tips and other helpful information for agents to relay to customers.
  • Retail — Agent assist can suggest products, features, pricing, inventory information and more in real time, as well as translate languages according to customer preferences.
  • Financial services — Agent assist can help detect fraud attempts by providing real-time alerts, so that human agents are aware of any suspicious activity throughout an inquiry.

Minerva CQ, a member of the NVIDIA Inception program for cutting-edge startups, provides agent assist technology that brings together real-time, adaptive workflows with behavioral cues, dialogue suggestions and knowledge surfacing to drive faster, better outcomes. Its technology — based on Riva, NeMo and NVIDIA Triton Inference Server — focuses on helping human agents in the energy, healthcare and telecom sectors.

History and Future of Agent Assist

Predecessors of agent assist technology can be traced back to the 1950s, when computer-based systems first replaced manual call routing.

More recently came intelligent virtual assistants, which are usually automated systems or bots that don’t have a human working behind them.

Smart devices and mobile technology have led to a rise in the popularity of these intelligent virtual assistants, which can answer questions, set reminders, play music, control home devices and handle other simple tasks.

But complex tasks and inquiries — especially for enterprises with customer service at their core — can be solved most efficiently when human agents are augmented by AI-powered suggestions. This is where agent assist technology has stepped in.

The technology has much potential for further advancement, with challenges including:

  • Developing methods for agent assists to adapt to changing customer expectations and preferences.
  • Further ensuring data privacy and security through encryption and other methods to strip conversations of confidential or sensitive information before running them through agent assist AI models.
  • Integrating agent assist with other emerging technologies like interactive digital avatars, which can see, hear, understand and communicate with end users to help customers while boosting their sentiment.

Learn more about NVIDIA speech AI technologies.

Welcome to the Family: GeForce NOW, Capcom Bring ‘Resident Evil’ Titles to the Cloud

Horror descends from the cloud this GFN Thursday with the arrival of publisher Capcom’s iconic Resident Evil series.

They’re part of nine new games expanding the GeForce NOW library of over 1,600 titles.

GeForce NOW Servers RTX 4080
More RTX 4080 SuperPODs just in time to play “Resident Evil” titles.

RTX 4080 SuperPODs are now live in Miami, Portland, Ore., and Stockholm. Follow along with the server rollout process, and make the Ultimate upgrade for unbeatable cloud gaming performance.

Survive in the Cloud

Resident Evil on GeForce NOW
“Resident Evil” now resides in the cloud.

The Resident Evil series makes its debut on GeForce NOW with Resident Evil 2, Resident Evil 3 and Resident Evil 7 Biohazard.

Survive against hordes of flesh-eating zombies and other bio-organic creatures created by the sinister Umbrella Corporation in these celebrated — and terrifying — Resident Evil games. The survival horror games feature memorable casts of characters and gripping storylines to keep members glued to their seats.

With RTX ON and high dynamic range, Ultimate and Priority members will also experience the most realistic lighting and deepest shadows. Bonus points for streaming with the lights off for an even more immersive experience.

The Newness

The Resident Evil titles lead nine new games joining the GeForce NOW library:

  • Shadows of Doubt (New release on Steam, April 24)
  • Afterimage (New release on Steam, April 24)
  • Roots of Pacha (New release on Steam, April 25)
  • Bramble: The Mountain King (New release on Steam, April 27)
  • The Swordsmen X: Survival (New release on Steam, April 27)
  • Poker Club (Free on Epic Games Store, April 27)
  • Resident Evil 2 (Steam)
  • Resident Evil 3 (Steam)
  • Resident Evil 7 Biohazard (Steam)

And check out the question of the week. Let us know your answer in the comments below, or on the GeForce NOW Facebook and Twitter channels.

Viral NVIDIA Broadcast Demo Drops Hammer on Imperfect Audio This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows.

Content creators in all fields can benefit from free, AI-powered technology available from NVIDIA Studio.

The Studio platform delivers RTX acceleration in over 110 popular creative apps plus an exclusive suite of AI-powered Studio software. NVIDIA Omniverse interconnects 3D workflows, Canvas turns simple brushstrokes into realistic landscape images and RTX Remix helps modders create stunning RTX remasters of classic PC games.

Spotlighted by this week’s In the NVIDIA Studio featured artist Unmesh Dinda, NVIDIA Broadcast transforms the homes, apartments and dorm rooms of content creators, livestreamers and people working from home through the power of AI — all without the need for specialized equipment.

Host of the widely watched YouTube channel PiXimperfect, Dinda takes the noise-canceling and echo-removal AI features in Broadcast to extremes. He turned the perfect demo into a viral hit, editing it faster thanks to RTX acceleration in his go-to video-editing software, Adobe Premiere Pro.

It’s Hammer Time

NVIDIA Broadcast has several popular features, including virtual background, autoframing, video noise removal, eye contact and vignette effects.

Two of the most frequently used features, noise and echo removal, caught the attention of Dinda, who saw Broadcast’s potential and wanted to show creators how to instantly improve their content.

The foundation of Dinda’s tutorial style came from his childhood. “My father would sit with me every day to help me with schoolwork,” he said. “He always used to explain with examples which were crystal clear to me, so now I do the same with my channel.”

Dinda contemplated how to demonstrate this incredible technology in a quick, relatable way.

“Think of a crazy idea that grabs attention instantly,” said Dinda. “Concepts like holding a drill in both hands or having a friend play drums right next to me.”

Dinda took the advice of famed British novelist William Golding, who once said, “The greatest ideas are the simplest.” Dinda’s final concept ended up as a scene of a hammer hitting a helmet on his head.

It turns out that seeing — and hearing — is believing.

Even with an electric fan whirring directly into his microphone and intense hammering on his helmet, Dinda can be heard crystal clear with Broadcast’s noise-removal feature turned on. To help emphasize the sorcery, Dinda briefly turns the feature off in the demo to reveal the painful sound his viewers would hear without it.

The demo launched on Instagram a few months ago and went viral overnight. Across social media platforms, the video now has over 12 million views and counting.

Dinda wasn’t harmed in the making of this video.

Views are fantastic, but the real gratification of Dinda’s work comes from a genuine desire to improve his followers’ skillsets, he said.

“The biggest inspiration comes from viewers,” said Dinda. “When they comment, message or meet me at an event to say how much the content has helped their career, it inspires me to create more and reach more creatives.”

 

Learn more and download Broadcast, free for all GeForce RTX GPU owners.

Hammer Out the Details

Dinda uses Adobe Premiere Pro to edit his videos, and his GeForce RTX 3080 Ti plays a major part in accelerating his creative workflow.

“I work with and render high-resolution videos on a daily basis, especially with Adobe Premiere Pro. Having a GPU like the GeForce RTX 3080 Ti helps me render and publish in time.” — Unmesh Dinda

He uses the GPU-accelerated decoder, called NVDEC, to unlock smooth playback and scrubbing of the high-resolution video footage he often works in.

As his hammer-filled Broadcast demo launched on several social media platforms, Dinda had the option to deploy the AI-powered, RTX-accelerated auto reframe feature. It automatically and intelligently tracks objects, and crops landscape video to social-media-friendly aspect ratios, saving even more time.

Dinda also used Adobe Photoshop to add graphical overlays to the video. With more than 30 GPU-accelerated features at his disposal — such as super resolution, blur gallery, object selection, smart sharpen and perspective warp — he can improve and adjust footage, quickly and easily.

 

Dinda used the GPU-accelerated NVIDIA encoder, aka NVENC, to export video up to 5x faster with his RTX GPU, leading to more time saved on the project.

Though he’s a full-time, successful video creator, Dinda stressed, “I have a normal life outside Adobe Photoshop, I promise!”

Streamer Unmesh Dinda.

Check out Dinda’s PiXimperfect channel, a free resource for learning Adobe Photoshop — another RTX-accelerated Studio app.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.

The Future of Intelligent Vehicle Interiors: Building Trust with HMI & AI

Imagine a future where your vehicle’s interior offers personalized experiences and builds trust through human-machine interfaces (HMI) and AI. In this episode of the NVIDIA AI Podcast, Andreas Binner, chief technology officer at Rightware, delves into this fascinating topic with host Katie Burke Washabaugh.

Rightware is a Helsinki-based company at the forefront of developing in-vehicle HMI. Its platform, Kanzi, works in tandem with NVIDIA DRIVE IX to provide a complete toolchain for designing personalized vehicle interiors for the next generation of transportation, including detailed visualizations of the car’s AI.

Binner touches on his journey into automotive technology and HMI, the evolution of infotainment in the automotive industry over the past decade, and surprising trends in HMI. They explore the influence of AI on HMI, novel AI-enabled features and the importance of trust in new technologies.

Other topics include the role of HMI in fostering trust between vehicle occupants and the vehicle, the implications of autonomous vehicle visualization, balancing larger in-vehicle screens with driver distraction risks, additional features for trust-building between autonomous vehicles and passengers, and predictions for intelligent cockpits in the next decade.

Tune in to learn about the innovations that Rightware’s Kanzi platform and NVIDIA DRIVE IX bring to the automotive industry and how they contribute to developing intelligent vehicle interiors.

Read more on the NVIDIA Blog:  NVIDIA DRIVE Ecosystem Creates Pioneering In-Cabin Features With NVIDIA DRIVE IX

You Might Also Like

Driver’s Ed: How Waabi Uses AI, Simulation to Teach Autonomous Vehicles to Drive

Teaching the AI brains of autonomous vehicles to understand the world as humans do requires billions of miles of driving experience. The road to achieving this astronomical level of driving leads to the virtual world. Learn how Waabi uses powerful high-fidelity simulations to train and develop production-level autonomous vehicles.

Polestar’s Dennis Nobelius on the Sustainable Performance Brand’s Plans

Driving enjoyment and autonomous driving capabilities can complement one another in intelligent, sustainable vehicles. Learn about the automaker’s plans to unveil its third vehicle, the Polestar 3, the tech inside it, and what the company’s racing heritage brings to the intersection of smarts and sustainability.

GANTheftAuto: Harrison Kinsley on AI-Generated Gaming Environments

Humans playing games against machines is nothing new, but now computers can develop their own games for people to play. Programming enthusiast and social media influencer Harrison Kinsley created GANTheftAuto, a neural network that generates a playable chunk of the classic video game Grand Theft Auto V.

Subscribe to the AI Podcast: Now Available on Amazon Music

The AI Podcast is now available through Amazon Music.

In addition, get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better: Have a few minutes to spare? Fill out this listener survey.
