Decoding How the Generative AI Revolution BeGAN

Editor’s note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible, and showcases new hardware, software, tools and accelerations for RTX PC users.

Generative models have completely transformed the AI landscape — headlined by popular apps such as ChatGPT and Stable Diffusion.

Paving the way for this boom were foundational AI models and generative adversarial networks (GANs), which sparked a leap in productivity and creativity.

NVIDIA’s GauGAN, which powers the NVIDIA Canvas app, is one such model that uses AI to transform rough sketches into photorealistic artwork.

How It All BeGAN

GANs are deep learning models that involve two complementary neural networks: a generator and a discriminator.

These neural networks compete against each other. The generator attempts to create realistic, lifelike imagery, while the discriminator tries to tell the difference between what’s real and what’s generated. As the two networks keep challenging each other, the GAN gets better and better at producing realistic-looking samples.
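
The adversarial loop described above can be sketched in a few lines of code. The toy example below is purely illustrative and has no relation to any NVIDIA implementation: it pits a two-parameter generator against a logistic discriminator on 1-D data, with hand-derived gradient updates. Real GANs use deep networks on images, but the push-and-pull is the same.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def real_sample():
    # "Real" data: samples centered at 4.0.
    return random.gauss(4.0, 0.5)

# Generator G(z) = mu + a*z starts far from the real distribution.
mu, a = 0.0, 1.0
# Discriminator D(x) = sigmoid(w*x + b).
w, b = 0.0, 0.0
lr = 0.01  # learning rate for both players

for _ in range(5000):
    z = random.gauss(0.0, 1.0)
    x_real = real_sample()
    x_fake = mu + a * z

    # Discriminator ascent on log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    b += lr * ((1 - d_real) - d_fake)

    # Generator ascent on log D(fake): try to fool the discriminator.
    d_fake = sigmoid(w * x_fake + b)
    mu += lr * (1 - d_fake) * w
    a += lr * (1 - d_fake) * w * z

print(f"generator mean after training: {mu:.2f}")  # drifts toward the real mean
```

With each round, the discriminator's gradient tells the generator which direction makes its samples look more "real": here, nudging mu toward 4.0.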

GANs excel at understanding complex data patterns and creating high-quality results. They’re used in applications including image synthesis, style transfer, data augmentation and image-to-image translation.

NVIDIA’s GauGAN, named after post-Impressionist painter Paul Gauguin, is an AI demo for photorealistic image generation. Built by NVIDIA Research, it directly led to the development of the NVIDIA Canvas app — and can be experienced for free through the NVIDIA AI Playground.

GauGAN has been wildly popular since it debuted at NVIDIA GTC in 2019 — used by art teachers, creative agencies, museums and millions more online.

Giving Sketch-to-Scenery a Gogh

Powered by GauGAN and local NVIDIA RTX GPUs, NVIDIA Canvas uses AI to turn simple brushstrokes into realistic landscapes, displaying results in real time.

Users can start by sketching simple lines and shapes with a palette of real-world elements like grass or clouds — referred to in the app as “materials.”

The AI model then generates the enhanced image on the other half of the screen in real time. For example, a few triangular shapes sketched using the “mountain” material will appear as a stunning, photorealistic range. Or users can select the “cloud” material and with a few mouse clicks transform environments from sunny to overcast.

The creative possibilities are endless — sketch a pond, and other elements in the image, like trees and rocks, will reflect in the water. Change the material from snow to grass, and the scene shifts from a cozy winter setting to a tropical paradise.

Canvas offers nine styles, each with 10 variations, and 20 materials to play with.

Canvas features a Panorama mode that enables artists to create 360-degree images for use in 3D apps. YouTuber Greenskull AI demonstrated Panorama mode by painting an ocean cove, then importing it into Unreal Engine 5.

Download the NVIDIA Canvas app to get started.

Consider exploring NVIDIA Broadcast, another AI-powered content creation app that transforms any room into a home studio. Broadcast is free for RTX GPU owners.

Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what’s new and what’s next by subscribing to the AI Decoded newsletter.

How an NVIDIA Engineer Unplugs to Recharge During Free Days

On a weekday afternoon, Ashwini Ashtankar sat on the bank of the Doodhpathri River, in a valley nestled in the Himalayas. Taking a deep breath, she noticed that there was no city noise, no pollution — and no work emails.

Ashtankar, a senior tools development engineer in NVIDIA’s Pune, India, office, took advantage of the company’s free days — two extra days off per quarter when the whole company disconnects from work — to recharge. Free days are fully paid by NVIDIA, not counted as vacation or as personal time off, and are in addition to country-specific holidays and time-away programs.

Free days give employees time to take an adventure, a breather — or both. Ashtankar and her husband, Dipen Sisodia — also an NVIDIAN — spent it outdoors, hiking up a mountain, playing in snow and exploring forests and lush green meadows.

“My free days give me time to focus on myself and recharge,” said Ashtankar. “We didn’t take our laptops. We were able to completely disconnect, like all NVIDIANs were doing at the same time.”

Ashtankar returned to work feeling refreshed and recharged, she said. Her team tests software features of NVIDIA products, focusing on GPU display drivers and the GeForce NOW game-streaming service, to make sure bugs are found and addressed before a product reaches customers.

“I take pride in tackling challenges with the highest level of quality and creativity, all in support of delivering the best products to our customers,” she said. “To do that, sometimes the most productive thing we can do is rest and let the soul catch up with the body.”

Ashtankar plans to build her career at NVIDIA for many years to come.

“I’ve never heard of another company that truly cares this much about its employees,” she said.

Learn more about NVIDIA life, culture and careers.

GeForce NOW Unleashes High-Stakes Horror With ‘Resident Evil Village’

Get ready to feel some chills, even amid the summer heat. Capcom’s award-winning Resident Evil Village brings a touch of horror to the cloud this GFN Thursday, part of three new games joining GeForce NOW this week.

And a new app update brings a visual enhancement to members, along with new ways to curate their GeForce NOW gaming libraries.

Greetings on GFN
#GreetingsFromGFN by @railbeam.

Members are showcasing their favorite locations to visit in the cloud. Follow along with #GreetingsFromGFN on @NVIDIAGFN social media accounts and share picturesque scenes from the cloud for a chance to be featured.

The Bell Tolls for All

Resident Evil Village on GeForce NOW
The cloud — big enough, even, for Lady Dimitrescu and her towering castle.

Resident Evil Village, the follow-up to Capcom’s critically acclaimed Resident Evil 7 Biohazard, delivers a gripping blend of survival-horror and action. Step into the shoes of Ethan Winters, a desperate father determined to rescue his kidnapped daughter.

Set against a backdrop of a chilling European village teeming with mutant creatures, the game includes a captivating cast of characters, including the enigmatic Lady Dimitrescu, who haunts the dimly lit halls of her grand castle. Fend off hordes of enemies, such as lycanthropic villagers and grotesque abominations.

Experience classic survival-horror tactics — such as resource management and exploration — mixed with action featuring intense combat and higher enemy counts.

Ultimate and Priority members can experience the horrors of this dark and twisted world in gruesome, mesmerizing detail, with support for ray tracing and high dynamic range (HDR) delivering lifelike shadows and sharp visual fidelity in every eerie hallway. Members can stream it all seamlessly from NVIDIA GeForce RTX-powered servers in the cloud, and can get a taste of the chills with the Resident Evil Village demo before taking on the towering Lady Dimitrescu in the full game.

I Can See Clearly Now

The latest GeForce NOW app update — version 2.0.64 — adds support for 10-bit color precision. Available for Ultimate members, this feature enhances image quality when streaming on Windows, macOS and NVIDIA SHIELD TV.

SDR10 on GeForce NOW
Rolling out now.

10-bit color precision significantly improves the accuracy and richness of color gradients during streaming. Members will especially notice its effects in scenes with detailed color transitions, such as vibrant skies, dimly lit interiors, and loading screens and menus. It’s especially useful on non-HDR displays and in games without HDR support. Find the setting in the GeForce NOW app > Streaming Quality > Color Precision, with the recommended default value of 10-bit.
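
The arithmetic behind the improvement: 8-bit color allows 2^8 = 256 levels per channel, while 10-bit allows 2^10 = 1,024, so each quantization step in a gradient is a quarter the size. The short sketch below (illustrative only, not GeForce NOW code) counts how many distinct levels survive when a smooth ramp is quantized at each precision.

```python
def quantize(value, bits):
    # Snap a 0..1 value to the nearest representable level.
    levels = (1 << bits) - 1          # 255 for 8-bit, 1023 for 10-bit
    return round(value * levels) / levels

gradient = [i / 9999 for i in range(10000)]  # a slow 0-to-1 color ramp

distinct_8 = len({quantize(v, 8) for v in gradient})
distinct_10 = len({quantize(v, 10) for v in gradient})

print(distinct_8, distinct_10)  # 256 vs. 1024 distinct steps
```

Four times as many steps means each visible band in a slow sky gradient is a quarter as tall, which is why color transitions look smoother.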

Try it out on the neon-lit streets of Cyberpunk 2077 for smoother color transitions, and traverse the diverse landscapes of Assassin’s Creed Valhalla and other games for a more immersive streaming experience.

The update, rolling out now, also brings bug fixes and new ways to curate a member’s in-app game library. For more information, visit the NVIDIA Knowledgebase.

Lights, Camera, Action: New Games

Beyond Good and Evil 20th Anniversary Edition on GeForce NOW
Uncover the truth.

Join the rebellion as action reporter Jade in Beyond Good & Evil – 20th Anniversary Edition from Ubisoft. Embark on this epic adventure in up to 4K at 60 frames per second with improved graphics and audio, a new speedrun mode, updated achievements and an exclusive anniversary gallery. Enjoy unique new rewards while exploring Hillys, and discover more about Jade’s past in a new treasure hunt across the planet.

Check out the list of new games this week:

  • Beyond Good & Evil – 20th Anniversary Edition (New release on Steam and Ubisoft, June 24)
  • Drug Dealer Simulator 2 (Steam)
  • Resident Evil Village (Steam)
  • Resident Evil Village Demo (Steam)

What are you planning to play this weekend? Let us know on X or in the comments below.

Into the Omniverse: SyncTwin Helps Democratize Industrial Digital Twins With Generative AI, OpenUSD

Editor’s note: This post is part of Into the Omniverse, a series focused on how technical artists, developers and enterprises can transform their workflows using the latest advances in OpenUSD and NVIDIA Omniverse.

Efficiency and sustainability are critical for organizations looking to be at the forefront of industrial innovation.

To address the digitalization needs of manufacturing and other industries, SyncTwin GmbH — a company that builds software to optimize production, intralogistics and assembly — developed a digital twin app using NVIDIA cuOpt, an accelerated optimization engine for solving complex routing problems, and NVIDIA Omniverse, a platform of application programming interfaces, software development kits and services that enable developers to build OpenUSD-based applications.

SyncTwin is harnessing the power of the extensible OpenUSD framework for describing, composing, simulating and collaborating within 3D worlds to help its customers create physically accurate digital twins of their factories. The digital twins can be used to optimize production and enhance digital precision to meet industrial performance targets.

OpenUSD’s Role in Modern Manufacturing

Manufacturing workflows are incredibly complex, making effective communication and integration across various domains pivotal to ensuring operational efficiency. The SyncTwin app provides seamless collaboration capabilities for factory plant managers and their teams, enabling them to optimize processes and resources.

The app uses OpenUSD and Omniverse to help make factory planning and operations easier and more accessible by integrating various manufacturing aspects into a cohesive digital twin. Customers can integrate visual data, production details, product catalogs, orders, schedules, resources and production settings all in one place with OpenUSD.

The SyncTwin app creates realistic, virtual environments that facilitate seamless interactions between different sectors of factory operations. This capability enables diverse data — including floorplans from Microsoft PowerPoint and warehouse container data from Excel spreadsheets — to be aggregated in a unified digital twin.

The flexibility of OpenUSD allows for non-destructive editing and composition of complex 3D assets and animations, further enhancing the digital twin.
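
To give a flavor of what non-destructive layering means, here is a toy sketch in plain Python. The layer dictionaries, prim name and attributes below are invented for illustration and are not the real OpenUSD API; the point is only that stronger layers override weaker ones without modifying the underlying sources.

```python
def compose(layers):
    """Merge layers weakest-first; later (stronger) opinions win."""
    merged = {}
    for layer in layers:
        for prim, attrs in layer.items():
            merged.setdefault(prim, {}).update(attrs)
    return merged

# Each data source contributes its own layer of "opinions."
floorplan_layer = {"Conveyor_01": {"position": (12.0, 3.0), "length_m": 8.0}}
schedule_layer = {"Conveyor_01": {"throughput_per_h": 240}}
edit_layer = {"Conveyor_01": {"position": (14.0, 3.0)}}  # a planner's tweak

scene = compose([floorplan_layer, schedule_layer, edit_layer])
print(scene["Conveyor_01"])
# The floorplan layer is untouched; the edit is just a stronger opinion.
```

Because the floorplan layer itself never changes, the planner's override can be discarded or swapped at any time, which is the essence of non-destructive editing.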

“OpenUSD is the common language bringing all these different factory domains into a single digital twin,” said Michael Wagner, cofounder and chief technology officer of SyncTwin. “The framework can be instrumental in dismantling data silos and enhancing collaborative efficiency across different factory domains, such as assembly, logistics and infrastructure planning.”

Hear Wagner discuss turning PowerPoint and Excel data into digital twin scenarios using the SyncTwin App in a LinkedIn livestream on July 4 at 11 a.m. CET.

Pioneering Generative AI in Factory Planning

By integrating generative AI into its platform, SyncTwin also provides users with data-driven insights and recommendations, enhancing decision-making processes.

This AI integration automates complex analyses, accelerates operations and reduces the need for manual inputs. Learn more about how SyncTwin and other startups are combining the powers of OpenUSD and generative AI to elevate their technologies in this NVIDIA GTC session.

Hear SyncTwin and NVIDIA experts discuss how digital twins are unlocking new possibilities in this recent community livestream:

By tapping into the power of OpenUSD and NVIDIA’s AI and optimization technologies, SyncTwin is helping set new standards for factory planning and operations, improving operational efficiency and supporting the vision of sustainability and cost reduction across manufacturing.

Get Plugged Into the World of OpenUSD

Learn more about OpenUSD and meet with NVIDIA experts at SIGGRAPH, taking place July 28-Aug. 1 at the Colorado Convention Center and online. Attend these SIGGRAPH highlights:

  • NVIDIA founder and CEO Jensen Huang’s fireside chat on Monday, July 29, covering the latest in generative AI and accelerated computing.
  • OpenUSD Day on Tuesday, July 30, where industry luminaries and developers will showcase how to build 3D pipelines and tools using OpenUSD.
  • Hands-on OpenUSD training for all skill levels.

Check out this video series about how OpenUSD can improve 3D workflows. For more resources on OpenUSD, explore the Alliance for OpenUSD forum and visit the AOUSD website.

Get started with NVIDIA Omniverse by downloading the free standard license, access OpenUSD resources and learn how Omniverse Enterprise can connect teams. Follow Omniverse on Instagram, LinkedIn, Medium and X. For more, join the Omniverse community on the forums, Discord server and YouTube channel.

Featured image courtesy of SyncTwin GmbH.

Thinking Outside the Blox: How Roblox Is Using Generative AI to Enhance User Experiences

Roblox is a colorful online platform that aims to reimagine the way that people come together — now that vision is being augmented by generative AI. In this episode of NVIDIA’s AI Podcast, host Noah Kravitz speaks with Anupam Singh, vice president of AI and growth engineering at Roblox, on how the company is using the technology to enhance virtual experiences with features such as automated chat filters and real-time text translation, which help build inclusivity and user safety. Singh also discusses how generative AI can be used to power coding assistants that help creators focus more on creative expression, rather than spending time manually scripting world-building features.

Time Stamps

1:49: Background on Roblox and user interactions within the platform
6:38: Singh’s insight on AI and machine learning’s role in Roblox’s growth
15:51: Using generative AI to enhance user self-expression
20:04: How generative AI simplifies content creation
24:26: What’s next for Roblox

You Might Also Like:

Media.Monks’ Lewis Smithingham on Enhancing Media and Marketing With AI – Ep. 222

In this episode, Lewis Smithingham, senior vice president of innovation and special operations at Media.Monks, discusses AI’s potential to enhance the media and entertainment industry. Smithingham delves into Media.Monks’ entertainment platform and its vision of AI enhancing creativity and enabling more personalized, scalable content creation.

The Case for Generative AI in the Legal Field – Ep. 210

AI-driven digital solutions enable law practitioners to search laws and cases intelligently — automating the time-consuming process of drafting and analyzing legal documents. In this episode, Thomson Reuters Chief Product Officer David Wong discusses AI’s potential to help deliver better access to justice.

Anima Anandkumar on Using Generative AI to Tackle Global Challenges – Ep. 203

Generative AI-based models can not only learn and understand natural languages — they can learn the very language of nature itself, presenting new possibilities for scientific research. Anima Anandkumar, senior director of AI research at NVIDIA, discusses generative AI’s potential to make splashes in the scientific community.

Deepdub’s Ofir Krakowski on Redefining Dubbing from Hollywood to Bollywood – Ep. 202

Deepdub acts as a digital bridge, providing access to content by using generative AI to break down language and cultural barriers in the entertainment landscape. In this episode, Deepdub co-founder and CEO Ofir Krakowski speaks on how AI-driven dubbing helps entertainment companies boost efficiency and increase accessibility.

Subscribe to the AI Podcast

Get the AI Podcast through iTunes, Google Play, Amazon Music, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better: Have a few minutes to spare? Fill out this listener survey.

Cut the Noise: NVIDIA Broadcast Supercharges Livestreaming, Remote Work

Editor’s note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible, and showcases new hardware, software, tools and accelerations for RTX PC users.

AI has changed computing forever. The spotlight has most recently been on generative AI, but AI-accelerated, NVIDIA RTX-powered tools have also been key in improving gaming, content creation and productivity over the years.

The NVIDIA Broadcast app is one example, using Tensor Cores on a local RTX GPU to seamlessly improve audio and video quality. Paired with the NVIDIA encoder (NVENC) built into GeForce RTX and NVIDIA RTX GPUs, the app makes it easy to get started as a livestreamer or to look professional during video conference calls.

The Stream Dream

High-quality livestreaming traditionally required expensive hardware. Many livestreamers relied on CPU-based software encoding with the x264 library, which often impacted gameplay quality. This led many to use a dual-PC setup, with one PC focused on gaming and content and the other on encoding the stream. It was complicated to assemble, difficult to troubleshoot and often cost-prohibitive for budding livestreamers.

NVENC is here to help. It’s a dedicated hardware video encoder on NVIDIA GPUs that handles the encoding, freeing up the rest of the system to focus on game and content performance. Industry-leading streaming apps like Open Broadcaster Software (OBS) have added support for NVENC, paving the way for a new generation of broadcasters on popular platforms like Twitch and YouTube.

Meanwhile, NVIDIA Maxine helps solve the issue of expensive equipment. It includes free, AI-powered features like virtual green screens and webcam-based augmented reality tracking that eliminate the need for special equipment like physical green screens or motion-capture suits. Broadcasters first got to experience the technology at TwitchCon 2019, where they tested OBS live on the show floor with an AI-accelerated green screen on a GeForce RTX 2080 GPU.

Maxine’s AI-powered effects debuted for RTX users in the RTX Voice beta and later moved into the NVIDIA Broadcast app.

Now Showing: NVIDIA Broadcast

NVIDIA Broadcast offers AI-powered features that improve audio and video quality for a variety of use cases. It’s user-friendly, works in any app and is a breeze to set up.

It includes:

  • Noise and Acoustic Echo Removal: AI eliminates unwanted background noise from both the mic and inbound audio at the touch of a button.
  • Virtual Backgrounds: Features like Background Removal, Replacement and Blur help customize backgrounds without the need for expensive equipment or complex lighting setups.
  • Eye Contact: AI helps make it appear as though a streamer is looking directly at the camera, even when they’re glancing off camera or taking notes.
  • Auto Frame: Dynamically tracks movements in real time, automatically cropping and zooming moving objects regardless of their position.
  • Vignette: AI applies a darkening effect to the corners of camera images, providing visual contrast to draw attention to the center of the video and adding stylistic flair.
  • Video Noise Removal: Removes visual noise from low-light situations for a cleaner picture.

NVIDIA Broadcast works by creating a virtual camera, microphone or speaker in Windows so that users can set up their devices once and use them in any broadcasting, video conferencing or voice chat apps, including Discord, Google Meet, Microsoft Teams, OBS Studio, Slack, Webex and Zoom.

Those with an NVIDIA GeForce RTX, TITAN RTX, NVIDIA RTX or Quadro RTX GPU can use their GPU’s dedicated Tensor Cores to help the app’s AI networks run in real time.

The same AI-powered technology in NVIDIA Broadcast is also available to app developers as a software development kit. Audiovisual technology company Elgato includes Maxine’s AI audio noise removal technology in its Wave Link software, while VTube Studio — a popular app for connecting a 3D model to a webcam for streaming as an animated character — offers an RTX-accelerated model tracker plug-in as a free download. Independent developer Xaymar uses NVIDIA Maxine in his VoiceFX plug-in.

Content creators can use this plug-in or Elgato’s virtual studio technology (VST) filter to clean up noise and echo from recordings in post-processing in video editing suites like Adobe Premiere Pro or in digital audio workstations like Ableton Live and Adobe Audition.

(Not) Hearing Is Believing

Since its release, NVIDIA Broadcast has been used by millions.

“I’ve utilized the video noise removal and background replacement the most,” said Mr_Vudoo, a Twitch personality and broadcaster. “The eye contact feature was very interesting and quite honestly took me by surprise at how well it worked.”

Unmesh Dinda, host of the YouTube channel PiXimperfect, demonstrated NVIDIA Broadcast’s noise-canceling and echo-removal AI features in an extreme scenario. He set an electric fan whirring directly into his microphone and donned a helmet that was hammered on intensely. Even with these loud sounds in the background, Dinda could be heard crystal clear with Broadcast’s noise-removal feature turned on. The video has racked up more than 12 million views.

NVIDIA Broadcast is also a useful tool for the growing remote workforce. In an article, Tom’s Hardware editor-in-chief Avram Piltch detailed his testing of the app’s noise reduction features against noisy air conditioners, lawn-mowing neighbors and even a robot-wielding, tantrum-throwing child. Broadcast’s AI audio filters prevailed every time:

“I got my eight-year-old to fake throwing a fit right behind me and, once I enabled noise removal, every whine of ‘I’m not going to bed’ went silent (at least on the recording),” said Piltch. “To double the challenge, we had him throw a tantrum while carrying around a robot car with whirring treads. Once again, NVIDIA Broadcast removed all of the unwanted sound.”

Even everyday scenarios like video calls with a medical professional benefit from NVIDIA Broadcast’s AI-powered background removal.

Download NVIDIA Broadcast for free on any RTX-powered desktop or laptop.

Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what’s new and what’s next by subscribing to the AI Decoded newsletter.

EvolutionaryScale Debuts With ESM3 Generative AI Model for Protein Design

Generative AI has revolutionized software development with prompt-based code generation — protein design is next.

EvolutionaryScale today announced the release of its ESM3 model, the third-generation ESM model, which simultaneously reasons over the sequence, structure and functions of proteins, giving protein discovery engineers a programmable platform.

The startup, which emerged from the Meta FAIR (Fundamental AI Research) unit, recently landed funding led by Lux Capital, Nat Friedman and Daniel Gross, with investment from NVIDIA.

At the forefront of programmable biology, EvolutionaryScale can assist researchers in engineering proteins that can help target cancer cells, find alternatives to harmful plastics, drive environmental mitigations and more.

EvolutionaryScale is pioneering the frontier of programmable biology with ESM3, trained using NVIDIA H100 Tensor Core GPUs with the most compute ever put into a biological foundation model. The 98-billion-parameter ESM3 model used roughly 25x more flops and 60x more data than its predecessor, ESM2.

The company, which developed a database of more than 2 billion protein sequences to train its AI model, offers drug discovery researchers technology that can provide clues for drug development and disease eradication, and, as its name suggests, insights into how humans have evolved as a species.

Accelerating In Silico Biological Research With ESM3

With leaps in training data, EvolutionaryScale aims to accelerate protein discovery with ESM3.

The model was trained on almost 2.8 billion protein sequences sampled from organisms and biomes, allowing scientists to prompt the model to identify and validate new proteins with increasing levels of accuracy.

ESM3 offers significant updates over previous versions. The model is natively generative, and it is an “all-to-all” model, meaning structure and function annotations can be provided as input rather than only produced as output.

Once it’s made publicly available, scientists will be able to fine-tune this base model to construct purpose-built models based on their own proprietary data. The boost in protein engineering capabilities from ESM3’s large-scale generative training across enormous amounts of data makes it a kind of time machine for in silico biological research.

Driving the Next Big Breakthroughs With NVIDIA BioNeMo

ESM3 provides biologists and protein designers with a generative AI boost, helping improve their engineering and understanding of proteins. With simple prompts, it can generate new proteins from a provided scaffold, self-improve its protein designs based on feedback and design proteins with the functionality a user specifies. These capabilities can be used together in any combination for chain-of-thought protein design, letting users iterate back and forth as if they were messaging a researcher who had memorized the intricate three-dimensional meaning of every known protein sequence and learned its language fluently.
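
The iterative design loop described above can be pictured as propose, score, refine. The toy sketch below is purely illustrative: the scoring function, motif and mutation scheme are invented stand-ins, and the real ESM3 interface looks nothing like this; it only shows the shape of feedback-driven sequence design.

```python
import random

random.seed(7)
AMINO = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acid letters

def score(seq, motif="GFP"):
    # Invented objective: reward a target motif plus residue diversity.
    return seq.count(motif) * 10 + len(set(seq))

def refine(seq, steps=500):
    """Propose a point mutation; keep it when the score doesn't drop."""
    best = seq
    for _ in range(steps):
        i = random.randrange(len(best))
        mutant = best[:i] + random.choice(AMINO) + best[i + 1:]
        if score(mutant) >= score(best):
            best = mutant
    return best

start = "A" * 30
designed = refine(start)
print(score(start), "->", score(designed))  # the score only climbs
```

A designer using a real model would replace the toy scorer with model feedback about structure and function, steering each round of generation.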

“In our internal testing we’ve been impressed by the ability of ESM3 to creatively respond to a variety of complex prompts,” said Tom Sercu, co-founder and VP of engineering at EvolutionaryScale. “It was able to solve an extremely hard protein design problem to create a novel Green Fluorescent Protein. We expect ESM3 will help scientists accelerate their work and open up new possibilities — we’re looking forward to seeing how it will contribute to future research in the life sciences.”

EvolutionaryScale is opening an API in closed beta today, and code and weights are available for a small open version of ESM3 for non-commercial use. This version is coming soon to NVIDIA BioNeMo, a generative AI platform for drug discovery. The full ESM3 family of models will soon be available to select customers as an NVIDIA NIM microservice, runtime-optimized in collaboration with NVIDIA and supported by an NVIDIA AI Enterprise software license for testing at ai.nvidia.com.

The computing power required to train these models is growing exponentially. ESM3 was trained using the Andromeda cluster, which uses NVIDIA H100 GPUs and NVIDIA Quantum-2 InfiniBand networking.

The ESM3 model will be available on select partner platforms and NVIDIA BioNeMo.

See notice regarding software product information.

Why 3D Visualization Holds Key to Future Chip Designs

Multi-die chips, known as three-dimensional integrated circuits, or 3D-ICs, represent a revolutionary step in semiconductor design. The dies are vertically stacked to create a compact structure that boosts performance without increasing power consumption.

However, as chips become denser, they present more complex challenges in managing electromagnetic and thermal stresses. To understand and address this, advanced 3D multiphysics visualizations become essential to design and diagnostic processes.

At this week’s Design Automation Conference, a global event showcasing the latest developments in chips and systems, Ansys — a company that develops engineering simulation and 3D design software — will share how it’s using NVIDIA technology to overcome these challenges to build the next generation of semiconductor systems.

To enable 3D visualizations of simulation results for its users, Ansys uses NVIDIA Omniverse, a platform of application programming interfaces, software development kits, and services that enables developers to easily integrate Universal Scene Description (OpenUSD) and NVIDIA RTX rendering technologies into existing software tools and simulation workflows.

The platform powers visualizations of 3D-IC results from Ansys solvers so engineers can evaluate phenomena like electromagnetic fields and temperature variations to optimize chips for faster processing, increased functionality and improved reliability.

With Ansys Icepak on the NVIDIA Omniverse platform, engineers can simulate temperatures across a chip according to different power profiles and floor plans. Finding chip hot-spots can lead to better design of the chips themselves, as well as auxiliary cooling devices. However, these 3D-IC simulations are computationally intensive, limiting the number of simulations and design points users can explore.
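To get a feel for why these simulations are computationally intensive, consider a toy steady-state heat solve over a 2D floor plan. This is a minimal finite-difference sketch, not Ansys Icepak's solver; the grid size, power map and iteration count are invented for illustration.

```python
import numpy as np

def steady_state_temperature(power, iters=2000, ambient=25.0, alpha=0.1):
    """Jacobi iteration for a toy 2D steady-state heat equation.

    power: 2D array of heat sources (one cell per floor-plan tile).
    Returns a temperature field with fixed ambient boundaries.
    """
    t = np.full(power.shape, ambient)
    for _ in range(iters):
        # Each cell relaxes toward the average of its four neighbors,
        # plus a contribution from the local power dissipation.
        interior = 0.25 * (t[:-2, 1:-1] + t[2:, 1:-1]
                           + t[1:-1, :-2] + t[1:-1, 2:])
        t[1:-1, 1:-1] = interior + alpha * power[1:-1, 1:-1]
    return t

# A 64x64 floor plan with one hypothetical hot block: already thousands
# of stencil updates per sweep -- real 3D-IC meshes are far larger.
power = np.zeros((64, 64))
power[20:30, 20:30] = 1.0
temps = steady_state_temperature(power)
print(float(temps.max()) > 25.0)  # hot spot rises above ambient
```

Every design point the engineer wants to explore repeats a solve like this (in 3D, over a far finer mesh), which is exactly the cost that motivates AI surrogate models.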

Using NVIDIA Modulus, combined with novel techniques for handling arbitrary power patterns in the Ansys RedHawk-SC electrothermal data pipeline and model training framework, the Ansys R&D team is exploring the acceleration of simulation workflows with AI-based surrogate models. Modulus is an open-source AI framework for building, training and fine-tuning physics-ML models at scale with a simple Python interface.

With the NVIDIA Modulus Fourier neural operator (FNO) architecture, which can parameterize solutions for a distribution of partial differential equations, Ansys researchers created an AI surrogate model that efficiently predicts temperature profiles for any given power profile and a given floor plan defined by system parameters like heat transfer coefficient, thickness and material properties. This model offers near real-time results at significantly reduced computational costs, allowing Ansys users to explore a wider design space for new chips.
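The core idea of an FNO layer is compact: transform the input to Fourier space, apply learned weights to a truncated set of low-frequency modes, then transform back. The sketch below is a toy 1D version in NumPy, not the Modulus implementation; the signal and weights are random stand-ins for a power profile and trained parameters.

```python
import numpy as np

def spectral_conv_1d(x, weights, modes):
    """Core of a Fourier neural operator layer (toy, 1D).

    x: real input signal, shape (n,).
    weights: complex learned weights for the lowest `modes` frequencies.
    """
    x_ft = np.fft.rfft(x)                    # to frequency space
    out_ft = np.zeros_like(x_ft)
    out_ft[:modes] = x_ft[:modes] * weights  # mix only low-frequency modes
    return np.fft.irfft(out_ft, n=x.size)    # back to physical space

rng = np.random.default_rng(0)
n, modes = 128, 16
x = rng.standard_normal(n)                   # e.g. a 1D power profile
w = rng.standard_normal(modes) + 1j * rng.standard_normal(modes)
y = spectral_conv_1d(x, w, modes)
print(y.shape)  # (128,)
```

Because the learned weights live in frequency space, the operator is resolution-independent, which is what lets a trained surrogate generalize across discretizations of the same physical problem.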

Ansys uses a 3D FNO model to infer temperatures on a chip surface for unseen power profiles, a given die height and heat-transfer coefficient boundary condition.

Following a successful proof of concept, the Ansys team will explore integration of such AI surrogate models for its next-generation RedHawk-SC platform using NVIDIA Modulus.

As more surrogate models are developed, the team will also look to enhance model generality and accuracy through in-situ fine-tuning. This will enable RedHawk-SC users to benefit from faster simulation workflows, access to a broader design space and the ability to refine models with their own data to foster innovation and safety in product development.

To see the joint demonstration of 3D-IC multiphysics visualization using NVIDIA Omniverse APIs, visit Ansys at the Design Automation Conference, running June 23-27, in San Francisco at booth 1308 or watch the presentation at the Exhibitor Forum.

Crack the Case With ‘Tell Me Why’ and ‘As Dusk Falls’ on GeForce NOW

Sit back and settle in for some epic storytelling. Tell Me Why and As Dusk Falls — award-winning, narrative-driven games from Xbox Game Studios — add to the 1,900+ games in the GeForce NOW library, ready to stream from the cloud.

Members can find more adventures with four new titles available this week.

Experience a Metallica concert like no other in “Metallica: Fuel. Fire. Fury.” This journey through six fan-favorite songs features gameplay that matches the intensity. “Metallica: Fuel. Fire. Fury.” will have six different showtimes running June 22-23 in Fortnite. Anyone can get a front-row seat to the interactive music experience by streaming on their mobile device, powered by GeForce NOW.

Unravel the Mystery

Whether uncovering family mysteries in Alaska or navigating small-town secrets in Arizona, gamers are set to be drawn into richly woven stories with Tell Me Why and As Dusk Falls joining the cloud this week.

Tell Me Why on GeForce NOW
Ain’t nothing but a great game.

Tell Me Why — an episodic adventure game from Dontnod Entertainment, the creators of the beloved Life Is Strange series — follows twins Tyler and Alyson Ronan as they reunite after a decade to uncover the mysteries of their troubled childhoods in the fictional town of Delos Crossing, Alaska. Experience true-to-life characters, mature themes and gripping choices.

As Dusk Falls on GeForce NOW
Every family has secrets.

Dive into the intertwined lives of two families over three decades in As Dusk Falls from INTERIOR/NIGHT. Set in small-town Arizona in the 1990s, the game’s unique art style blends 2D character illustrations with 3D environments, creating a visually striking experience. Players’ choices significantly impact the storyline, making each playthrough unique.

GeForce NOW members can now stream these award-winning titles on a variety of devices, including PCs, Macs, SHIELD TVs and Android devices. Upgrade to a Priority or Ultimate membership to enjoy enhanced streaming quality and performance, including up to 4K resolution and 120 frames per second on supported devices. Jump into these emotionally rich narratives and discover the power of choice in shaping the characters’ destinies.

Wake Up to New Games

Still Wakes the Deep on GeForce NOW
Run!

In Still Wakes the Deep from The Chinese Room and Secret Mode, play as an offshore oil rig worker fighting for dear life through a vicious storm, perilous surroundings and the dark, freezing North Sea waters. All lines of communication have been severed. All exits are gone. All that remains is the need to face the unknowable horror aboard. Live the terror and escape the rig, all from the cloud.

Check out the list of new games this week:

  • Still Wakes the Deep (New release on Steam and Xbox, available on PC Game Pass, June 18)
  • Skye: The Misty Isle (New release on Steam, June 19)
  • As Dusk Falls (Steam and Xbox, available on PC Game Pass)
  • Tell Me Why (Steam and Xbox, available on PC Game Pass)

Greetings From GFN

Make sure to catch #GreetingsFromGFN.

Plus, #GreetingsFromGFN continues on @NVIDIAGFN social media accounts, with members sharing their favorite locations to visit in the cloud.

What are you planning to play this weekend? Let us know on X or in the comments below.

Decoding How NVIDIA AI Workbench Powers App Development

Editor’s note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible and showcases new hardware, software, tools and accelerations for NVIDIA RTX PC and workstation users.

The demand for tools to simplify and optimize generative AI development is skyrocketing. Applications based on retrieval-augmented generation (RAG) — a technique for enhancing the accuracy and reliability of generative AI models with facts fetched from specified external sources — and customized models are enabling developers to tune AI models to their specific needs.
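The RAG pattern itself fits in a few lines: embed the documents, retrieve the most relevant ones for a query, and feed them to the model alongside the question. The sketch below uses a toy bag-of-words retriever in place of a real embedding model and vector database, and the documents are invented for illustration.

```python
from collections import Counter
import math

DOCS = [  # stand-ins for a user's indexed documents
    "NVIDIA AI Workbench simplifies GPU development environments.",
    "Retrieval-augmented generation grounds model answers in external facts.",
    "GeForce NOW streams games from the cloud.",
]

def embed(text):
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = lambda v: math.sqrt(sum(c * c for c in v.values()))
    return dot / ((norm(a) * norm(b)) or 1.0)

def retrieve(query, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query):
    """Assemble the augmented prompt an LLM would receive."""
    context = "\n".join(retrieve(query))
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How does retrieval-augmented generation work?"))
```

A production pipeline swaps the toy embedding for a neural embedding model and the list scan for a vector database, but the shape of the flow is the same.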

While such work may have required a complex setup in the past, new tools are making it easier than ever.

NVIDIA AI Workbench simplifies AI developer workflows by helping users build their own RAG projects, customize models and more. It’s part of the RTX AI Toolkit — a suite of tools and software development kits for customizing, optimizing and deploying AI capabilities — launched at COMPUTEX earlier this month. AI Workbench removes the complexity of technical tasks that can derail experts and halt beginners.

What Is NVIDIA AI Workbench?

Available for free, NVIDIA AI Workbench enables users to develop, experiment with, test and prototype AI applications across GPU systems of their choice — from laptops and workstations to data centers and the cloud. It offers a new approach for creating, using and sharing GPU-enabled development environments across people and systems.

A simple installation gets users up and running with AI Workbench on a local or remote machine in just minutes. Users can then start a new project or replicate one from the examples on GitHub. Everything works through GitHub or GitLab, so users can easily collaborate and distribute work. Learn more about getting started with AI Workbench.

How AI Workbench Helps Address AI Project Challenges

Developing AI workloads can require manual, often complex processes, right from the start.

Setting up GPUs, updating drivers and managing versioning incompatibilities can be cumbersome. Reproducing projects across different systems can require replicating manual processes over and over. Inconsistencies when replicating projects, like issues with data fragmentation and version control, can hinder collaboration. Varied setup processes, moving credentials and secrets, and changes in the environment, data, models and file locations can all limit the portability of projects.

AI Workbench makes it easier for data scientists and developers to manage their work and collaborate across heterogeneous platforms. It integrates and automates various aspects of the development process, offering:

  • Ease of setup: AI Workbench streamlines the process of setting up a developer environment that’s GPU-accelerated, even for users with limited technical knowledge.
  • Seamless collaboration: AI Workbench integrates with version-control and project-management tools like GitHub and GitLab, reducing friction when collaborating.
  • Consistency when scaling from local to cloud: AI Workbench ensures consistency across multiple environments, supporting scaling up or down from local workstations or PCs to data centers or the cloud.

RAG for Documents, Easier Than Ever

NVIDIA offers sample development Workbench Projects to help users get started with AI Workbench. The hybrid RAG Workbench Project is one example: It runs a custom, text-based RAG web application with a user’s documents on their local workstation, PC or remote system.

Every Workbench Project runs in a “container” — software that includes all the necessary components to run the AI application. The hybrid RAG sample pairs a Gradio chat interface frontend on the host machine with a containerized RAG server — the backend that services a user’s request and routes queries to and from the vector database and the selected large language model.

This Workbench Project supports a wide variety of LLMs available on NVIDIA’s GitHub page. Plus, the hybrid nature of the project lets users select where to run inference.

Workbench Projects let users version the development environment and code.

Developers can run the embedding model on the host machine and run inference locally on a Hugging Face Text Generation Inference server, on target cloud resources using NVIDIA inference endpoints like the NVIDIA API catalog, or with self-hosting microservices such as NVIDIA NIM or third-party services.
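At its simplest, that choice of inference location is a matter of routing each request to a different serving endpoint. A minimal sketch of such a dispatcher follows; the mode names and endpoint URLs here are invented for illustration and are not the project's actual configuration.

```python
# Hypothetical endpoint table -- the URLs and mode names are placeholders,
# not the hybrid RAG Workbench Project's real configuration.
ENDPOINTS = {
    "local": "http://localhost:8080/generate",  # local inference server
    "cloud": "https://example-cloud-endpoint/v1",  # cloud API endpoint
    "nim":   "http://localhost:8000/v1",        # self-hosted microservice
}

def resolve_endpoint(mode: str) -> str:
    """Map a user-selected inference mode to its serving endpoint."""
    try:
        return ENDPOINTS[mode]
    except KeyError:
        raise ValueError(f"unknown inference mode: {mode!r}")

print(resolve_endpoint("local"))
```

Keeping the routing table in one place is what lets the rest of the application stay identical whether inference happens on the host GPU or in the cloud.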

The hybrid RAG Workbench Project also includes:

  • Performance metrics: Users can evaluate how RAG- and non-RAG-based user queries perform across each inference mode. Tracked metrics include Retrieval Time, Time to First Token (TTFT) and Token Velocity.
  • Retrieval transparency: A panel shows the exact snippets of text — retrieved from the most contextually relevant content in the vector database — that are being fed into the LLM and improving the response’s relevance to a user’s query.
  • Response customization: Responses can be tweaked with a variety of parameters, such as maximum tokens to generate, temperature and frequency penalty.
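Metrics like these fall out of a few timestamps captured around a streaming response. Below is a minimal sketch of the arithmetic; the timing numbers in the example are fabricated for illustration.

```python
def stream_metrics(request_t, retrieval_done_t, token_times):
    """Compute RAG latency metrics from request-side timestamps.

    token_times: arrival time of each generated token, in seconds.
    """
    ttft = token_times[0] - request_t                # time to first token
    duration = token_times[-1] - token_times[0]      # span of generation
    velocity = (len(token_times) - 1) / duration if duration else 0.0
    return {
        "retrieval_time_s": retrieval_done_t - request_t,
        "ttft_s": ttft,
        "tokens_per_s": velocity,
    }

# Fabricated example: request at t=0, retrieval done at 0.12 s,
# five tokens arriving between 0.50 s and 0.90 s.
m = stream_metrics(0.0, 0.12, [0.50, 0.60, 0.70, 0.80, 0.90])
print(m)
```

Comparing these numbers across local, cloud and microservice inference modes is exactly how the project lets users weigh latency against throughput.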

To get started with this project, simply install AI Workbench on a local system. The hybrid RAG Workbench Project can be brought from GitHub into the user’s account and duplicated to the local system.

More resources are available in the AI Decoded user guide. In addition, community members provide helpful video tutorials, like the one from Joe Freeman below.

Customize, Optimize, Deploy

Developers often seek to customize AI models for specific use cases. Fine-tuning, a technique that changes the model by training it with additional data, can be useful for style transfer or changing model behavior. AI Workbench helps with fine-tuning, as well.

The Llama-factory AI Workbench Project enables QLoRA, a fine-tuning method that minimizes memory requirements, for a variety of models, as well as model quantization via a simple graphical user interface. Developers can use public or their own datasets to meet the needs of their applications.
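The low-rank idea behind (Q)LoRA is compact enough to show directly: instead of updating a full weight matrix W, training adjusts two small matrices A and B whose product forms the update. The NumPy sketch below covers the math only; QLoRA additionally keeps W in 4-bit precision, which is omitted here, and the dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 512, 8                            # hidden size, LoRA rank (r << d)
W = rng.standard_normal((d, d))          # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection, init 0
alpha = 16.0                             # LoRA scaling factor

def lora_forward(x):
    """y = x W^T + (alpha/r) * x A^T B^T -- only A and B are trained."""
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

x = rng.standard_normal((1, d))
# With B initialized to zero, the adapter starts as an exact no-op:
print(np.allclose(lora_forward(x), x @ W.T))
```

Here the adapter trains 2·r·d parameters instead of d², which is why the memory footprint of fine-tuning drops so sharply.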

Once fine-tuning is complete, the model can be quantized for improved performance and a smaller memory footprint, then deployed to native Windows applications for local inference or to NVIDIA NIM for cloud inference. Find a complete tutorial for this project on the NVIDIA RTX AI Toolkit repository.
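A post-training quantization step like the one described above can be sketched as symmetric int8 rounding: store weights as 8-bit integers plus one scale factor. This is a generic illustration of the idea, not the toolkit's actual quantizer.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: w ~= scale * q."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print(q.nbytes / w.nbytes)  # int8 storage is 4x smaller than float32
print(float(np.abs(w - w_hat).max()) <= scale / 2 + 1e-6)
```

The reconstruction error stays within half a quantization step, which is the trade that buys the smaller memory footprint and faster inference.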

Truly Hybrid — Run AI Workloads Anywhere

The hybrid RAG Workbench Project described above is hybrid in more than one way. In addition to offering a choice of inference mode, the project can be run locally on NVIDIA RTX workstations and GeForce RTX PCs, or scaled up to remote cloud servers and data centers.

The ability to run projects on systems of the user’s choice — without the overhead of setting up the infrastructure — extends to all Workbench Projects. Find more examples and instructions for fine-tuning and customization in the AI Workbench quick-start guide.

Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what’s new and what’s next by subscribing to the AI Decoded newsletter.
