Matice Founder and Harvard Professor Jessica Whited on Harnessing Regenerative Species – and AI – for Medical Breakthroughs

Scientists at Matice Biosciences are using AI to study the regeneration of tissues in animals known as super-regenerators, such as salamanders and planarians.

The goal of the research is to develop new treatments that will help humans heal from injuries without scarring.

On the latest episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with Jessica Whited, a regenerative biologist at Harvard University and co-founder of Matice Biosciences.

Whited was inspired to start the company after her son suffered a severe injury while riding his bike.

She realized that while her work had ultimately been dedicated to limb regeneration, it had also produced a wealth of information that could be harnessed into topical treatments, putting regenerative science in the hands of everyday people like her son, who would no longer have to live with the physical scars of their trauma.

This led her to investigate the connection between regeneration and scarring.

Whited and her team are using AI to analyze the molecular and cellular mechanisms that control regeneration and scarring in super-regenerators.

They believe that by understanding these mechanisms, they can develop new treatments to help humans heal from injuries without scarring.

To learn more about Matice, please visit www.maticebio.com or follow along on Instagram, Twitter, Facebook and LinkedIn.

You Might Also Like

Jules Anh Tuan Nguyen Explains How AI Lets Amputee Control Prosthetic Hand, Video Games

A postdoctoral researcher at the University of Minnesota discusses his efforts to allow amputees to control their prosthetic limb — right down to the finger motions — with their minds.

Overjet’s Wardah Inam on Bringing AI to Dentistry

Overjet, a member of NVIDIA Inception, is moving fast to bring AI to dentists’ offices. Dr. Wardah Inam, CEO of the company, discusses using AI to improve patient care.

Immunai CTO and Co-Founder Luis Voloch on Using Deep Learning to Develop New Drugs

Luis Voloch, co-founder and chief technology officer of Immunai, talks about tackling the challenges of the immune system with a machine learning and data science mindset.

Subscribe to the AI Podcast: Now Available on Amazon Music

The AI Podcast is now available through Amazon Music.

In addition, get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better. Have a few minutes to spare? Fill out this listener survey.

Featured Image Credit: Matice Biosciences

Read More

NVIDIA H100 GPUs Set Standard for Generative AI in Debut MLPerf Benchmark

Leading users and industry-standard benchmarks agree: NVIDIA H100 Tensor Core GPUs deliver the best AI performance, especially on the large language models (LLMs) powering generative AI.

H100 GPUs set new records on all eight tests in the latest MLPerf training benchmarks released today, excelling on a new MLPerf test for generative AI. That excellence is delivered both per-accelerator and at-scale in massive servers.

For example, on a commercially available cluster of 3,584 H100 GPUs co-developed by startup Inflection AI and operated by CoreWeave, a cloud service provider specializing in GPU-accelerated workloads, the system completed the massive GPT-3-based training benchmark in less than eleven minutes.

“Our customers are building state-of-the-art generative AI and LLMs at scale today, thanks to our thousands of H100 GPUs on fast, low-latency InfiniBand networks,” said Brian Venturo, co-founder and CTO of CoreWeave. “Our joint MLPerf submission with NVIDIA clearly demonstrates the great performance our customers enjoy.”

Top Performance Available Today

Inflection AI harnessed that performance to build the advanced LLM behind its first personal AI, Pi, which stands for personal intelligence. The company will act as an AI studio, creating personal AIs users can interact with in simple, natural ways.

“Anyone can experience the power of a personal AI today based on our state-of-the-art large language model that was trained on CoreWeave’s powerful network of H100 GPUs,” said Mustafa Suleyman, CEO of Inflection AI.

Co-founded in early 2022 by Suleyman and Karén Simonyan, both formerly of DeepMind, along with LinkedIn co-founder Reid Hoffman, Inflection AI aims to work with CoreWeave to build one of the largest computing clusters in the world using NVIDIA GPUs.

Tale of the Tape

These user experiences reflect the performance demonstrated in the MLPerf benchmarks announced today.

NVIDIA wins all eight tests in MLPerf Training v3.0

H100 GPUs delivered the highest performance on every benchmark, including large language models, recommenders, computer vision, medical imaging and speech recognition. They were the only chips to run all eight tests, demonstrating the versatility of the NVIDIA AI platform.

Excellence Running at Scale

Training is typically a job run at scale by many GPUs working in tandem. On every MLPerf test, H100 GPUs set new at-scale performance records for AI training.

Optimizations across the full technology stack enabled near linear performance scaling on the demanding LLM test as submissions scaled from hundreds to thousands of H100 GPUs.

NVIDIA demonstrates efficiency at scale in MLPerf Training v3.0

In addition, CoreWeave delivered performance from the cloud similar to what NVIDIA achieved from an AI supercomputer running in a local data center. That’s a testament to the low-latency NVIDIA Quantum-2 InfiniBand networking CoreWeave uses.

In this round, MLPerf also updated its benchmark for recommendation systems.

The new test uses a larger data set and a more modern AI model to better reflect the challenges cloud service providers face. NVIDIA was the only company to submit results on the enhanced benchmark.

An Expanding NVIDIA AI Ecosystem

Nearly a dozen companies submitted results on the NVIDIA platform in this round. Their work shows NVIDIA AI is backed by the industry’s broadest ecosystem in machine learning.

Submissions came from major system makers that include ASUS, Dell Technologies, GIGABYTE, Lenovo, and QCT. More than 30 submissions ran on H100 GPUs.

This level of participation lets users know they can get great performance with NVIDIA AI both in the cloud and in servers running in their own data centers.

Performance Across All Workloads

NVIDIA ecosystem partners participate in MLPerf because they know it’s a valuable tool for customers evaluating AI platforms and vendors.

The benchmarks cover workloads users care about — computer vision, translation and reinforcement learning, in addition to generative AI and recommendation systems.

Users can rely on MLPerf results to make informed buying decisions, because the tests are transparent and objective. The benchmarks enjoy backing from a broad group that includes Arm, Baidu, Facebook AI, Google, Harvard, Intel, Microsoft, Stanford and the University of Toronto.

MLPerf results are available today on H100, L4 and NVIDIA Jetson platforms across AI training, inference and HPC benchmarks. We’ll be making submissions on NVIDIA Grace Hopper systems in future MLPerf rounds as well.

The Importance of Energy Efficiency

As AI’s performance requirements grow, it’s essential to expand the efficiency of how that performance is achieved. That’s what accelerated computing does.

Data centers accelerated with NVIDIA GPUs use fewer server nodes, so they use less rack space and energy. In addition, accelerated networking boosts efficiency and performance, and ongoing software optimizations bring x-factor gains on the same hardware.

Energy-efficient performance is good for the planet and business, too. Increased performance can speed time to market and let organizations build more advanced applications.

Energy efficiency also reduces costs. Indeed, NVIDIA powers 22 of the top 30 supercomputers on the latest Green500 list.

Software Available to All

NVIDIA AI Enterprise, the software layer of the NVIDIA AI platform, enables optimized performance on leading accelerated computing infrastructure. The software comes with the enterprise-grade support, security and reliability required to run AI in the corporate data center.

All the software used for these tests is available from the MLPerf repository, so virtually anyone can get these world-class results.

Optimizations are continuously folded into containers available on NGC, NVIDIA’s catalog for GPU-accelerated software.

Read this technical blog for a deeper dive into the optimizations fueling NVIDIA’s MLPerf performance and efficiency.

Read More

Meet the Omnivore: Startup Develops App Letting Users Turn Objects Into 3D Models With Just a Smartphone

Editor’s note: This post is a part of our Meet the Omnivore series, which features individual creators and developers who accelerate 3D workflows and create virtual worlds using NVIDIA Omniverse, a development platform built on Universal Scene Description, aka OpenUSD.

As augmented reality (AR) becomes more prominent and accessible across the globe, Kiryl Sidarchuk is helping to erase the border between the real and virtual worlds.

Kiryl Sidarchuk

Sidarchuk is co-founder and CEO of AR-Generation, a member of the NVIDIA Inception program for cutting-edge startups. His company developed MagiScan, an AI-based 3D scanner app.

It lets users capture any object with their smartphone camera and quickly creates a high-quality, detailed 3D model of it for use in any AR or metaverse application.

AR-Generation now offers an extension that enables direct export of 3D models from MagiScan to NVIDIA Omniverse, a development platform for connecting and building 3D tools and metaverse applications.

This is made possible with speed and ease by Universal Scene Description, aka OpenUSD, an extensible framework that serves as a common language between digital content-creation tools.
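
To make the idea concrete, here’s a minimal sketch of what writing a mesh to OpenUSD looks like with the open-source pxr Python API. It is illustrative only, not MagiScan’s actual export code, and the file and prim names are hypothetical:

```python
from pxr import Usd, UsdGeom

# Create a new USD layer on disk (hypothetical file name).
stage = Usd.Stage.CreateNew("scanned_object.usda")
mesh = UsdGeom.Mesh.Define(stage, "/ScannedObject")

# A single triangle stands in for the dense mesh a scanner app would produce.
mesh.CreatePointsAttr([(0, 0, 0), (1, 0, 0), (0, 1, 0)])
mesh.CreateFaceVertexCountsAttr([3])         # one face with three vertices
mesh.CreateFaceVertexIndicesAttr([0, 1, 2])  # indices into the points array

stage.SetDefaultPrim(mesh.GetPrim())  # lets other tools open the file directly
stage.GetRootLayer().Save()           # any USD-aware app, Omniverse included, can read this
```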

“Augmented reality will become an integral part of everyday life,” said Sidarchuk, who’s based in Nicosia, Cyprus. “We customized our app to allow export of 3D models based on real-world objects directly to Omniverse, enabling users to showcase the models in AR and integrate them into any metaverse or game.”

Omniverse extensions are core building blocks that let anyone create and extend functions of Omniverse apps using the popular Python or C++ programming languages.
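
For a sense of what that looks like in practice, here’s a minimal sketch of a Python extension’s entry point, following the publicly documented omni.ext pattern; the class name and log messages are hypothetical:

```python
import omni.ext

# Minimal sketch of an Omniverse extension entry point (hypothetical example).
# Omniverse discovers the IExt subclass and calls these hooks when the
# extension is enabled or disabled in the app.
class ExampleExtension(omni.ext.IExt):
    def on_startup(self, ext_id: str):
        # Runs when the extension is enabled; create UI, subscriptions, etc. here.
        print(f"[example.extension] startup, id={ext_id}")

    def on_shutdown(self):
        # Runs when the extension is disabled; release any resources here.
        print("[example.extension] shutdown")
```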

It was simple and convenient for AR-Generation to build the extension, Sidarchuk said, thanks to easily accessible documentation, as well as technical guidance from NVIDIA teams, free AWS credits and networking opportunities with other AI-driven companies — all benefits of being a part of NVIDIA Inception.

Capture, Click and Create 3D Models From Real-World Objects 

Sidarchuk estimates that MagiScan can create 3D models of objects 10x faster and at up to 100x lower cost than a designer modeling them manually.

This frees creators up to focus on fine-tuning their work and makes AR more accessible to all through a simple app.

AR-Generation chose to build an extension for Omniverse because the platform “provides a convenient environment that integrates all the tools for working with 3D and generative AI,” said Sidarchuk. “Plus, we can collaborate and exchange ideas with colleagues in real time.”

Export 3D models from MagiScan to Omniverse with OpenUSD.

Sidarchuk’s favorite feature of Omniverse is its OpenUSD compatibility, which enables seamless interchange of 3D data between creative applications. “OpenUSD is the format of the future,” he said.

Based on this framework, the MagiScan extension for Omniverse enables fast, affordable creation of high-quality 3D models for any object. MagiScan is available for download on iOS and Android devices.

“It can help everyone from individuals to large corporations save time and money in digitalization,” said Sidarchuk, who claims his first word as a toddler was “money.”

The business-oriented developer started his first company at age 16. It was a one-man endeavor, buying fresh fruits and vegetables from a small village and selling them in Minsk, the capital of Belarus. “That’s how I earned enough to buy my first car,” he mused.

More than a dozen years later, when he’s not working to “enhance human capabilities through augmented-reality technologies,” Sidarchuk spends his free time with his five-year-old daughter, Aurora.

Watch Sidarchuk discuss 3D modeling, AI and AR on a replay of his Omniverse livestream on demand, and learn more about the MagiScan extension for Omniverse.

Join In on the Creation

Anyone can build their own Omniverse extension or Connector to enhance their 3D workflows and tools. Creators and developers across the world can download NVIDIA Omniverse for free, and enterprise teams can use the platform for their 3D projects.

Check out artwork from other “Omnivores” and submit projects in the gallery. Connect your workflows to Omniverse with software from Adobe, Autodesk, Epic Games, Maxon, Reallusion and more.

Get started with NVIDIA Omniverse by downloading the standard license free, or learn how Omniverse Enterprise can connect your team. Developers can get started with Omniverse resources and learn about OpenUSD. Explore the growing ecosystem of 3D tools connected to Omniverse.

Stay up to date on the platform by subscribing to the newsletter, and follow NVIDIA Omniverse on Instagram, Medium and Twitter. For more, join the Omniverse community and check out the Omniverse forums, Discord server, Twitch and YouTube channels. 

Read More

Quicker Cures: How Insilico Medicine Uses Generative AI to Accelerate Drug Discovery

While generative AI is a relatively new household term, drug discovery company Insilico Medicine has been using it for years to develop new therapies for debilitating diseases.

The company’s early bet on deep learning is bearing fruit — a drug candidate discovered using its AI platform is now entering Phase 2 clinical trials to treat idiopathic pulmonary fibrosis, a relatively rare respiratory disease that causes progressive decline in lung function.

Insilico used generative AI for each step of the preclinical drug discovery process: to identify a molecule that a drug compound could target, generate novel drug candidates, gauge how well these candidates would bind with the target, and even predict the outcome of clinical trials.

Doing this using traditional methods would have cost more than $400 million and taken up to six years. But with generative AI, Insilico accomplished these steps for one-tenth of the cost and in one-third of the time, reaching the first phase of clinical trials just two and a half years after beginning the project.

“This first drug candidate that’s going to Phase 2 is a true highlight of our end-to-end approach to bridge biology and chemistry with deep learning,” said Alex Zhavoronkov, CEO of Insilico Medicine. “This is a significant milestone not only for us, but for everyone in the field of AI-accelerated drug discovery.”

Insilico is a premier member of NVIDIA Inception, a free program that provides cutting-edge startups with technical training, go-to-market support and AI platform guidance. The company uses NVIDIA Tensor Core GPUs in its generative AI drug design engine, Chemistry42, to generate novel molecular structures — and was one of the first adopters of an early precursor to NVIDIA DGX systems in 2015.

AI Enables End-to-End Preclinical Drug Discovery

Insilico’s Pharma.AI platform includes multiple AI models trained on millions of data samples for a range of tasks. One AI tool, PandaOmics, rapidly identifies and prioritizes targets that play a significant role in a disease’s progression — like the infamous spike protein on the virus that causes COVID-19.

Within days, the Chemistry42 engine can design new potential drug compounds that target the protein identified by PandaOmics. The generative chemistry tool uses deep learning to come up with drug-like molecular structures from scratch.
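
Insilico’s engine is proprietary, but the general “generate, then filter for drug-likeness” loop can be illustrated with open-source tools. Below is a rough sketch that scores candidate molecules with RDKit’s QED metric; it is a generic stand-in, not Chemistry42, and assumes candidates arrive as SMILES strings:

```python
# Illustrative only: scoring generated molecules for drug-likeness with RDKit.
# A generic open-source stand-in, not Insilico's Chemistry42 engine.
from rdkit import Chem
from rdkit.Chem import QED

# Pretend these SMILES strings came from a generative model.
candidates = ["CCO", "c1ccccc1C(=O)O", "CC(=O)Nc1ccc(O)cc1"]

for smiles in candidates:
    mol = Chem.MolFromSmiles(smiles)   # returns None for invalid structures
    if mol is None:
        continue
    score = QED.qed(mol)               # quantitative estimate of drug-likeness, 0..1
    print(f"{smiles}: QED = {score:.2f}")
```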

“Typically, AI companies in drug discovery focus either on biology or on chemistry,” said Petrina Kamya, head of AI platforms at Insilico. “From the start, Insilico has been applying the same deep learning approach to both fields, using AI both to discover drug targets and generate chemical structures of small molecules.”

Over the years, the Insilico team has adopted different kinds of deep neural networks for drug discovery, including generative adversarial networks and transformer models. They’re now using NVIDIA BioNeMo to accelerate the early drug discovery process with generative AI.

Finding the Needle in the AI Stack

To develop its pulmonary fibrosis drug candidate, Insilico used Pharma.AI to design and synthesize about 80 molecules, achieving unprecedented success rates for preclinical drug candidates. The process — from identifying the target to nominating a promising drug candidate for trials — took under 18 months.

During Phase 2 clinical trials, Insilico’s pulmonary fibrosis drug will be tested in several hundred people with the condition in the U.S. and China. The process will take several months — but in parallel, the company has more than 30 programs in the pipeline to target other diseases, including a number of cancer drugs.

“When we first presented our results, people just did not believe that generative AI systems could achieve this level of diversity, novelty and accuracy,” said Zhavoronkov. “Now that we have an entire pipeline of promising drug candidates, people are realizing that this actually works.”

Learn more about Insilico Medicine’s Chemistry42 platform for AI-accelerated drug candidate screening in this talk from NVIDIA GTC.

Subscribe to NVIDIA healthcare news and generative AI news.

Read More

Deep Learning Digs Deep: AI Unveils New Large-Scale Images in Peruvian Desert

Researchers at Yamagata University in Japan have harnessed AI to uncover four previously unseen geoglyphs — images on the ground, some as wide as 1,200 feet, made using the land’s elements — in Nazca, a seven-hour drive south of Lima, Peru.

The geoglyphs — a humanoid, a pair of legs, a fish and a bird — were revealed using a deep learning model, making the discovery process significantly faster than traditional archaeological methods.

The team’s deep learning model training was executed on an IBM Power Systems server with an NVIDIA GPU.

Using open-source deep learning software, the researchers analyzed high-resolution aerial photographs as part of a study that began in November 2019.
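
The post doesn’t detail the team’s exact pipeline, but the general approach of sweeping a trained classifier across large aerial photographs in tiles can be sketched as follows; the model, tile size and threshold are placeholders, not the study’s actual values:

```python
# Generic sketch of tile-based scanning of a large aerial photograph with a CNN.
# Placeholder model and sizes; not the Yamagata team's actual pipeline.
import torch
from torchvision.models import resnet18

model = resnet18(num_classes=2)  # in practice: trained on labeled geoglyph tiles
model.eval()

def scan_for_candidates(image: torch.Tensor, tile: int = 256, stride: int = 128):
    """Slide a window over a (3, H, W) image and yield tiles flagged as candidates."""
    _, h, w = image.shape
    with torch.no_grad():
        for y in range(0, h - tile + 1, stride):
            for x in range(0, w - tile + 1, stride):
                patch = image[:, y:y + tile, x:x + tile].unsqueeze(0)
                prob = torch.softmax(model(patch), dim=1)[0, 1]  # P(geoglyph)
                if prob > 0.9:
                    yield (x, y, float(prob))

# Example with random data standing in for an aerial photo:
for x, y, p in scan_for_candidates(torch.rand(3, 1024, 1024)):
    print(f"candidate tile at ({x}, {y}), score {p:.2f}")
```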

Published this month in the Journal of Archaeological Science, the study confirms the deep learning model’s findings through onsite surveys and highlights the potential of AI in accelerating archaeological discoveries.

The deep learning techniques that are the hallmark of modern AI are used in various archaeological efforts, whether analyzing ancient scrolls discovered across the Mediterranean or categorizing pottery sherds from the American Southwest.

The Nazca lines, a series of ancient geoglyphs dating from 500 B.C. to 500 A.D. — most likely between 100 B.C. and 300 A.D. — were created by removing darker stones on the desert floor to reveal the lighter-colored sand beneath.

The drawings — depicting animals, plants, geometric shapes and more — are thought to have had religious or astronomical significance to the Nazca people who created them.

The discovery of these new geoglyphs indicates the possibility of more undiscovered sites in the area.

And it underscores how technology like deep learning can enhance archaeological exploration, providing a more efficient approach to uncovering hidden archaeological sites.

Read the full paper.

Featured image courtesy of Wikimedia Commons.

Read More

Scientists Improve Delirium Detection Using AI and Rapid-Response EEGs

Detecting delirium isn’t easy, but it can have a big payoff: speeding essential care to patients, leading to quicker and surer recovery.

Improved detection also reduces the need for long-term skilled care, enhancing the quality of life for patients while decreasing a major financial burden. In the U.S., caring for those suffering from delirium costs up to $64,000 a year per patient, according to the National Institutes of Health.

In a paper published last month in Scientific Reports, a Nature Portfolio journal, researchers describe how they used a deep learning model called Vision Transformer, accelerated by NVIDIA GPUs, alongside a rapid-response electroencephalogram, or EEG, device to detect delirium in critically ill older adults.

The paper, called “Supervised deep learning with vision transformer predicts delirium using limited lead EEG,” is authored by Malissa Mulkey of the University of South Carolina, Huyunting Huang of Purdue University, Thomas Albanese and Sunghan Kim of East Carolina University, and Baijian Yang of Purdue.

Their innovative approach achieved a testing accuracy rate of 97%, promising a potential breakthrough in forecasting delirium. And by harnessing AI and EEGs, the researchers could objectively evaluate prevention and treatment methods, leading to better care.

This impressive result is due in part to the accelerated performance of NVIDIA GPUs, enabling the researchers to accomplish their tasks in half the time compared to CPUs.

Delirium affects up to 80% of critically ill patients. Yet conventional clinical detection methods identify fewer than 40% of cases — representing a significant gap in patient care. Presently, screening ICU patients involves a subjective bedside assessment.

The introduction of handheld EEG devices could make screening more accurate and affordable, but the lack of skilled technicians and neurologists poses a challenge.

The use of AI, however, can eliminate the need for a neurologist to interpret findings and allow for the detection of changes associated with delirium roughly two days before symptom onset, when patients are more receptive to treatment. It also makes it possible to use EEGs with minimal training.

The researchers applied ViT, a vision transformer that adapts the transformer architecture originally created for natural language processing, accelerated by NVIDIA GPUs, to EEG data — offering a fresh approach to data interpretation.
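
The post doesn’t spell out the preprocessing, but the core step of fine-tuning a vision transformer as a two-class (delirium vs. no delirium) classifier can be sketched with torchvision; the input shape, labels and training step below are placeholders, assuming EEG windows have already been rendered as images:

```python
# Minimal sketch: repurposing a vision transformer for two-class EEG classification.
# Assumes EEG windows are pre-rendered as 224x224 images; all data here is a random
# placeholder -- this is not the study's actual preprocessing or training code.
import torch
from torchvision.models import vit_b_16

model = vit_b_16(weights=None)  # or start from pretrained weights, then fine-tune
model.heads.head = torch.nn.Linear(model.heads.head.in_features, 2)  # delirium / none

images = torch.rand(8, 3, 224, 224)   # stand-in batch of EEG "images"
labels = torch.randint(0, 2, (8,))    # stand-in labels

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
optimizer.zero_grad()
loss = torch.nn.functional.cross_entropy(model(images), labels)
loss.backward()
optimizer.step()
print(f"one training step done, loss = {loss.item():.3f}")
```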

The use of a handheld rapid-response EEG device, which doesn’t require large EEG machines or specialized technicians, was another noteworthy study finding.

This practical tool, combined with advanced AI models for interpreting the data they collect, could streamline delirium screenings in critical care units.

The research presents a promising method for delirium detection that could shorten hospital stays, increase discharge rates, decrease mortality rates and reduce the financial burden associated with delirium.

By integrating the power of NVIDIA GPUs with innovative deep learning models and practical medical devices, this study underlines the transformative potential of technology in enhancing patient care.

As AI grows and develops, medical professionals are increasingly likely to rely on it to forecast conditions like delirium and intervene early, revolutionizing the future of critical care.

Read the full paper.

Read More

A Golden Age: ‘Age of Empires III’ Joins GeForce NOW

Conquer the lands in Microsoft’s award-winning Age of Empires III: Definitive Edition. It leads 10 new games supported today on GeForce NOW.

At Your Command

Age of Empires III on GeForce NOW
Stream battles all from the cloud.

Age of Empires III: Definitive Edition is a remaster of one of the most beloved real-time strategy franchises featuring improved visuals, enhanced gameplay, cross-platform multiplayer and more. Command mighty civilizations from across Europe and the Americas or jump to the battlefields of Asia. Members can experience two new game modes: Historical Battles and The Art of War Challenge Missions. Two new nations also join this edition — Sweden and the Inca — each with advantages for conquering the New World.

Build an empire today and stream across devices in glorious 4K resolution with an Ultimate membership.

Conquer Your Games List

Conqueror's Blade on GeForce NOW
Master the art of siege tactics in “Conqueror’s Blade” this week.

The GeForce NOW library is always expanding. Take a look at the 10 newly supported games this week.

  • Aliens: Dark Descent (New release on Steam, June 20)
  • Trepang2 (New release on Steam, June 21)
  • Forever Skies (New release on Steam, June 22)
  • Age of Empires III: Definitive Edition (Steam)
  • A.V.A Global (Steam)
  • Bloons TD 6 (Steam)
  • Conqueror’s Blade (Steam)
  • Layers of Fear (Steam)
  • Park Beyond (Steam)
  • Tom Clancy’s Rainbow Six Extraction (Steam)

Before diving into the weekend, let us know your answer to our question of the week on Twitter or in the comments below. Happy streaming!

Read More

Shell-e-brate Good Times in 3D With ‘Kingsletter’ This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

Amir Anbarestani, an accomplished 3D artist who goes by the moniker Kingsletter, had a “shell of a good time” creating his Space Turtle scene this week In the NVIDIA Studio.

Kingsletter has always harbored a fascination with 3D art, he said. As a child, he often enjoyed exploring and crafting within immersive environments. Whether it was playing with plasticine — putty-like modeling material — or creating pencil drawings, his innate inclination for self-expression always found resonance within the expansive domain of 3D.

Below, he shares his inspiration and creative process using ZBrush, Adobe Substance 3D Painter and Blender.

An NVIDIA DLSS 3 plug-in is now available in Unreal Engine 5, offering benefits including AI upscaling for higher frame rates, Super Resolution and more for GeForce RTX 40 Series owners.

And 3D creative app Marvelous Designer launched its NVIDIA Omniverse Connector this month, featured in the latest installment of Into the Omniverse. Learn how talented artists are using the Connector, along with the Universal Scene Description (“OpenUSD”) framework, to elevate their creative workflows.

NVIDIA DLSS 3 Plug-In Is Unreal — Engine 5

NVIDIA Studio released a DLSS 3 plug-in compatible with Unreal Engine 5. The Play in Editor tool is useful for game developers to quickly review gameplay in a level while editing — and DLSS 3 AI upscaling will unlock significantly higher frame rates on GeForce RTX 40 Series GPUs for even smoother previewing.

The NVIDIA DLSS 3 plug-in unlocks incredible visual detail in Unreal Engine 5.

Plus, select Unreal Engine viewports offer DLSS 2 Super Resolution and upscaling benefits in typical content-creation workflows like modeling, lighting, animation and more.

Download DLSS 3 for Unreal Engine 5.2, available now. Learn more about NVIDIA technologies supported by Unreal Engine 5.

Turtle Recall 

The process began with sketching and initial sculpting in ZBrush, where the concept of a floating turtle in space took shape and evolved into a dynamic shot of the creature soaring toward the camera.

“It’s remarkable how something as simple as shaping an idea’s basic form can be so immensely gratifying,” said Kingsletter on the blockout phase. “There’s a unique joy in starting with a blank canvas and gradually bringing the essence of a concept to life.”

Sketching and initial sculpting in ZBrush.

After finalizing the model in ZBrush, Kingsletter used ZRemesher to retopologize it, or generate a low-poly version suitable for the intended scene. This is useful for removing artifacts and other mesh issues before animation and rigging.

“NVIDIA graphics cards are industry leading in the creative community. I don’t think I know anyone that uses other GPUs.” — Kingsletter

The RizomUV UV-mapping software was then deployed to unwrap the model — the process of flattening a mesh into a 2D layout so a texture can cover the 3D object. This enables textures to be applied with precision, a common need for professional artists.

Next, Kingsletter applied surface details in Adobe Substance 3D Painter, from subtle dusting to extreme wear and tear, with materials mimicking real-world behaviors such as sheen, subsurface scattering and more. RTX-accelerated light and ambient occlusion baking produced fully baked models in mere seconds.

Textures added and baked rapidly in Adobe Substance 3D Painter.

Kingsletter then moved to Blender to animate the scene, setting up simple rigs and curves to bring the turtle’s flapping limbs and flight to life. Harnessing the potential of his MSI Creator Z17 HX Studio A13V NVIDIA Studio laptop with GeForce RTX 4070 graphics turtle-ly exceeded the artist’s lofty expectations.

The MSI Creator Z17 HX Studio laptop with GeForce RTX 4070 graphics.

“As a digital creative professional, I always strive to work with the best creative tools available,” Kingsletter said. “Choosing the MSI Creator laptop allowed me to exceed my creative professional needs and indulge in my passionate gaming hobby.”

He enriched the cosmic environment using Blender’s particle system, which scattered random debris, asteroids and a small, rotating planet throughout the outer-space scene. AI-powered, RTX-accelerated OptiX ray tracing unlocked buttery-smooth interactive animations in the viewport.

Create magnificent worlds in Blender accelerated by GeForce RTX graphics.

“Simulating smoke proved to be the most challenging aspect,” said Kingsletter about his first foray into this form of animation. “Through numerous trials and errors, I persevered until I achieved a truly satisfactory result.”

Realistic smoke elevated the 3D animation.

His RTX 4070 GPU facilitated smoother, more efficient rendering of the final visuals with RTX-accelerated OptiX ray tracing in Blender Cycles, ensuring the fastest final frame render.

When asked what he’d advise his younger artist self, Kingsletter said, “I’d enhance my observation skills. By immersing myself in the intricacies of form and paying careful attention to the world around me, I would have laid a stronger foundation for my creative journey.”

Wise words for all creators.

Digital 3D artist Kingsletter.

Check out Kingsletter’s beautiful 3D creations on Instagram.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.

Read More

Into the Omniverse: Universal Scene Description Support for Marvelous Designer Lets Users Tailor Digital Assets, Clothes for 3D Characters

Editor’s note: This post is part of Into the Omniverse, a monthly series focused on how artists, developers and enterprises can transform their workflows using the latest advances in Universal Scene Description and NVIDIA Omniverse.

Whether animating fish fins or fashioning chic outfits for digital characters, creators can tap Marvelous Designer software to compose and tailor assets, clothes and other materials for their 3D workflows.

Marvelous Designer recently launched an Omniverse Connector, a tool that enhances collaborative workflows that take place between its software and NVIDIA Omniverse, a development platform for connecting and building 3D tools and applications.

The Connector enables users to significantly speed and ease their design processes, thanks to its support for the Universal Scene Description framework, known as OpenUSD, which serves as a common language between 3D tools.

In a typical computer graphics pipeline, an artist needs to go back and forth between software applications while finalizing their work. The new Omniverse Connector enables creators to save time with Marvelous Designer’s improved import and export capabilities through OpenUSD.

In a recent livestream, 3D designer Brandon Yu shared how he’s using the new Connector and OpenUSD to improve his collaborative workflow, enhance productivity, expand creative possibilities and streamline his design process.

Mike Shawbrook, who has more than 150,000 subscribers on his MH Tutorials YouTube channel, walks through using the new Connector in the tutorial below. Shawbrook demonstrates how he set up a live session between Marvelous Designer and Omniverse to create a simple cloth blanket.

For more, check out this tutorial on using the new Connector and see how OpenUSD can improve 3D workflows:

Improved USD Compatibility

With the Marvelous Designer Omniverse Connector, users can harness the real-time rendering capabilities of Omniverse to visualize their garments in an interactive environment. This integration empowers creators to make informed design decisions, preview garments’ reactions to different lighting conditions and simulate realistic fabric behavior in real time.

The Connector’s expanded support for OpenUSD enables seamless interchange of 3D data between creative applications.

In the graphic above, an artist uses the new Connector to adjust 3D-animated fish fins, a key digital asset in an underwater scene.

Get Plugged Into the Omniverse 

To learn more about how OpenUSD can improve 3D workflows, check out a new video series on the file framework. The first installment covers four OpenUSD “superpowers.”

Anyone can build their own Omniverse extension or Connector to enhance their 3D workflows and tools.

Share your Marvelous Designer and Omniverse creations to the Omniverse gallery for a chance to be featured on NVIDIA social media channels.

Get started with NVIDIA Omniverse by downloading the standard license free, or learn how Omniverse Enterprise can connect your team. Developers can get started with Omniverse resources and learn about OpenUSD. Explore the growing ecosystem of 3D tools connected to Omniverse.

Stay up to date on the platform by subscribing to the newsletter, and follow NVIDIA Omniverse on Instagram, Medium and Twitter. For more, join the Omniverse community and check out the Omniverse forums, Discord server, Twitch and YouTube channels. 

Featured image courtesy of Marvelous Designer.

Read More

NVIDIA CEO: Creators Will Be “Supercharged” by Generative AI

Generative AI will “supercharge” creators across industries and content types, NVIDIA founder and CEO Jensen Huang said today at the Cannes Lions Festival, on the French Riviera.

“For the very first time, the creative process can be amplified in content generation, and the content generation could be in any modality — it could be text, images, 3D, videos,” Huang said in a conversation with Mark Read, CEO of WPP — the world’s largest marketing and communications services company.

Huang and Read backstage at Cannes Lions

At the event attended by thousands of creators, marketers and brand execs from around the world, Huang outlined the impact of AI on the $700 billion digital advertising industry. He also touched on the ways AI can enhance creators’ abilities, as well as the importance of responsible AI development.

“You can do content generation at scale, but infinite content doesn’t imply infinite creativity,” he said. “Through our thoughts, we have to direct this AI to generate content that has to be aligned to your values and your brand tone.”

The discussion followed Huang’s recent keynote at COMPUTEX, where NVIDIA and WPP announced a collaboration to develop a content engine powered by generative AI and the NVIDIA Omniverse platform for building and operating metaverse applications.

Driving Forces of the Generative AI Era

NVIDIA has been pushing the boundaries of graphics technology for 30 years and has been at the forefront of the AI revolution for a decade. This combination of expertise in graphics and AI uniquely positions the company to enable the new era of generative AI applications.

Huang said that “the biggest moment of modern AI” can be traced back to an academic contest in 2012, when a team of University of Toronto researchers led by Alex Krizhevsky showed that NVIDIA GPUs could train an AI model that recognized objects better than any computer vision algorithm that came before it.

Since then, developers have taught neural networks to recognize images, videos, speech, protein structures, physics and more.

“You could learn the language of almost anything,” Huang said. “Once you learn the language, you can apply the language — and the application of language is generation.”

Generative AI models can create text, pixels, 3D objects and realistic motion, giving professionals superpowers to more quickly bring their ideas to life. Like a creative director working with a team of artists, users can direct AI models with prompts, and fine-tune the output to align with their vision.

“You have to give the machine feedback like the best creative director,” Read said.

These tools aren’t a replacement for human creativity, Huang emphasized. They augment the skills of artists and marketing professionals to help them feed demand from clients by producing content more quickly and in multiple forms tailored to different audiences.

“We will democratize content generation,” Huang said.

Reimagining How We Live, Work and Create With AI

Generative AI’s key benefit for the creative industry is its ability to scale up content generation, rapidly generating options for text and visuals that can be used in advertising, marketing and film.

“In the old days, you’d create hundreds of different ad options that are retrieved based on the medium,” Huang said. “In the future, you won’t retrieve — you’ll generate billions of different ads. But every single one of them has to be tone appropriate, has to be brand perfect.”

For use by professional creators, these AI tools must also produce high-quality visuals that meet or exceed the standard of content captured through traditional methods.

It all starts with a digital twin, a true-to-reality simulation of a real-world physical asset. The NVIDIA Omniverse platform enables the creation of stunning, photorealistic visuals that accurately represent physics and materials — whether for images, videos, 3D objects or immersive virtual worlds.

“Omniverse is a virtual world,” Huang said. “We created a virtual world where AI could learn how to create an AI that’s physically based and grounded by physics.”  

“This virtual world has the ability to ingest assets and content that’s created by any tool, because we have this interface called USD,” he said, referring to the Universal Scene Description framework for collaborating in 3D. With it, artists and designers can combine assets developed using popular tools from companies like Adobe and Autodesk with virtual worlds developed using generative AI.

NVIDIA Picasso, a foundry for custom generative AI models for visual design unveiled earlier this year, also supports best-in-class image, video and 3D generative AI capabilities developed in collaboration with partners including Adobe, Getty Images and Shutterstock.

“We created a platform that makes it possible for our partners to train from data that was licensed properly from, for example, Getty, Shutterstock, Adobe,” Huang said. “They’re respectful of the content owners. The training data comes from that source, and whatever economic benefits come from that could accrete back to the creators.”

Like any groundbreaking technology, it’s critical that AI is developed and deployed thoughtfully, Read and Huang said. Technology to watermark AI-generated assets and to detect whether a digital asset was modified or counterfeited will support these goals.

“We have to put as much energy into the capabilities of AI as we do the safety of AI,” Huang said. “In the world of advertising, safety is brand alignment, brand integrity, appropriate tone and truth.”

Collaborating on Content Engine for Digital Advertising

As a leader in digital advertising, WPP is embracing AI as a tool to boost creativity and personalization, helping creators across the industry craft compelling messages that reach the right consumer.

“From the creative process to the customer, there’s going to have to be ad agencies in the middle that understand the technology,” Huang said. “That entire process in the middle requires humans in the loop. You have to understand the voice of the brand you’re trying to represent.”

Using Omniverse Cloud, WPP’s creative professionals can build physically accurate digital twins of products using a brand’s specific product-design data. This real-world data can be combined with AI-generated objects and digital environments — licensed through partners such as Adobe and Getty Images — to create virtual sets for marketing content.

“WPP is going to unquestionably become an AI company,” Huang said. “You’ll create an AI factory where the input is creativity, thoughts and prompts, and what comes out of it is content.”

Enhanced by responsibly trained, NVIDIA-accelerated generative AI, this content engine will boost creative teams’ speed and efficiency, helping them quickly render brand-accurate advertising content at scale.

“The type of content you’ll be able to help your clients generate will be practically infinite,” Huang said. “From the days of hundreds of examples of content that you create for a particular brand or for a particular campaign, it’s going to eventually become billions of generated content for every individual.”

Learn more about NVIDIA’s collaboration with WPP.

Read More