NVIDIA Helps Retail Industry Tackle Its $100 Billion Shrink Problem

The global retail industry has a $100 billion problem.

“Shrinkage” — the loss of goods due to theft, damage and misplacement — significantly crimps retailers’ profits.

An estimated 65% of shrinkage is due to theft, according to the National Retail Federation’s 2022 Retail Security Survey, conducted in partnership with the Loss Prevention Research Council. And many retailers are reporting theft has more than doubled recently, driven by rising prices of food and other essentials.

To help developers quickly build and roll out theft-prevention applications, NVIDIA today announced three Retail AI Workflows built on its Metropolis microservices. These serve as low- and no-code building blocks for loss-prevention applications: they ship with models pretrained on images of the most frequently stolen products, along with software that plugs into existing store systems, such as point-of-sale machines, and supports object and product tracking across entire stores.

“Retail theft is growing due to macro-dynamics, and threatens to overwhelm the industry,” said Read Hayes, director of the Loss Prevention Research Council. “Businesses are now facing the reality that investment in loss-prevention solutions is a critical requirement.”

The NVIDIA Retail AI Workflows, which are available through the NVIDIA AI Enterprise software suite, include:

  • Retail Loss Prevention AI Workflow: The AI models within this workflow come pretrained to recognize hundreds of products most frequently lost to theft — including meat, alcohol and laundry detergent — in the varying sizes and shapes in which they’re offered. With synthetic data generation from NVIDIA Omniverse, retailers and independent software vendors can customize and further train the models on hundreds of thousands of store products. The workflow is based on a state-of-the-art few-shot learning technique developed by NVIDIA Research that, combined with active learning, identifies and captures any new products scanned by customers and sales associates during checkout to improve model accuracy over time.
  • Multi-Camera Tracking AI Workflow: Delivers multi-target, multi-camera (MTMC) capabilities that allow application developers to more easily create systems that track objects across multiple cameras throughout the store. The workflow tracks objects and store associates across cameras and maintains a unique ID for each object. Objects are tracked through visual embeddings or appearance, rather than personal biometric information, to maintain full shopper privacy.
  • Retail Store Analytics Workflow: Uses computer vision to provide insights for store analytics, such as store traffic trends, counts of customers with shopping baskets, aisle occupancy and more via custom dashboards.
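The embedding-based matching in the Multi-Camera Tracking workflow can be sketched in miniature. The snippet below is an illustrative toy, not NVIDIA's implementation (the cosine-similarity threshold and greedy matching are assumptions), but it shows how an appearance embedding, rather than biometric data, can carry a global ID from one camera to the next:

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

def match_across_cameras(gallery, detections, threshold=0.7):
    """Greedily assign each detection's appearance embedding a global ID.

    gallery:    dict mapping global_id -> reference embedding
    detections: list of embeddings seen by the current camera
    Returns one global ID per detection, minting new IDs for unseen objects.
    """
    next_id = max(gallery, default=-1) + 1
    assigned = []
    for emb in detections:
        best_id, best_sim = None, threshold
        for gid, ref in gallery.items():
            sim = cosine(emb, ref)
            if sim >= best_sim:
                best_id, best_sim = gid, sim
        if best_id is None:          # no match anywhere: new object
            best_id = next_id
            gallery[best_id] = emb
            next_id += 1
        assigned.append(best_id)
    return assigned
```

In a production tracker the embeddings would come from a re-identification network, and matching would typically use the Hungarian algorithm plus temporal constraints rather than a greedy pass.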

The workflows are built on NVIDIA Metropolis microservices, a low- or no-code way of building AI applications. The microservices provide the building blocks for developing complex AI workflows and allow them to rapidly scale into production-ready AI apps.

Developers can easily customize and extend these AI workflows, including by integrating their own models. The microservices also make it easier to integrate new offerings with legacy systems, such as point-of-sale systems.

“NVIDIA’s new Retail AI Workflows built on Metropolis microservices allow us to customize our product, scale rapidly to fit our ever-growing customers’ needs better and continue to drive innovation in the retail space,” said Bobby Chowdary, chief technology officer at Radius.ai.

“As part of our applied AI offerings, Infosys is developing state-of-the-art loss prevention systems leveraging NVIDIA’s new workflows comprising pretrained models for retail SKU recognition and microservices architecture,” said Balakrishna D R, executive vice president and head of AI and Automation at Infosys. “It will enable us to deploy these solutions faster and rapidly scale across stores and product lines while also getting much higher levels of accuracy than before.”

NVIDIA will unveil additional details of its Retail AI Workflows at the National Retail Federation Conference in New York, Jan. 15-17.

Sign up for early access to the new NVIDIA Retail AI Workflows for developers and learn more in the NVIDIA Technical Blog. Join NVIDIA at #NRF2023.

Read More

3D Artist ‘CG Geek’ Builds Massive Sci-Fi World in Record Time This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

3D and animation extraordinaire CG Geek completed an ambitious design challenge this week In the NVIDIA Studio — building a massive, sci-fi-inspired 3D world in only three days. The creation of the world, dubbed The Fullness of Time, was fast-tracked by his GeForce RTX 4090 GPU.

72 Hours to Build a Sci-Fi World

Animator and visual effects artist CG Geek teaches aspiring artists how to get started on his popular YouTube channel. He also shares tutorials on Blender, his favorite 3D app because “it’s open source, and the community is always challenging one another to push limits even further,” he said.

To see how far those limits could be pushed, CG Geek kicked off a timed design challenge last week as part of CES, putting together a fully rendered and animated project in only three days — powered by NVIDIA Studio technologies and his GeForce RTX 4090 GPU.

The artist polled his community on Instagram, Twitter and YouTube for a genre to use as a starting point for the project.

Sci-fi was the clear winner, so he envisioned what a far-future city skyline would look like. The first step was to populate the space with futuristic 3D buildings and skyscrapers.

CG Geek formed simple shapes in Blender, scaling them to match the sizes of real-world buildings. He then added materials and reflections to create beautifully textured structures before turning to geometry nodes, or geo nodes — a recently added Blender feature and a crucial tool for procedural 3D modeling.

Geo nodes make modeling procedural and non-destructive. The traditional process of constructing objects follows a linear pattern, with one tool used after the next and each step reversible only by manual undo operations. Geo nodes instead allow for non-linear, non-destructive workflows and the instancing of objects to create incredibly detailed scenes using small amounts of data.
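The difference is easy to see in a toy sketch (plain Python, not Blender's API): the base mesh is never overwritten, so any node in the chain can be tweaked or removed and the result simply re-evaluates.

```python
# A toy sketch of node-style, non-destructive evaluation (not Blender's API).
# The base data is never overwritten; the node chain is re-run on demand.

def scale(factor):
    return lambda verts: [(x * factor, y * factor, z * factor) for x, y, z in verts]

def translate(dx, dy, dz):
    return lambda verts: [(x + dx, y + dy, z + dz) for x, y, z in verts]

def evaluate(base_verts, nodes):
    # Re-run the whole node chain from the untouched base mesh.
    verts = list(base_verts)
    for node in nodes:
        verts = node(verts)
    return verts

base = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
nodes = [scale(2.0), translate(0.0, 0.0, 5.0)]

result = evaluate(base, nodes)  # base itself is unchanged
nodes.pop(0)                    # drop a step -- no undo history needed
rescaled = evaluate(base, nodes)
```

Instancing works the same way: thousands of building copies can be stored as transforms over one shared base mesh, which is why such detailed scenes stay small on disk.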

Sculpting of the 3D world is nearly complete.

CG Geek scanned objects using his iPhone to create realistic 3D models from photos. He then used Adobe Photoshop to apply detailed textures, drawing on one of 30 features accelerated by his GeForce RTX 4090 GPU. The RTX-accelerated Super Resolution feature, which uses AI to upscale images with higher quality, was especially useful for exporting textures across the entire piece, CG Geek said.

CG Geek added fine details like ivy and realistic wear and tear to his sci-fi buildings until he reached the desired look.

The process he used during the challenge is covered in a tutorial on building detailed, low-poly sci-fi buildings in a matter of minutes:

CG Geek’s RTX 4090 GPU enables him to use Blender Cycles’ RTX-accelerated, AI-powered OptiX ray tracing in the viewport for interactive, photorealistic movement within such a detailed environment. This virtually eliminates wait times, allowing him to create at the speed of his imagination.

CG Geek can play back the entire animation in real time without exporting, thanks to the power of the RTX 4090 GPU.

The artist quickly and easily applied realistic textures for the sand and water as well as animations. Final renders were delivered quickly with RTX-accelerated OptiX ray tracing in Blender Cycles.

It took CG Geek just 21 hours to build the futuristic metropolis and 10 hours to render it at 4K resolution.

“Currently, NVIDIA stands alone at the top of high-performance GPUs for 3D tasks like Blender,” he said. “For real-time editing workflows, nothing comes close to beating the RTX 4090 GPU in speed.”

3D artist CG Geek.

View more of CG Geek’s work and tutorials.

Five-to-Nine Hustle, Powered by NVIDIA Studio

Nine to five is when people typically have a job, classes or other responsibilities. For many artists, it’s from five to nine that the real creativity kicks in and the inspirational juices start flowing.

Make the most of your side hustling.

More than ever, creators are turning their passions into opportunities and monetizing their side hustles. NVIDIA Studio is celebrating these entrepreneurs and helping them learn, explore and take their creative endeavors to the next level:

  • With technology and resources — the latest advances in GPU-acceleration and AI-powered features help get the job done faster, plus Studio Drivers add creative app optimization and reliability to systems.
  • With education — hundreds of select tutorials, free to the public and created by creative professionals, offer everything from quick tricks and tips to multipart, in-depth series to elevate and expand the skill sets of content creators.
  • With inspiration — experience the creative journeys of interdimensional Studio artists, moving storytellers and esteemed streamers across creative fields in 3D animation, video editing, graphic design, photography and more.

Begin your side hustle journey with NVIDIA Studio.

#NewYearNewArt Challenge 

The latest NVIDIA Studio community challenge has kicked off: #NewYearNewArt.

With a new year will come new art, and we’d love to see yours! Use the hashtag #NewYearNewArt and tag @NVIDIAStudio to show off recent creations for a chance to be featured on our channels.

Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.

Read More

The Greenest Generation: NVIDIA, Intel and Partners Supercharge AI Computing Efficiency

AI is at the heart of humanity’s most transformative innovations — from developing COVID vaccines at unprecedented speeds and diagnosing cancer to powering autonomous vehicles and understanding climate change.

Virtually every industry will benefit from adopting AI, but the technology has become more resource intensive as neural networks have increased in complexity. To avoid placing unsustainable demands on electricity generation to run this computing infrastructure, the underlying technology must be as efficient as possible.

Accelerated computing powered by NVIDIA GPUs and the NVIDIA AI platform offers the efficiency that enables data centers to sustainably drive the next generation of breakthroughs.

And now, timed with the launch of 4th Gen Intel Xeon Scalable processors, NVIDIA and its partners have kicked off a new generation of accelerated computing systems that are built for energy-efficient AI. When combined with NVIDIA H100 Tensor Core GPUs, these systems can deliver dramatically higher performance, greater scale and higher efficiency than the prior generation, providing more computation and problem-solving per watt.

The new Intel CPUs will be used in NVIDIA DGX H100 systems, as well as in more than 60 servers featuring H100 GPUs from NVIDIA partners around the world.

Supercharging Speed, Efficiency and Savings for Enterprise AI

The coming NVIDIA- and Intel-powered systems will help enterprises run workloads an average of 25x more efficiently than traditional CPU-only data center servers. This incredible performance per watt means less power is needed to get jobs done, which helps ensure the power available to data centers is used as efficiently as possible to supercharge the most important work.

Compared with prior-generation accelerated systems, this new generation of NVIDIA-accelerated servers speeds training and inference, boosting energy efficiency by 3.5x — which translates into real cost savings, with AI data centers delivering over 3x lower total cost of ownership.

New 4th Gen Intel Xeon CPUs Move More Data to Accelerate NVIDIA AI

Among the features of the new 4th Gen Intel Xeon CPU is support for PCIe Gen 5, which can double the data transfer rates from CPU to NVIDIA GPUs and networking. Increased PCIe lanes allow for a greater density of GPUs and high-speed networking within each server.

Faster memory bandwidth also improves the performance of data-intensive workloads such as AI, while networking speeds — up to 400 gigabits per second (Gbps) per connection — support faster data transfers between servers and storage.

NVIDIA DGX H100 systems and servers from NVIDIA partners with H100 PCIe GPUs come with a license for NVIDIA AI Enterprise, an end-to-end, secure, cloud-native suite of AI development and deployment software, providing a complete platform for excellence in efficient enterprise AI.

NVIDIA DGX H100 Systems Supercharge Efficiency for Supersize AI

As the fourth generation of the world’s premier purpose-built AI infrastructure, NVIDIA DGX H100 systems provide a fully optimized platform powered by the operating system of the accelerated data center, NVIDIA Base Command software.

Each DGX H100 system features eight NVIDIA H100 GPUs, 10 NVIDIA ConnectX-7 network adapters and dual 4th Gen Intel Xeon Scalable processors to deliver the performance required to build large generative AI models, large language models, recommender systems and more.

Combined with NVIDIA networking, this architecture supercharges efficient computing at scale by delivering up to 9x more performance than the previous generation and 20x to 40x more performance than unaccelerated x86 dual-socket servers for AI training and HPC workloads. If a language model previously required 40 days to train on a cluster of x86-only servers, the NVIDIA DGX H100 using Intel Xeon CPUs and ConnectX-7-powered networking could complete the same work in as little as one to two days.
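A quick sanity check of those figures, using only the numbers quoted above:

```python
def accelerated_days(baseline_days, speedup):
    # Training time on the accelerated system, given a baseline and a speedup factor.
    return baseline_days / speedup

baseline = 40  # days on a cluster of x86-only servers, per the example above

low_end = accelerated_days(baseline, 20)   # 2.0 days at a 20x speedup
high_end = accelerated_days(baseline, 40)  # 1.0 day at a 40x speedup
```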

NVIDIA DGX H100 systems are the building blocks of an enterprise-ready, turnkey NVIDIA DGX SuperPOD, which delivers up to one exaflop of AI performance, providing a leap in efficiency for large-scale enterprise AI deployment.

NVIDIA Partners Boost Data Center Efficiency 

For AI data center workloads, NVIDIA H100 GPUs enable enterprises to build and deploy applications more efficiently.

Bringing a new generation of performance and energy efficiency to enterprises worldwide, a broad portfolio of systems with H100 GPUs and 4th Gen Intel Xeon Scalable CPUs is coming soon from NVIDIA partners, including ASUS, Atos, Cisco, Dell Technologies, Fujitsu, GIGABYTE, Hewlett Packard Enterprise, Lenovo, QCT and Supermicro.

As the bellwether of the efficiency gains to come, the Flatiron Institute’s Lenovo ThinkSystem with NVIDIA H100 GPUs tops the latest Green500 list — and NVIDIA technologies power 23 of the top 30 systems on the list. The Flatiron system uses prior-generation Intel CPUs, so even more efficiency is expected from the systems now coming to market.

Additionally, connecting servers with NVIDIA ConnectX-7 networking and Intel 4th Gen Xeon Scalable processors will increase efficiency and reduce infrastructure and power consumption.

NVIDIA ConnectX-7 adapters support PCIe Gen 5 and 400 Gbps per connection using Ethernet or InfiniBand, doubling networking throughput between servers and to storage. The adapters support advanced networking, storage and security offloads. ConnectX-7 reduces the number of cables and switch ports needed, saving 17% or more on electricity needed for the networking of large GPU-accelerated HPC and AI clusters and contributing to the better energy efficiency of these new servers.

NVIDIA AI Enterprise Software Delivers Full-Stack AI Solution

These next-generation systems also deliver a leap forward in operational efficiency as they’re optimized for the NVIDIA AI Enterprise software suite.

Running on NVIDIA H100, NVIDIA AI Enterprise accelerates the data science pipeline and streamlines the development and deployment of predictive AI models to automate essential processes and gain rapid insights from data.

With an extensive library of full-stack software, including AI workflows of reference applications, frameworks, pretrained models and infrastructure optimization, the software provides an ideal foundation for scaling enterprise AI success.

To try out NVIDIA H100 running AI workflows and frameworks supported in NVIDIA AI Enterprise, sign up for NVIDIA LaunchPad free of charge.

Watch NVIDIA founder and CEO Jensen Huang speak at the 4th Gen Intel Xeon Scalable processor launch event.

Read More

Tipping Point: NVIDIA DRIVE Scales AI-Powered Transportation at CES 2023

Autonomous vehicle (AV) technology is heading to the mainstream.

The NVIDIA DRIVE ecosystem showcased significant milestones toward widespread intelligent transportation at CES. Vehicle deployment plans are growing, and AI solutions are integrating ever deeper into the car.

Foxconn joined the NVIDIA DRIVE ecosystem. The world’s largest technology manufacturer will produce electronic control units based on the NVIDIA DRIVE Orin systems-on-a-chip and build its electric vehicles using the NVIDIA DRIVE Hyperion platform.

The Polestar 3, which is powered by NVIDIA DRIVE Orin, made its U.S. debut, showcasing its new driver-monitoring system. Working with intelligent sensing company Smart Eye, the automaker is using AI to improve in-cabin safety and convenience.

Also appearing stateside for the first time was the Volvo EX90 fully electric SUV. Volvo Cars’ new flagship vehicle features centralized, software-defined compute powered by DRIVE Orin and NVIDIA DRIVE Xavier, and will begin deliveries in early 2024.

The Volvo EX90 was on display at the booth of automotive technology company Luminar (CES LVCC West Hall, booth 5324). Luminar, an NVIDIA DRIVE ecosystem member, is providing its lidar technology to enable next-generation safety and, in the future, highway autonomy.

The Volvo EX90.

Elsewhere on the show floor, NVIDIA DRIVE ecosystem members such as Aeva with Plus, Imagry, Infineon with Lucid, u-blox and Valeo showcased the latest innovations in intelligent transportation.

These announcements mark a shift in the autonomous vehicle industry, from early stages to global deployment.

Foxconn Enters the AV Arena

Building safe, intelligent vehicles with highly automated and fully autonomous driving capabilities is a massive endeavor.

NVIDIA DRIVE offers an open, AI-enabled AV development platform for the industry to build upon. By adding Foxconn as a tier-one platform scaling partner, NVIDIA can greatly extend its efforts to meet growing demand.

In addition, Foxconn’s selection of DRIVE Hyperion will speed time to market — and lower costs — for its state-of-the-art EVs with autonomous driving capabilities.

The DRIVE Hyperion sensor suite is already qualified to ensure diverse, redundant real-time processing, which increases overall safety.

Inside AI

The industry is placing greater focus on interior safety and convenience features as AI takes over more driving tasks.

Driver monitoring is a key part of Polestar’s broader driver-understanding system, which includes features such as adaptive cruise control, lane-keep assist and pilot assist as standard. These coordinated systems run simultaneously on the centralized DRIVE Orin AI compute platform.

The recently launched Polestar 3, featuring Smart Eye driver monitoring software.

The Polestar 3, launched in October, features two closed-loop driver-monitoring cameras and software from Smart Eye (booth 6353), which track the driver’s head, eye and eyelid movements and can trigger warning messages, sounds or an emergency-stop function if a distracted, drowsy or disconnected driver is detected.

End-to-End Innovation

The rest of the CES show floor was brimming with new vehicle technologies poised to deliver more convenient and safer transportation.

Lucid showcased its flagship sedan, the Air, in partner Infineon’s booth (3829), breaking down the technologies that make up the award-winning EV. At its core is the NVIDIA DRIVE centralized compute platform, which powers its software-defined DreamDrive advanced driver assistance system.

The award-winning Lucid Air electric sedan.

In addition to personal transportation, NVIDIA DRIVE is powering safer, more efficient public transit, as well as delivery and logistics.

Israeli startup Imagry (booth 5874), a developer of mapless autonomous driving solutions, announced that its DRIVE Orin-based platform will power two autonomous bus pilots in its home country in 2023. Lidar maker Aeva showcased the latest vehicle from autonomous trucking company Plus, built on DRIVE Orin.

AV sensing and localization technology also exhibited significant advances. Global tier-one supplier Valeo (booth CP-17) demonstrated how it’s using the high-fidelity NVIDIA DRIVE Sim platform to develop intelligent active lighting solutions for low-light conditions. u-blox (booth 10963), which specializes in global navigation satellite system (GNSS) solutions, showed the latest in AV localization, integrated into the NVIDIA DRIVE Hyperion architecture.

With every corner of the AV industry firing on all cylinders, CES 2023 is signaling the start to the widespread deployment of intelligent transportation.

Read More

GFN Thursday Brings RTX 4080 to the Cloud With GeForce NOW Ultimate Membership

GFN Thursday rings in the new year with a recap of the biggest cloud gaming news from CES 2023: the GeForce NOW Ultimate membership. With the latest NVIDIA GPU technology, Ultimate members can play their favorite PC games at performance levels never before available from the cloud.

Plus, with a new year comes new games. GeForce NOW brings 24 more titles to the cloud in January, starting with five this week.

Ultimate Performance, Now in the Cloud

Get ready for cloud gaming that’s “beyond fast.” GeForce NOW is bringing RTX 4080 performance to the cloud with the new high-performance Ultimate membership.

Supercomputer power streamed to you, fueling your every Victory Royale.

The GeForce NOW Ultimate membership raises the bar on cloud gaming, bringing it closer than ever to a local gaming experience. It’s powered by the NVIDIA Ada Lovelace architecture in upgraded GeForce NOW RTX 4080 SuperPODs.

GeForce NOW Ultimate members receive three major streaming upgrades. First, the new SuperPODs can render and stream at up to 240 frames per second for the lowest latency ever from the cloud. Paired with NVIDIA Reflex, members’ gameplay will feel almost indistinguishable from a local desktop PC.
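The latency benefit of a higher stream rate follows directly from the per-frame time budget (simple arithmetic, not a full end-to-end latency model):

```python
def frame_time_ms(fps):
    # Time budget per rendered-and-streamed frame, in milliseconds.
    return 1000.0 / fps

for fps in (60, 120, 240):
    print(f"{fps:>3} fps -> {frame_time_ms(fps):.1f} ms per frame")
```

At 240 fps each frame occupies about 4.2 ms, versus 16.7 ms at 60 fps, so the stream can reflect player input far sooner.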

Second, supported streaming resolutions get an upgrade: Ultimate members can play their favorite PC games at up to 4K 120 fps on the native PC and Mac apps.

And third, for the first time, cloud gamers can play at native ultrawide resolutions, a long-requested feature from the GeForce NOW community. Experience your favorite adventures like A Plague Tale: Requiem, The Witcher 3: Wild Hunt, Shadow of the Tomb Raider and more at up to 3840×1600 resolutions for a truly immersive experience.

The Ultimate upgrade also brings support for the latest NVIDIA RTX technologies, like full ray tracing and DLSS 3 — introduced with the GeForce RTX 40 Series launch. They deliver beautiful, cinematic-quality graphics and use AI to keep frame rates smooth in supported games.

With support for NVIDIA G-SYNC-enabled monitors, GeForce NOW will vary the streaming rate to the client for the first time, delivering smooth and instantaneous frame updates to client screens on Reflex-enabled games — further driving down total latency.

Ultimate members will also continue to enjoy longer streaming sessions, fastest access to the highest-performance cloud gaming servers and game settings that persist from session to session.

The Ultimate Library Keeps Growing

There are more than 1,500 games supported in the GeForce NOW library, with more than 400 titles joining last year. Members can stream mega-hits from top publishers like Electronic Arts and Ubisoft, popular PC indie titles like Valheim and Rust, and over 100 of the biggest free-to-play games like Fortnite and Genshin Impact.

The new year also brings some of the biggest upcoming PC game launches on the service, starting with Portal with RTX later this week. Relive the critically acclaimed, award-winning Portal reimagined with full ray tracing, with DLSS 3 delivering higher frame rates for those streaming from an RTX 4080 SuperPOD.

Full ray tracing transforms each scene of Portal with RTX, enabling light to bounce and be affected by each area’s new high-resolution, physically based textures and enhanced high-poly models. Every light is ray traced and casts shadows for a new sense of realism. Global illumination indirect lighting naturally illuminates and darkens rooms, volumetric ray-traced lighting scatters through fog and smoke, and shadows are pixel perfect.

This ray-traced reimagining of Valve’s classic game was built using a revolutionary modding tool called NVIDIA RTX Remix, which brings the test chambers of Portal’s “Aperture Science” to new life.

More big titles are on the way. As announced at CES 2023 this week, members can expect to see Atomic Heart, The Day Before and Party Animals join GeForce NOW when they release later this year. Stay tuned to future GFN Thursday updates for more details.

Upgrading Is Beyond Easy

Members can sign up today for the GeForce NOW Ultimate membership at $19.99 per month or $99.99 for six months.

Existing GeForce RTX 3080 members’ accounts have already been converted to Ultimate memberships at their current pricing, and will experience GeForce RTX 4080 performance as soon as it’s available in their regions. It’s the easiest upgrade to Ultimate performance, happening automatically.

The new GeForce RTX 4080-powered SuperPODs will be available in North America and Europe starting later this month, with continued rollout over the months to follow. Sign up today, as quantities are limited.

Upgrade today for the Ultimate cloud gaming experience.

Additionally, new AT&T Fiber customers, and new or existing AT&T 5G customers on an eligible 5G rate plan, can get a complimentary six-month Ultimate membership. Visit AT&T Gaming for more details.

Joining in January

Level up and learn new awesome abilities, unlock secret items and modes, summon powerful allies, and more in Scott Pilgrim vs. The World: The Game Complete Edition.

Here’s a look at the games joining the GeForce NOW library in January:

In addition, members can look for the following this week:

  • Scott Pilgrim vs. The World: The Game Complete Edition (New release on Steam, Jan. 5)
  • Carrier Command 2 (Steam)
  • Project Hospital (Steam)
  • Portal with RTX (Steam)
  • Severed Steel (Epic Games Store)

Let us know what you think of GeForce NOW Ultimate on Twitter or in the comments below.

Read More

Lights! Cameras! Atoms! Scientist Peers Into the Quantum Future

Editor’s note: This is part of a series profiling people advancing science with high performance computing.

Ryan Coffee makes movies of molecules. Their impacts are huge.

The senior scientist at the SLAC National Accelerator Laboratory says these visualizations could unlock the secrets of photosynthesis. They’ve already shown how sunlight can cause skin cancer.

Long term, they may help chemists engineer life-saving drugs and batteries that let electric cars go farther on a charge.

To make films that inspire that kind of work, Coffee’s team needs high-performance computers, AI and an excellent projector.

A Brighter Light

The projector is called the Linac Coherent Light Source (LCLS). It uses a linear accelerator a kilometer long to pulse X-rays up to 120 times per second.

That’s good enough for a Hollywood flick, but not fast enough for Coffee’s movies.

“We need to see how electron clouds move like soap bubbles around molecules, how you can squeeze them in certain ways and energy comes out,” said Coffee, a specialist in the physics at the intersection of atoms, molecules and optics.

So, an upgrade next year will let the giant instrument take 100,000 frames per second. In two years, another enhancement, called LCLS II, will push that to a million frames a second.

Sorting the frames that flash by that fast — in random order — is a job for the combination of high performance computing (HPC) and AI.

AIs in the Audience

Coffee’s goal is to sit an AI model in front of the LCLS II. It will watch the ultrafast movies to learn an atomic dance no human eyes could follow.

The work will require inference on the fastest GPUs available running next to the instrument in Menlo Park, Calif. Meanwhile, data streaming off LCLS II will be used to constantly retrain the model on a bank of NVIDIA A100 Tensor Core GPUs at the Argonne National Laboratory outside Chicago.

It’s a textbook case for HPC at the edge, and one that’s increasingly common in an era of giant scientific instruments that peer up at stars and down into atoms.

A look at part of the LCLS instrument. (For more details, see this blog.)

So far, Coffee’s team has been able to retrain an autoencoder model every 10-20 minutes while it makes inferences 100,000 times a second.
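That infer-continuously, retrain-periodically pattern can be sketched with a toy stand-in for the model (plain Python; the real system runs a neural autoencoder on GPUs, and the buffer size and cadence here are invented for illustration):

```python
from collections import deque
from statistics import mean, stdev

class StreamingDetector:
    """Toy stand-in for the streaming model: flags frames far from recent data.

    Illustrates only the control pattern (score every frame, refit on a
    buffered window every so often); the numbers are invented.
    """

    def __init__(self, retrain_every=1000, window=5000):
        self.buffer = deque(maxlen=window)  # recent frames kept for retraining
        self.retrain_every = retrain_every
        self.seen = 0
        self.mu, self.sigma = 0.0, 1.0      # the current "model" parameters

    def retrain(self):
        # Slow path: refit on the buffered stream (the 10-20 minute cycle).
        self.mu = mean(self.buffer)
        self.sigma = stdev(self.buffer) or 1.0  # avoid dividing by zero

    def infer(self, frame):
        # Fast path: score every incoming frame against the current model.
        self.buffer.append(frame)
        self.seen += 1
        if self.seen % self.retrain_every == 0:
            self.retrain()
        return abs(frame - self.mu) / self.sigma  # anomaly score
```

The key property is that inference never stops: retraining runs on a buffered window of the stream and simply refreshes the parameters the fast path reads.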

“We’re already in the realm of attosecond pulses where I can watch the electron bubbles slosh back and forth,” said Coffee, a core member of SLAC’s overall AI initiative.

A Broader AI Collaboration

The next step is even bigger.

Data from Coffee’s work on molecular movies will be securely shared with data from Argonne’s Advanced Photon Source, a kind of ultra-high-resolution still camera.

“We can use secure, federated machine learning to pull these two datasets together, creating a powerful, shared transformer model,” said Coffee, who’s collaborating with multiple organizations to make it happen.

Coffee in the ‘projection room’ where the light in his next molecular movies will first appear.

The transformer will let scientists generate synthetic data for many data-starved applications such as research on fusion reactors.

It’s an effort specific to science that parallels work in federated learning in healthcare. Both want to build powerful AI models for their fields while preserving data privacy and security.

“We know people get the best results from large language models trained on many languages,” he said. “So, we want to do that in science by taking diverse views of the same things to create better models.”

The Quantum Future

The atomic forces that Coffee studies may power tomorrow’s computers, the scientist explains.

“Imagine a stack of electron bubbles all in the same quantum state, so it’s a superconductor,” he said. “When I add one electron at the bottom, one pops to the top instantaneously because there’s no resistance.”

The concept, called entanglement in quantum computing, means two particles can switch states in lock step even if they’re on opposite sides of the planet.

That would give researchers like Coffee instant connections between powerful instruments like LCLS II and remote HPC centers training powerful AI models in real time.

Sounds like science fiction? Maybe not.

Coffee foresees a time when his experiments will outrun today’s computers, a time that will require alternative architectures and AIs. It’s the kind of big-picture thinking that excites him.

“I love the counterintuitiveness of quantum mechanics, especially when it has real, measurable results humans can apply — that’s the fun stuff.”

UF Provost Joe Glover on Building a Leading AI University

When NVIDIA co-founder Chris Malachowsky approached University of Florida Provost Joe Glover with the offer of an AI supercomputer, he couldn’t have predicted the transformative impact it would have on the university. In just a short time, UF has become one of the top public colleges in the U.S. and developed a groundbreaking neural network for healthcare research.

In a recent episode of NVIDIA’s AI Podcast, host Noah Kravitz sat down with Glover, who is also senior vice president of academic affairs at UF. The two discussed the university’s efforts to put AI to work across all aspects of higher education, including a public-private partnership with NVIDIA that has helped transform UF into one of the leading AI universities in the country.

Just a year after the partnership was unveiled in July 2020, UF rose to No. 5 on U.S. News & World Report’s list of the best public colleges in the U.S. The ranking was, in part, a recognition of UF’s vision for infusing AI into its teaching and research.

Last March, UF Health, the university’s academic health center, teamed with NVIDIA to develop GatorTron, a neural network that generates synthetic clinical data researchers can use to train other AI models in healthcare.

According to Glover, the success of UF’s AI initiatives can be attributed to “a combination of generous philanthropy, some good decisions, a little inspiration and a few miracles here and there along the way.” 

He believes that the university’s AI-powered vision has significantly impacted its teaching and research and will continue to do so in the future.

You Might Also Like

Art(ificial) Intelligence: Pindar Van Arman Builds Robots That Paint

Pindar Van Arman, an American artist and roboticist, designs painting robots that explore the differences between human and computational creativity. Since his first system in 2005, he has built multiple artificially creative robots. The most famous, Cloud Painter, was awarded first place at Robotart 2018.

Real or Not Real? Attorney Steven Frank Uses Deep Learning to Authenticate Art

Steven Frank is a partner at the law firm Morgan Lewis, specializing in intellectual property and commercial technology law. He’s also half of the husband-wife team that used convolutional neural networks to authenticate artistic masterpieces, including da Vinci’s Salvator Mundi, with AI’s help.

GANTheftAuto: Harrison Kinsley on AI-Generated Gaming Environments

Humans playing games against machines is nothing new, but now computers can develop games for people to play. Programming enthusiast and social media influencer Harrison Kinsley created GANTheftAuto, an AI-based neural network that generates a playable chunk of the classic video game Grand Theft Auto V.

Subscribe to the AI Podcast: Now Available on Amazon Music

You can now listen to the AI Podcast through Amazon Music.

Also get the AI Podcast through Apple Music, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

 

NVIDIA Reveals Gaming, Creator, Robotics, Auto Innovations at CES

Powerful new GeForce RTX GPUs, a new generation of hyper-efficient laptops and new Omniverse capabilities and partnerships across the automotive industry were highlights of a news-packed address ahead of this week’s CES trade show in Las Vegas.

“AI will define the future of computing and this has influenced much of what we’re covering today,” said Jeff Fisher, senior vice president for gaming products at NVIDIA, as he kicked off the presentation.

Fisher was joined by several leaders from NVIDIA to introduce products and partnerships across gaming and content creation, robotics and next-generation automobiles.

The headline news:

Introducing GeForce RTX 40 Series Laptops, RTX 4070 Ti Graphics Cards and DLSS 3 Games

Fisher said the performance and power efficiency of the NVIDIA GeForce RTX 40 Series Laptop GPUs enable the greatest-ever generational leap, including 14-inch gaming and creator powerhouse laptops starting at $999 in February.

New GeForce RTX 4070 Ti graphics cards for desktops are faster than last generation’s RTX 3090 Ti at nearly half the power, bringing the NVIDIA Ada Lovelace architecture down to $799, with availability starting Jan. 5.

And DLSS 3 is being adopted by developers faster than any prior NVIDIA tech, with 50 released and upcoming titles, including Witchfire, The Day Before, Warhaven, THRONE AND LIBERTY and Atomic Heart.

In addition, RTX 4080 performance is coming to the NVIDIA GeForce NOW cloud-gaming service. As a result, Fisher said millions more gamers will have access to the NVIDIA Ada architecture with GeForce NOW’s Ultimate membership.

The new tier will bring NVIDIA Reflex and 240 frames per second streaming to the cloud for the first time, along with full ray tracing and DLSS 3 in games like Portal With RTX.

Momentum for NVIDIA RTX continues to build, Fisher said. “Creating has grown beyond photos and videos to virtual worlds rendered with 3D cinematic graphics and true-to-life physics,” he said. “The RTX platform is powering this growth.”

Ray tracing and AI are defining the next generation of content, and NVIDIA Studio is the platform for this new breed of content creators. The heartbeat of Studio is found in NVIDIA Omniverse, where creators can connect accelerated apps and collaborate in real time.

Built with NVIDIA RTX, Omniverse is a platform enabling 3D artists to connect their favorite tools from Adobe, Autodesk, SideFX, Unreal Engine and more. And Omniverse now has a new Connector for Unity, said Stephanie Johnson, vice president of consumer marketing at NVIDIA.

Johnson introduced a suite of new generative AI tools and experimental plug-ins using the power of AI as the ultimate creative assistant. Audio2Face and Audio2Gesture generate animations from an audio file. The AI ToyBox by NVIDIA Research lets users generate 3D meshes from 2D inputs.

Companies have used generative AI technology to build Omniverse Connectors and extensions. Move.AI’s Omniverse extension, for example, enables video-to-animation. Lumirithmic generates 3D mesh for heads from facial scans. And Elevate3D generates photorealistic 3D visualizations of products from 360-degree video recordings.

Johnson also announced that NVIDIA RTX Remix, which is built on Omniverse and is “the easiest way to mod classic games,” will be entering early access soon. “The modding community can’t wait to get their hands on Remix,” she said.

NVIDIA Isaac Sim Brings Significantly Improved Features, Tools for Developing Intelligent Robots 

Simulation plays a vital role in the lifecycle of a robotics project, explained Deepu Talla, vice president of embedded and edge computing at NVIDIA. Partners are using NVIDIA Isaac Sim to create digital twins that help speed the training and deployment of intelligent robots.

To revolutionize the way the robotics ecosystem develops the next generation of autonomous robots, Talla announced major updates to the next release of Isaac Sim. This includes improved sensor and lidar support to more accurately model real-world performance, a new conveyor-building tool, a new utility to add people to the simulation environment, a collection of new sim-ready warehouse assets and a host of new popular robots that come pre-integrated.

For the open-source ROS developer community, this release upgrades support for ROS 2 Humble and Windows, Talla added. And for robotics researchers, NVIDIA is introducing a new tool called Isaac ORBIT, which provides operating environments for manipulator robots. NVIDIA has also improved Isaac Gym for reinforcement learning and updated Isaac Cortex for collaborative robot programming.

“We are committed to advancing robotics and arguably investing more than anyone else in the world,” Talla said. “We are well on the way to having a thousand to a million times more virtual robots for every physical robot deployed.”

Mercedes-Benz to Create Digital Twins; Foxconn Building EVs on NVIDIA DRIVE; GeForce NOW Streams to Cars

The NVIDIA DRIVE platform is open and easy to program, said Ali Kani, vice president of automotive at NVIDIA.

Hundreds of partners across the automotive ecosystem are now developing software on NVIDIA DRIVE, including 20 of the top 30 manufacturers building new energy vehicles, many of the industry’s top tier one manufacturers and software makers, plus eight of the largest 10 trucking and robotaxi companies.

It’s a number that continues to grow, with Kani announcing a partnership with Foxconn, the world’s largest technology manufacturer and service provider, to build electric vehicles based on NVIDIA DRIVE Hyperion.

“With Hyperion adoption, Foxconn will manufacture vehicles with leading electric range as well as state-of-the-art AV technology while reducing time to market,” Kani said.

Kani touched on how, as next-generation cars become autonomous and electric, interiors are transformed into mobile living spaces, complete with the same entertainment available at home. GeForce NOW will be “coming to screens in your car,” Kani said.

Kani also announced several DRIVE partners are integrating GeForce NOW, including Hyundai Motor Group, BYD and Polestar.

While gamers will enjoy virtual worlds from inside their cars, metaverse technologies are also critical to the development and testing of new autonomous vehicles.

Kani announced that Mercedes-Benz is using digital twin technology to plan and build more efficient production facilities. “The applications for Omniverse in the automotive market are staggering,” Kani said.

NVIDIA Releases Major Update to Omniverse Enterprise

The latest release of NVIDIA Omniverse Enterprise, available now, brings increased performance, generational leaps in real-time RTX ray and path tracing, and streamlined workflows that help teams build connected 3D pipelines and develop and operate large-scale, physically accurate virtual 3D worlds like never before.

Artists, designers, engineers and developers can benefit from various enhancements across common Omniverse use cases, including breaking down 3D data silos through aggregation, building custom 3D pipeline tools and generating synthetic 3D data.

The release includes support for the breakthrough innovations within the new NVIDIA Ada Lovelace architecture, including third-generation RTX technology and DLSS 3, delivering up to 3x performance gains when powered by the latest GPU technology, such as NVIDIA RTX 6000 Ada Generation, NVIDIA L40 and OVX systems.

The update also delivers features and capabilities such as new Omniverse Connectors, layer-based live workflows, improved user experience and customization options, including multi-viewports, editable hotkeys, lighting presets and more.

Enhancing 3D Creation

The Omniverse ecosystem is expanding, with more capabilities and tools that allow teams to elevate 3D workflows and reach new levels of real-time, physically accurate simulations. The latest additions include:

  • New Connectors: Omniverse Connectors enable more seamless connected workflows between disparate 3D applications. New Omniverse Connectors for Adobe Substance 3D Painter, Autodesk Alias, PTC Creo and Kitware’s ParaView are now supported on Omniverse Enterprise. Users can also easily export data created in NX software from Siemens Digital Industries Software.
  • Omniverse DeepSearch: Now generally available, this AI-powered service lets teams intuitively search through extremely large, untagged 3D databases using natural language or 2D reference images. This unlocks major value in digital backlots and asset collections that previously couldn’t be searched.
  • Omniverse Farm: A completely renewed user interface provides improved usability and performance, plus Kubernetes support.
  • Omniverse Cloud: New cloud containers for Enterprise Nucleus Server, Replicator, Farm and Isaac Sim for AWS provide enterprises more flexibility in connecting and managing distributed teams all over the world. AWS’ security, identity and access-management controls allow teams to maintain complete control over their data. Containers are now available on NVIDIA NGC.
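The retrieval pattern behind a service like DeepSearch can be sketched simply: embed assets and queries into a shared vector space, then rank by cosine similarity. The toy encoder below — a bag-of-words over a made-up vocabulary, with hypothetical asset names — stands in for the trained multimodal model a production system would use on untagged geometry:

```python
import numpy as np

VOCAB = ["forklift", "pallet", "conveyor", "warehouse", "red",
         "wooden", "stack", "belt", "vehicle", "section"]

def embed(text):
    # stand-in for a learned encoder: a normalized bag-of-words over a tiny
    # vocabulary; a real system would embed the assets themselves
    v = np.array([float(tok in text.lower().split()) for tok in VOCAB])
    n = np.linalg.norm(v)
    return v / n if n else v

# Pretend these vectors were computed offline from the (untagged) assets;
# the descriptions stand in for features a vision model would extract
catalog = {
    "asset_042": embed("red forklift warehouse vehicle"),
    "asset_107": embed("wooden pallet stack"),
    "asset_311": embed("conveyor belt section"),
}

def search(query, k=1):
    # rank assets by cosine similarity to the embedded query
    q = embed(query)
    ranked = sorted(catalog, key=lambda name: -float(catalog[name] @ q))
    return ranked[:k]

print(search("forklift"))
```

Because queries and assets live in one vector space, no manual tags are needed at search time — the same reason 2D reference images can serve as queries.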

Strengthening Core Components for Building 3D Worlds

Omniverse Enterprise is designed for maximum flexibility and scalability. This means creators, designers, researchers and engineers can quickly connect tools, assets and projects to collaborate in a shared virtual space.

Omniverse Enterprise brings updates to the core components of the platform, including:

  • Omniverse Kit SDK, the powerful toolkit for building extensions, apps, microservices or plug-ins, now makes it easier than ever to build advanced tools and Omniverse applications with new templates and developer workflows.
  • Omniverse Create, a reference app for composing large-scale, USD-based worlds, now includes NVIDIA DLSS 3 and multi-viewport support, making it easier for Omniverse Enterprise users to fluidly interact with extremely large and complex scenes.
  • Omniverse View, a reference app for reviewing 3D scenes, has been streamlined to focus purely on the review and approval experience. New collaborative, real-time, interactive capabilities — including markup, annotation, measure and simple navigation — make stakeholder presentations easier and more interactive than ever.
  • Omniverse Nucleus, the database and collaboration engine, now includes improved IT management tools, such as expanded version control to handle atomic checkpoints on the server. An updated Large File Transfer service lets users move files between servers, whether on premises or in the cloud, benefiting hybrid workflows. And new self-service deployment instructions for Enterprise Nucleus Server on AWS are now available, letting customers deploy and manage Nucleus in the cloud.

Customers Dive Into Omniverse Enterprise 

Many customers around the world have experienced enhanced 3D workflows with Omniverse Enterprise.

Dentsu International, one of the largest global marketing and advertising agency networks, always looks for solutions that enable collaborative and seamless work, with a central repository for completed projects.

“There’s great value when multiple artists with different tools, located in different places, are able to work together, and NVIDIA Omniverse Enterprise makes that possible,” said Melody Jeter, vice president of Creative Technology at Dentsu. “The collaboration tackles the robust challenge of laborious file transferring — now, the workflows are all synced live and in real time.”

In addition to enhancing current pipelines with Omniverse Enterprise, Dentsu is looking to incorporate NVIDIA generative AI into its 3D design pipeline with software development kits like Omniverse ACE and Audio2Face.

Mercedes-Benz, the German premium vehicle manufacturer, is using Omniverse Enterprise at its sites worldwide to design, plan and optimize its manufacturing and assembly facilities. By developing full-fidelity digital twins of its production environments, globally dispersed teams can collaborate in real time, accelerate decision-making and identify opportunities to reduce waste, decrease energy consumption and continuously enhance quality.

Zaha Hadid Architects (ZHA) is a renowned architectural design firm that has created some of the world’s most singular building designs. ZHA focuses on creating transformative cultural, corporate and residential spaces through cutting-edge technologies. With Omniverse Enterprise, the team can accelerate and automate its workflows, as well as develop custom tools within the platform.

“We are working with NVIDIA to incorporate Omniverse as the connective infrastructure of our tech stack. Our goal is to retain design intent across the various project stages and improve productivity,” said Shajay Bhooshan, associate director at Zaha Hadid Architects. “We expect NVIDIA Omniverse to play a critical, supportive role to our efforts to create a platform that’s agnostic, version-controlled and a single source of truth for design data, as it evolves from idea to delivery.”

NVIDIA Omniverse Enterprise is available by subscription from BOXX Technologies, Dell Technologies, Z by HP and Lenovo, and channel partners including Arrow, ASK, PNY and Leadtek. The platform is optimized to run on NVIDIA-Certified, RTX-enabled desktop and mobile workstations, as well as servers, with new support for NVIDIA RTX Ada generation systems.

Watch the NVIDIA special address at CES on demand. Learn more about NVIDIA Omniverse Enterprise and try Omniverse for free.

Intelligent Design: NVIDIA DRIVE Revolutionizes Vehicle Interior Experiences

AI is extending further into the vehicle as autonomous-driving technology becomes more prevalent.

With the NVIDIA DRIVE platform, automakers can design and implement intelligent interior features to continuously surprise and delight customers.

It all begins with the compute architecture. The recently introduced NVIDIA DRIVE Thor platform unifies traditionally distributed functions in vehicles — including digital cluster, infotainment, parking and assisted driving — for greater efficiency in development and faster software iteration.

NVIDIA DRIVE Concierge, built on the DRIVE IX software stack, runs an array of safety and convenience features, including driver and occupant monitoring, digital assistants and autonomous-vehicle visualization.

Automakers can benefit from NVIDIA data center solutions even if they aren’t using the NVIDIA DRIVE platform. With cloud technology, vehicles can stream the NVIDIA GeForce NOW cloud-gaming service without any special equipment. Plus, developers can train, test and validate in-vehicle AI models on NVIDIA DGX servers.

The same data center technology that’s accelerating AI development — in combination with the NVIDIA Omniverse platform for creating and operating metaverse applications — is also revolutionizing the automotive product cycle. Using NVIDIA DRIVE Sim built on Omniverse, automakers can design vehicle interiors and retail experiences entirely in the virtual world.

Easing Pain Points From Concept to Customer

Designing and selling vehicles requires the highest levels of organization and orchestration. The cockpit alone has dozens of components — such as steering wheel, cluster and infotainment — that developers must create and integrate with the rest of the car.

These processes are incredibly time- and resource-intensive — there are countless configurations, and chosen designs must be built out and tested prior to production. Vehicle designers must collaborate on various layouts, which must then be validated and approved. Customers must travel to dealerships to experience various options, and the ability to test features depends on a store’s inventory at any given time.

In the virtual world, developers can easily design vehicles, and car buyers can seamlessly test them, leading to an optimal experience on both ends of the production pipeline.
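The "countless configurations" are easy to quantify: option counts multiply. A toy example with hypothetical option sets shows how even five cockpit choices produce 162 combinations, and how compatibility rules — checked in simulation rather than on physical prototypes — prune the space:

```python
from itertools import product

# Hypothetical option sets for a single cockpit program (illustration only)
options = {
    "trim":         ["base", "sport", "luxury"],
    "cluster":      ["analog", "digital-10in", "digital-15in"],
    "infotainment": ["standard", "premium"],
    "steering":     ["manual", "power", "heated"],
    "interior":     ["cloth", "leather", "vegan-leather"],
}

# every combination of one choice per option: 3*3*2*3*3 = 162
configs = list(product(*options.values()))

def valid(cfg):
    _, _, infotainment, steering, _ = cfg
    # hypothetical compatibility rule: heated wheel requires premium infotainment
    return not (steering == "heated" and infotainment == "standard")

buildable = [c for c in configs if valid(c)]
print(len(configs), len(buildable))   # 162 total, 135 pass the rule
```

Each additional option multiplies the total again, which is why exhaustively building and testing physical prototypes quickly becomes impractical while simulated validation scales.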

Design and Collaboration

Automakers operate design centers around the world, tapping into expertise from North America, Europe, Asia and other automotive hubs. Working on user experience concepts across these locations requires frequent international travel and close coordination.

With DRIVE Sim, designers and engineers anywhere in the world can work together to develop the cockpit experience, without having to leave their desks.

Design teams can also save time and valuable resources by testing concepts in the virtual world, without having to wait for physical prototypes. Decision-makers can review designs and ensure they meet relevant safety standards in DRIVE Sim before sending them to production.

Transforming the Customer Experience

The benefits of in-vehicle simulation extend far beyond the design phase.

Consumers are increasingly expecting full-service digital retail experiences. More than 60% of shoppers want to conduct more of the car-buying process online compared to the last time they bought a vehicle, while more than 75% are open to buying a car entirely online, according to an Autotrader survey.

The same tools used to design the vehicle can help meet these rising consumer expectations.

With DRIVE Sim, car buyers can configure and test the car from the comfort of their homes. Customers can see all potential options and combinations of vehicle features at the push of a button and take their dream car for a virtual spin — no lengthy trips to the dealership required.

From concept design to customer experience, DRIVE Sim is easing the process and opening up new ways to design and enjoy intelligent vehicles.
