Creativity Redefined: New GeForce RTX 40 Series GPUs and NVIDIA Studio Updates Accelerate AI Revolution

Content creation is booming at an unprecedented rate.

Whether it’s a 3D artist sculpting a beautiful piece of art or an aspiring influencer editing their next hit TikTok, more than 110 million professional and hobbyist artists worldwide are creating content on laptops and desktops.

NVIDIA Studio is meeting the moment with new GeForce RTX 40 Series GPUs and 110 RTX-accelerated apps, plus the Studio suite of software, validated systems, dedicated Studio Drivers for creators, and software development kits — all to help artists create at the speed of their imagination.

On display during today’s NVIDIA GTC keynote, new GeForce RTX 40 Series GPUs and incredible AI tools — powered by the ultra-efficient NVIDIA Ada Lovelace architecture — are making content creation faster and easier than ever.

The new GeForce RTX 4090 brings a massive boost in performance, third-generation RT Cores, fourth-generation Tensor Cores, an eighth-generation NVIDIA Dual AV1 Encoder and 24GB of Micron GDDR6X memory capable of reaching 1TB/s bandwidth. The GeForce RTX 4090 is up to 2x faster than the GeForce RTX 3090 Ti in 3D rendering, AI and video exports.

The GeForce RTX 4080 comes in two variants — 16GB and 12GB — so creators can choose the optimal memory capacity based on their projects’ needs. The GeForce RTX 4080 16GB is up to 1.5x faster than the RTX 3080 Ti.

The keynote kicked off with a breathtaking demo, NVIDIA Racer RTX, built in NVIDIA Omniverse, an open platform for virtual collaboration and real-time photorealistic simulation. The demo showcases the latest NVIDIA technologies with real-time full ray tracing — in 4K resolution at 60 frames per second (FPS), running with new DLSS 3 technology — and hyperrealistic physics.

It’s a blueprint for the next generation of content creation and game development, where worlds are no longer prebaked, but physically accurate, full simulations.

Deep dive into the making of Racer RTX in this special edition of the In the NVIDIA Studio blog series.

In addition to benefits for 3D creators, we’ve introduced near-magical RTX and AI tools for game modders — the creators of the PC gaming community — with the Omniverse application RTX Remix, which has been used to turn RTX ON in Portal with RTX, free downloadable content for Valve’s classic hit, Portal.

Video editors and livestreamers are getting a massive boost, too. New dual encoders cut video export times nearly in half. Live streamers get encoding benefits with the eighth-generation NVIDIA Encoder, including support for AV1 encoding.

Groundbreaking AI technologies, like image generators and new video-editing tools in DaVinci Resolve, are ushering in a new wave of creativity. Beyond-fast GeForce RTX 4090 and 4080 graphics cards will power the next step in the AI revolution, delivering up to a 2x increase in AI performance over the previous generation.

A New Paradigm for 3D

GeForce RTX 40 Series GPUs and DLSS 3 deliver big boosts in performance for 3D artists, so they can create in fully ray-traced environments with accurate physics and realistic materials all in real time, without proxies.

DLSS 3 uses AI-powered fourth-generation RTX Tensor Cores, and a new Optical Flow Accelerator on GeForce RTX 40 Series GPUs, to generate additional frames and dramatically increase FPS. This improves smoothness and speeds up movement in the viewport for those working in 3D applications such as NVIDIA Omniverse, Unity and Unreal Engine 5.

Performance testing conducted by NVIDIA in September 2022 with desktops equipped with Intel Core i9-12900K with UHD 770, 64 GB RAM. NVIDIA Driver 521.58, Windows 11. Autodesk Maya with Autodesk Arnold 2022 renderer performance measures render time of the NVIDIA SOL 3D model. Blender 2.93 measures render time of various scenes using Blender OpenData benchmark, with the OptiX render engine. Render time of various scenes with Redshift version 3.0.45.

NVIDIA Omniverse, included in the NVIDIA Studio software suite, takes creativity and collaboration even further. A new Omniverse application, RTX Remix, is putting powerful new tools in the hands of game modders.

Modders — the millions of creators in the gaming world who drive billions of game-mod downloads annually — can use the app to remaster a large library of compatible DirectX 8 and 9 titles, including one of the world’s most modded games, The Elder Scrolls III: Morrowind. As a test, NVIDIA artists updated a Morrowind scene with stunning ray tracing, DLSS 3 and enhanced assets.

It starts with the magical capture tool. With one click, capture geometry, materials, lighting and cameras in the Universal Scene Description format. AI texture tools bring assets up to date, with super resolution and physically based materials. Assets are easily customizable in real time using Omniverse Connectors for Autodesk Maya, Blender and more.

RTX Remix enables artists to easily make incredible mods with ray tracing and DLSS for classic games.

Modders can collaborate in Omniverse connected apps and view changes in RTX Remix to replace assets throughout entire games. RTX Remix features a state-of-the-art ray tracer, DLSS 3 and more, making it easy to reimagine classics with incredible graphics.

RTX mods work alongside existing mods, meaning a large breadth of content on sites like Nexus Mods is ready to be updated with dazzling RTX. Sign up to be notified when RTX Remix is available.

The robust capabilities of this new modding platform were used to create Portal with RTX.

Portal fans can wishlist the Portal with RTX downloadable content on Steam and experience the first-ever RTX Remix-modded game in November.

Fast Forward to the Future of Video Production

Video production is getting a significant boost with GeForce RTX 40 Series GPUs. The feeling of being stuck on pause while waiting for videos to export is dramatically reduced by the GeForce RTX 40 Series’ new dual encoders, which slash export times nearly in half.

The dual encoders can work in tandem, dividing work automatically between them to double output. They’re also capable of recording up to 8K, 60 FPS content in real time via GeForce Experience and OBS Studio to make stunning gameplay videos.

Blackmagic Design’s DaVinci Resolve, the popular Voukoder plugin for Adobe Premiere Pro, and Jianying — the top video-editing app in China — are all enabling AV1 support, as well as dual-encoder support through encode presets. Expect dual-encoder and AV1 availability in these apps in October. We’re also working with the popular video-effects app Notch to enable AV1, and with Topaz to enable AV1 and the dual encoder.

AI tools are changing the speed at which video work gets done. Professionals can now automate tedious tasks, while aspiring filmmakers can add stylish effects with ease. Rotoscoping — the process of highlighting a part of motion footage, typically done frame by frame — can now be done nearly instantaneously with the AI-powered “Object Select Mask” tool in Blackmagic Design’s DaVinci Resolve. With GeForce RTX 40 Series GPUs, this feature is 70% faster than with the previous generation.

“The new GeForce RTX 40 Series GPUs are going to supercharge the speed at which our users are able to produce video through the power of AI and dual encoding — completing their work in a fraction of the time,” said Rohit Gupta, director of software development at Blackmagic Design.

Content creators using GeForce RTX 40 Series GPUs also benefit from speedups to existing integrations in top video-editing apps. GPU-accelerated effects and decoding save immeasurable time by enabling work with ultra-high-resolution RAW footage in real time in REDCINE-X PRO, DaVinci Resolve and Adobe Premiere Pro, without the need for proxies.

AV1 Brings Encoding Benefits to Livestreamers and More

The open video encoding format AV1 is the biggest leap in encoding since H.264 was released nearly two decades ago. The GeForce RTX 40 Series features the eighth-generation NVIDIA video encoder, NVENC for short, now with support for AV1.

Performance testing conducted by NVIDIA in September 2022, using desktops equipped with Intel Core i9-12900K with UHD 770, 32GB RAM. NVIDIA Driver 521.58, Windows 11. Encoding quality is measured as BD-SNR using seventh-generation NVENC with H.264, and eighth-generation NVENC with AV1, both at the maximum-quality P7 preset.

For livestreamers, the new AV1 encoder delivers 40% better efficiency. This means livestreams will appear as if bandwidth was increased by 40% — a big boost in image quality. AV1 also adds support for advanced features like high dynamic range.

NVIDIA collaborated with OBS Studio to add AV1 — on top of the recently released HEVC and HDR support — within its next software release, expected in October. OBS is also optimizing encoding pipelines to reduce overhead by 35% for all NVIDIA GPUs. The new release will additionally feature updated NVIDIA Broadcast effects, including noise and room echo removal, as well as improvements to virtual background.

We’ve also worked with Discord to enable end-to-end livestreams with AV1. In an update releasing later this year, Discord will let its users tap AV1 to dramatically improve screen sharing, whether for gameplay, schoolwork or hangouts with friends.

To make the deployment of AV1 seamless for developers, NVIDIA is making it available in the NVIDIA Video Codec SDK 12 in October. Developers can also access NVENC AV1 directly through Microsoft Media Foundation, Google Chrome and Chromium, as well as in FFmpeg.
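
For developers who want a concrete starting point, here’s a minimal sketch of driving FFmpeg’s av1_nvenc encoder from Python. It assumes an FFmpeg build with NVENC support and a GPU that exposes the AV1 encoder; the file names and bitrate are placeholders, not a recommended configuration.

```python
# Minimal sketch: export a clip with FFmpeg's NVENC AV1 encoder (av1_nvenc).
# Assumes an FFmpeg build with NVENC support and a GPU with the AV1 encoder;
# file names and bitrate are placeholders.
import subprocess

subprocess.run(
    [
        "ffmpeg", "-y",
        "-i", "input.mp4",        # source clip (placeholder)
        "-c:v", "av1_nvenc",      # hardware AV1 encode on the eighth-gen NVENC
        "-preset", "p7",          # slowest, highest-quality NVENC preset
        "-b:v", "8M",             # illustrative target bitrate
        "-c:a", "copy",           # pass the audio track through unchanged
        "output_av1.mp4",
    ],
    check=True,
)
```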

Benefits to livestreamers go beyond AV1 encoding on GeForce RTX 40 Series GPUs. The SDKs that power NVIDIA Broadcast are available to developers, enabling native feature support for Logitech, Corsair and Elgato devices, or advanced workflows in OBS and Notch software. At GTC, NVIDIA updated and introduced new AI-powered effects:

  • The popular Virtual Background feature now includes temporal information, so random objects in the background will no longer create distractions by flashing in and out. It will be available in the next version of OBS Studio.
  • Face Expression Estimation is a new feature that allows apps to accurately track facial expressions for face meshes, even with the simplest of webcams. It’s hugely beneficial to VTubers and can be found in the next version of VTube Studio.
  • Eye Contact allows creators, like podcasters, to appear as if they’re looking directly at the camera — highly useful for when the user is reading a script or looking away to engage with viewers in the chat window.

The Making of ‘Racer RTX’

To showcase the technological advancements made possible by GeForce RTX 40 Series GPUs, a global team of NVIDIANs, led by creative director Gabriele Leone, created a stunning new technology demo, Racer RTX.

The team recreated city streets in West Los Angeles, turning them into a lifelike radio-controlled car track.

Leone and team set out to one-up the fully playable, physics-based Marbles at Night RTX demo. With improved GPU performance and breakthrough advancements in NVIDIA Omniverse, Racer RTX lets the user explore different sandbox environments, highlighting the amazing 3D worlds that artists are now able to create.

The demo is a look into the next generation of content creation, “where virtually everything is simulated,” Leone said. “Soon, there’s going to be no need to bake lighting — content will be fully simulated, aided by incredibly powerful GeForce RTX 40 Series GPUs.”

The Omniverse real-time editor empowered the artists on the project to create lights, design materials, rig physics, adjust elements and see updates immediately. They moved objects, added new geometry, changed surface types and tweaked physics.

In a traditional rasterized workflow, levels and lighting need to be baked. And in a typical art environment, only one person can work on a level at a time, leading to painstaking iteration that greatly slows the creation process. These challenges were overcome with Omniverse.

Animating behavior is also a complex and manual process for creators. Using NVIDIA MDL-based materials, Leone turned on PhysX in Omniverse, and each surface and object was automatically textured and modeled to behave as it would in real life. Ram a baseball, for example, and it’ll roll away and interact with other objects until it runs out of momentum.

“Racer RTX” offers a glimpse into the future of content creation, where worlds are no longer prebaked.

Working with GeForce RTX 40 Series GPUs and DLSS 3 meant the team could modify its worlds through a fully ray-traced design viewport in 4K at 60 FPS — 4x the FPS of a GeForce RTX 3090 Ti.

And the team is just getting started. The Racer RTX demo will be available for developers and creators to download, explore and tweak in November. Get familiar with Omniverse ahead of the release.

With all these astounding advancements in AI and GPU-accelerated features for the NVIDIA Studio platform, it’s the perfect time for gamers, 3D artists, video editors and livestreamers to upgrade. And battle-tested monthly NVIDIA Studio Drivers help any creator feel like a professional — download the September Driver.

Stay Tuned for the Latest Studio News

Keep up to date on the latest creator news, creative app updates, AI-powered workflows and featured In the NVIDIA Studio artists by visiting the NVIDIA Studio blog.

GeForce RTX 4090 GPUs launch on Wednesday, Oct. 12, followed by the GeForce RTX 4080 graphics cards in November. Visit GeForce.com for further information.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the NVIDIA Studio newsletter.

NVIDIA Medical Edge AI Computing Platform Selected by Top Robotic and Digital Surgery Startups

NVIDIA today introduced the NVIDIA IGX platform for medical edge AI use cases, bringing advanced security and safety to intelligent machines and human-machine collaboration.

IGX is a hardware and software platform that delivers secure, low-latency AI inference to meet the clinical demand for instant insights from a range of devices and sensors for medical applications, including robotic-assisted surgery and patient monitoring.

The IGX platform supports NVIDIA Clara Holoscan, a domain-specific platform that allows medical-device developers to bridge edge, on-premises data center and cloud services. This integration enables the rapid development of new, software-defined devices that bring the latest AI applications directly into the operating room.

Three leading medical-device startups — Activ Surgical, Moon Surgical and Proximie — have selected the combination of NVIDIA Clara Holoscan running on the IGX platform to power their surgical robotics systems. All three are members of NVIDIA Inception, a global program that helps technology startups evolve faster.

They’re among more than 70 medical device companies, medical centers and startups already using Clara Holoscan to advance their efforts to deploy AI applications in clinical settings.

Robotic Surgery Startups Accelerate Time to Market

Activ Surgical has selected NVIDIA Clara Holoscan to accelerate development of its AI and augmented-reality solution for real-time surgical guidance. The Boston-based company’s ActivSight technology allows surgeons to view critical physiological structures and functions, like blood flow, that cannot be seen with the naked eye.

By integrating this information into surgical imaging systems, the company aims to reduce surgical complication rates, improving patient care and safety.

“NVIDIA Clara Holoscan will help us optimize precious engineering resources and go to market faster,” said Tom Calef, chief technology officer at Activ Surgical. “With Clara Holoscan and NVIDIA IGX, we envision that our intraoperative AI solution will transform the collective surgical experience with data-driven insights, helping make world-class surgery accessible for all.”

Paris-based robotic surgery company Moon Surgical is designing Maestro, an accessible, adaptive surgical-assistant robotics system that works with the equipment and workflows that operating rooms already have in place.

“NVIDIA has all the hardware and software figured out, with an optimized architecture and libraries,” said Anne Osdoit, CEO of Moon Surgical. “Clara Holoscan helps us not worry about things we typically spend a lot of time working on in the medical-device development cycle.”

The company has instead been able to focus its engineering resources on AI algorithms and other unique features. Adopting Clara Holoscan saved the company time and resources, helping compress its development timeline.

London-based Proximie is building a telepresence platform to enable real-time, remote surgeon collaboration. Clara Holoscan will allow the company to provide local video processing in the operating room, improving performance for users while maintaining data privacy and lowering cloud-computing costs.

“We are delighted to work with NVIDIA to strengthen the health ecosystem and further our mission to connect operating rooms globally,” said Dr. Nadine Hachach-Haram, founder and CEO of Proximie. “Thanks to this collaboration, we are able to provide the most immersive experience possible and deliver a resilient digital solution, with which operating-room devices all over the world can communicate with each other and capture valuable insights.”

Proximie is already deployed in more than 500 operating rooms around the world, and has recorded tens of thousands of surgical procedures to date.

Medical-Grade Compliance in Edge AI

The NVIDIA IGX platform is powered by NVIDIA IGX Orin, the world’s most powerful, compact and energy-efficient AI supercomputer for medical devices. IGX Orin developer kits will be available early next year.

IGX features industrial-grade components designed for medical certification, making it easier to take medical devices from clinical trials to real-world deployment.

Embedded-computing manufacturers ADLINK, Advantech, Dedicated Computing, Kontron, Leadtek, MBX, Onyx, Portwell, Prodrive Technologies and YUAN will be among the first to build products based on NVIDIA IGX for the medical device industry.

Learn more about the NVIDIA IGX platform in a special address by Kimberly Powell, NVIDIA’s vice president of healthcare, at GTC. Register free for the virtual conference, which runs through Thursday, Sept. 22.

Hear from Activ Surgical and other leading startups in medical devices, medical imaging and biopharma in the GTC panel, “Accelerate Patient-Centric Innovation With Makers and Breakers in Healthcare Life Science.” The GTC session “Take Medical AI from Research to Clinical Production With MONAI and Clara Holoscan” will highlight the latest developments in MONAI and Clara Holoscan.

Watch the GTC keynote address by NVIDIA founder and CEO Jensen Huang.

New NVIDIA IGX Platform Helps Create Safe, Autonomous Factories of the Future

NVIDIA today introduced the IGX edge AI computing platform for secure, safe autonomous systems.

IGX brings together hardware with programmable safety extensions, commercial operating-system support and powerful AI software — enabling organizations to safely and securely deliver AI in support of human-machine collaboration.

The all-in-one platform enables next-level safety, security and perception for use cases in healthcare, as well as in industrial edge AI.

Robots and autonomous systems are used to create “factories of the future,” where humans and robots work side by side, leading to improved efficiency for manufacturing, logistics and other workflows.

Such autonomous machines have built-in functional safety capabilities to ensure intelligent spaces stay clear of collisions and other safety threats.

NVIDIA IGX enhances this functional safety, using AI sensors around the environment to add proactive-safety alerts — which identify safety concerns before an incident occurs — in addition to existing reactive-safety abilities, which mitigate safety threats.

“Safety has long been a top priority for industrial organizations,” said Riccardo Mariani, vice president of industry safety at NVIDIA. “What’s new is that we’re using AI across sensors in a factory to create a centralized view, which can improve safety by providing additional inputs to the intelligent machines and autonomous mobile robots operating in the environment.”

For sensitive settings in which humans and machines collaborate — like factories, distribution centers and warehouses — intelligent systems’ safety and security features are especially critical.

Proactive Safety for the Industrial Edge

Three layers of functional safety exist at the industrial edge.

First, reactive safety is the mitigation of safety threats and events after they occur. For example, if a human walked into a robot’s direct route, the bot might stop, slow or shut down to avoid a collision.

This type of safety mechanism already exists in autonomous machines, per standard regulations. But reactive stops of an intelligent machine or factory line result in unplanned downtime, which for some manufacturers can cost hundreds of thousands of dollars per hour.

Second, proactive safety is the identification of potential safety concerns before breaches happen. NVIDIA IGX is focused on enabling organizations to add this safety layer to intelligent environments, further protecting workers and saving costs. It delivers high-performance, proactive-safety capabilities that are built into its hardware and software.

For example, with IGX-powered proactive safety at play, cameras in the warehouse might see a human heading into the path of a robot. It could signal the bot to change routes and avoid being in the same field of movement, preventing the collision altogether. IGX can also alert employees and other robots in the area, rerouting them to eliminate the risk of a backup across the factory.

And third, predictive safety is the anticipation of future exposure to safety threats based on past performance data. In this case, organizations can use simulation and digital twins to identify patterns of intersections or improve a factory layout to reduce the number of safety incidents.

One of the first companies to use IGX at the edge is Siemens, a technology leader in industrial automation and digitalization, which is working with NVIDIA on a vision for autonomous factories. Siemens is collaborating with NVIDIA to expand its work across industrial computing, including with digital twins and for the industrial metaverse.

Siemens is already adding next-level perception into its edge-based applications through NVIDIA Metropolis. With millions of sensors in factories, Metropolis connects entire fleets of robots and IoT devices to bring AI into industrial environments, making it one of the key application frameworks for edge AI running on top of the IGX platform.

Zero-Trust Security, High Performance and Long-Term Support

In addition to safety requirements, edge AI deployments have unique security needs compared to systems in data centers or the cloud — since humans and machines operate side by side in edge environments.

The IGX platform has a hardened, end-to-end security architecture that starts at the system level and expands through over-the-air updates from the cloud. Advanced networking and a dedicated security controller are paired with NVIDIA Fleet Command, which brings secure edge AI management and orchestration from the cloud.

The first product offering of the IGX platform is NVIDIA IGX Orin, the world’s most powerful, compact and energy-efficient AI system for accelerating the development of autonomous industrial machines and medical devices.

IGX Orin developer kits will be available early next year. Each includes an integrated GPU and CPU for high-performance AI compute, as well as an NVIDIA ConnectX-7 SmartNIC to deliver high-performance networking with ultra-low latency and advanced security.

IGX also brings up to 10 years of full-stack support from NVIDIA and leading partners.

Learn more about IGX and other technology breakthroughs by watching the latest GTC keynote address by NVIDIA founder and CEO Jensen Huang.

NVIDIA Isaac Nova Orin Opens New Era of Innovation for Autonomous Mobile Robots

Next-day packages. New vehicle deliveries. Fresh organic produce. Each of these modern conveniences is accelerated by fleets of mobile robots.

NVIDIA today is announcing updates to Nova Orin — an autonomous mobile robot (AMR) reference platform — that advance its roadmap. We’re releasing details of three reference platform configurations: two use a single Jetson AGX Orin — running the NVIDIA Isaac robotics stack and the GPU-accelerated Robot Operating System (ROS) framework — and one relies on two Orin modules.

The Nova Orin platform is designed to improve reliability and reduce development costs for building and deploying AMRs worldwide.

AMRs are like self-driving cars but for unstructured environments. They don’t need fixed, preprogrammed tracks and are capable of avoiding obstacles. This makes them ideal in logistics for moving items in warehouses, distribution centers and factories, or for applications in hospitality, cleaning, roaming security and last-mile delivery.

For years, AMR manufacturers have been designing robots by sourcing and integrating compute hardware, software and sensors in house. This time-consuming effort demands years of engineering resources, lengthens go-to-market pipelines and distracts from developing domain-specific applications.

Nova Orin offers a better way forward with tested, industrial-grade configurations of sensors, software and GPU-computing capabilities. Tapping into the NVIDIA AI platform frees developers to focus on building their unique software stack of robot applications.

Much is at stake for intralogistics enabled by AMRs across industries, a market expected to increase nearly 6x to $46 billion by 2030, up from $8 billion in 2021, according to estimates from ABI Research.

Designing a Highly Capable, Flexible Reference Architecture 

The Nova Orin reference architecture designs are provided for specific use cases. There is one Orin-based design without safety-certified sensors, and one that includes them, along with a safety programmable logic controller. The third architecture has a dual Orin-based design that depends on vision AI for enabling functional safety.

Sensor support is included for stereo cameras, lidars, ultrasonic sensors and inertial measurement units. The sensors were selected to balance performance, price and reliability for industrial applications. The suite provides the multimodal diversity of coverage required for developing and deploying safe, collaborative AMRs.

The stereo cameras and fisheye cameras are custom designed by NVIDIA in coordination with camera partners. All sensors are calibrated and time synchronized, and come with drivers for reliable data capture. These sensors allow AMRs to detect objects and obstacles across a wide range of situations while also enabling simultaneous localization and mapping (SLAM).

NVIDIA provides two lidar options, one for applications that don’t need sensors certified for functional safety, and the other for those that do. In addition to these 2D lidars, Nova Orin supports 3D lidar for mapping and ground-truth data collection.

Building a Comprehensive AI Platform for OEMs, ISVs

NVIDIA is driving the Nova Orin platform forward with extensive software support in addition to the hardware and integration tools.

The base OS includes drivers and firmware for all the hardware and adaptation tools, as well as design guides for integrating it with robots. Nova can be integrated easily with a ROS-based robot application.
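
As a rough illustration of that ROS integration — not Nova’s actual driver code — a minimal ROS 2 node could subscribe to one of the platform’s camera streams like this; the topic name and queue depth are assumptions for the sketch.

```python
# Illustrative ROS 2 (rclpy) node that consumes a camera stream, as a stand-in for
# how a Nova-based robot application might subscribe to sensor data. The topic name
# is hypothetical; real topic names and QoS settings may differ.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image


class CameraListener(Node):
    def __init__(self):
        super().__init__("camera_listener")
        # "/front_stereo/left/image_raw" is an assumed topic name for illustration.
        self.create_subscription(Image, "/front_stereo/left/image_raw", self.on_image, 10)

    def on_image(self, msg: Image) -> None:
        # Log the frame size; a real application would feed this into perception.
        self.get_logger().info(f"received {msg.width}x{msg.height} frame")


def main() -> None:
    rclpy.init()
    rclpy.spin(CameraListener())
    rclpy.shutdown()


if __name__ == "__main__":
    main()
```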

The sensors will have validated models in Isaac Sim for application development and testing without the need for an actual robot.

The cloud-native data acquisition tools eliminate the arduous task of setting up data pipelines for the vast amount of sensor data needed for training models, debugging and analytics. State-of-the-art GEMs developed for Nova sensors are GPU accelerated with the Jetson Orin platform, providing key building blocks such as visual SLAM, stereo depth estimation, obstacle detection, 3D reconstruction, semantic segmentation and pose estimation.

Nova Orin also addresses the need to quickly create high-fidelity, city-scale 3D maps for indoor environments in the cloud. These generated maps allow robot navigation, fleet planning and simulation. Plus, the maps can be continuously updated using data from the robots.

AMRs That Are Ready for Industries

As robotics systems evolve, the need for secure deployment and management of the critical AI software on board is paramount for future AMRs.

Nova Orin supports secure over-the-air updates, as well as device management and monitoring, to enable easy deployment and reduce the cost of maintenance. Its open, modular design enables developers to use some or all capabilities of the platform and extend it to quickly develop robotics applications.

NVIDIA is working closely with regulatory bodies to develop vision-enabled safety technology to further reduce the cost and improve reliability of AMRs. And we’re providing a software development kit for navigation, so developers can quickly develop applications.

Improving productivity for factories and warehouses will depend on AMRs working safely and efficiently side by side at scale. High levels of autonomy driven by 3D perception from Nova Orin will help drive that revolution.

Learn more about Nova Orin and sign up to be notified of its availability.

On Track: Digitale Schiene Deutschland Building Digital Twin of Rail Network in NVIDIA Omniverse

Deutsche Bahn’s rail network consists of 5,700 stations and 33,000 kilometers of track, making it the largest in Western Europe.

Digitale Schiene Deutschland (Digital Rail for Germany, or DSD), part of Germany’s national railway operator Deutsche Bahn, is working to increase the network’s capacity without building new tracks. It’s striving to create a powerful railway system in which automated trains run safely with less headway between one another and are optimally steered through the network.

In collaboration with NVIDIA, DSD is beginning to build the first country-scale digital twin to fully simulate automatic train operation across an entire network. That means creating a photorealistic and physically accurate emulation of the entire rail system. It will include tracks running through cities and countrysides, and many details from sources such as station platform measurements and vehicle sensors.

Using the AI-enabled digital twin created with NVIDIA Omniverse, DSD can develop highly capable perception and incident prevention and management systems to optimally detect and react to irregular situations during day-to-day railway operation.

“With NVIDIA technologies, we’re able to begin realizing the vision of a fully automated train network,” said Ruben Schilling, who leads the perception group at DB Netz, part of Deutsche Bahn. The envisioned future railway system improves the capacity, quality and efficiency of the network.

This is the basis for satisfied passengers and cargo customers, leading to more traffic on the tracks and thereby reducing the carbon footprint of the mobility sector.

Data, Data and More Data

Creating a digital twin at such a large scale is a massive undertaking. It requires a custom-built 3D pipeline that connects computer-aided design datasets — built, for example, within the Siemens JT ecosystem — with DSD’s high-definition 3D maps and various simulation tools. Using the Universal Scene Description 3D framework, DSD can connect and combine data sources into a single shared virtual model.
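
As a simplified sketch of how USD composition can stitch independent data sources into one shared model — the file, layer and prim names below are hypothetical stand-ins, not DSD’s actual assets:

```python
# Simplified sketch of combining separate data sources into one shared model with
# USD composition (pxr API). The layer, file and prim names are hypothetical
# stand-ins for CAD exports, HD map data and other sources.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("network_twin.usda")
UsdGeom.Xform.Define(stage, "/Network")

# Pull in a track-map layer produced by another tool (hypothetical file).
stage.GetRootLayer().subLayerPaths.append("hd_map_tracks.usd")

# Reference a station model converted from a CAD dataset (hypothetical file).
station = stage.DefinePrim("/Network/Station_001")
station.GetReferences().AddReference("cad_exports/station_001.usd")

stage.GetRootLayer().Save()
```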

With its network perfectly synchronized with the real world, DSD can run optimization tests and “what if” scenarios to test and validate changes in the railway system, such as reactions to unforeseen situations.

Running on NVIDIA OVX — the computing system built for Omniverse simulations — DSD will be able to operate a persistent simulation that is regularly refined by streaming data updates from the physical world.

Watch the demo to see the digital twin in action.

Future computer vision-powered systems could continually perform route observation and incident recognition, automatically warning of and reacting to potential hazards.

The AI sensor models will be trained and optimized with a combination of real-world and synthetic data, some of which will be generated by the Omniverse Replicator software development kit framework. This will ensure models can perceive, plan and act when faced with everyday and unexpected scenarios.

The Future of Rail

With its pioneering approach to rail network optimization, DSD is contributing to the future of Europe’s rail system and industry development. Sharing its data pool across countries allows for continuous improvement and deployment across future vehicles, resulting in the highest possible quality while reducing costs.

Watch the GTC keynote on demand to see all of NVIDIA’s latest announcements, and register free for the conference, running through Thursday, Sept. 22, to explore how digital twins are transforming industries.

Reinventing Retail: Lowe’s Teams With NVIDIA and Magic Leap to Create Interactive Store Digital Twins

With tens of millions of weekly transactions across its more than 2,000 stores, Lowe’s helps customers achieve their home-improvement goals. Now, the Fortune 50 retailer is experimenting with high-tech methods to elevate both the associate and customer experience.

Using NVIDIA Omniverse Enterprise to visualize and interact with a store’s digital data, Lowe’s is testing digital twins in Mill Creek, Wash., and Charlotte, N.C. Its ultimate goal is to empower its retail associates to better serve customers, collaborate with one another in new ways and optimize store operations.

“At Lowe’s, we are always looking for ways to reimagine store operations and remove friction for our customers,” said Seemantini Godbole, executive vice president and chief digital and information officer at Lowe’s. “With NVIDIA Omniverse, we’re pulling data together in ways that have never been possible, giving our associates superpowers.”

Augmented Reality Restocking and ‘X-Ray Vision’

With its interactive digital twin, Lowe’s is exploring a variety of novel augmented reality use cases, including reconfiguring layouts, restocking support, real-time collaboration and what it calls “X-ray vision.”

Wearing a Magic Leap 2 AR headset, store associates can interact with the digital twin. This AR experience helps an associate compare what a store shelf should look like with what it actually looks like, and ensure it’s stocked with the right products in the right configurations.

And this isn’t just a single-player activity. Store associates on the ground can communicate and collaborate with centralized store planners via AR. For example, if a store associate notices an improvement that could be made to a proposed planogram for their store, they can flag it on the digital twin with an AR “sticky note.”

Lastly, a benefit of the digital twin and Magic Leap 2 headset is the ability to explore “X-ray vision.” Traditionally, a store associate might need to climb a ladder to scan or read small labels on cardboard boxes held in a store’s top stock. With an AR headset and the digital twin, the associate could look up at a partially obscured cardboard box from ground level, and, thanks to computer vision and Lowe’s inventory application programming interfaces, “see” what’s inside via an AR overlay.

Store Data Visualization and Simulation

Home-improvement retail is a tactile business. And when making decisions about how to create a new store display, a common way for retailers to see what works is to build a physical prototype, put it out into a brick-and-mortar store and examine how customers react.

With NVIDIA Omniverse and AI, Lowe’s is exploring more efficient ways to approach this process.

Just as e-commerce sites gather analytics to optimize the customer shopping experience online, the digital twin enables new ways of viewing sales performance and customer traffic data to optimize the in-store experience. 3D heatmaps and visual indicators that show the physical distance of items frequently bought together can help associates put these objects near each other. Within a 100,000 square-foot location, for example, minimizing the number of steps needed to pick up an item is critical.

Using historical order and product location data, Lowe’s can also use NVIDIA Omniverse to simulate what might happen when a store is set up differently. Using AI avatars created in Lowe’s Innovation Labs, the retailer can simulate how far customers and associates might need to walk to pick up items that are often bought together.
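
Here’s a toy sketch of the kind of layout question such a simulation answers at far larger scale — every coordinate, item and basket below is invented for illustration and unrelated to Lowe’s actual data:

```python
# Toy example: how far does a shopper walk to collect items that are frequently
# bought together, under two candidate shelf placements? All coordinates, items
# and baskets are invented for illustration.
from itertools import pairwise
from math import dist

layouts = {
    "current":  {"paint": (10, 5), "brushes": (80, 40), "tape": (15, 8)},
    "proposed": {"paint": (10, 5), "brushes": (14, 6),  "tape": (15, 8)},
}
basket = ["paint", "brushes", "tape"]  # items often purchased together

for name, shelves in layouts.items():
    path = [(0, 0)] + [shelves[item] for item in basket]  # start at the entrance
    steps = sum(dist(a, b) for a, b in pairwise(path))
    print(f"{name} layout: ~{steps:.0f} distance units walked")
```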

NVIDIA Omniverse allows for hundreds of simulations to be run in a fraction of the time that it takes to build a physical store display, Godbole said.

Expanding Into the Metaverse

Lowe’s also announced today at NVIDIA GTC that it will soon make the over 600 photorealistic 3D product assets from its home-improvement library free for other Omniverse creators to use in their virtual worlds. All of these products will be available in the Universal Scene Description format on which Omniverse is built, and can be used in any metaverse created by developers using NVIDIA Omniverse Enterprise.

For Lowe’s, the future of home improvement is one in which AI, digital twins and mixed reality play a part in the daily lives of its associates, Godbole said. With NVIDIA Omniverse, the retailer is taking steps to build this future – and there’s a lot more to come as it tests new strategies.

Join a GTC panel discussion on Wednesday, Sept. 21, with Lowe’s Innovation Labs VP Cheryl Friedman and Senior Director of Creative Technology Mason Sheffield, who will discuss how Lowe’s is using AI and NVIDIA Omniverse to make the home-improvement retail experience even better.

Watch the GTC keynote on demand to see all of NVIDIA’s latest announcements, and register free for the conference — running through Thursday, Sept. 22 — to explore how digital twins are transforming industries.

Experience the Future of Vehicle Infotainment: NVIDIA DRIVE Concierge Brings Customized AI to Every Seat

With NVIDIA DRIVE, in-vehicle infotainment, or IVI, is so much more than just giving directions and playing music.

NVIDIA founder and CEO Jensen Huang demonstrated the capabilities of a truly intelligent IVI experience during today’s GTC keynote. Using centralized, high-performance compute, the NVIDIA DRIVE Concierge platform spans traditional cockpit and cluster capabilities, as well as personalized, AI-powered safety, convenience and entertainment features for every occupant.

Drivers in the U.S. spend an average of nearly 450 hours in their car every year. With just a traditional cockpit and infotainment display, those hours can seem even longer.

DRIVE Concierge makes time in vehicles more enjoyable, convenient and safe, extending intelligent features to every passenger using the DRIVE AGX compute platform, DRIVE IX software stack and Omniverse Avatar Cloud Engine (ACE).

These capabilities include crystal-clear graphics and visualizations in the cockpit and cluster, intelligent digital assistants, driver and occupant monitoring, and streaming content such as games and movies.

With DRIVE Concierge, every passenger can enjoy their own intelligent experience.

Cockpit Capabilities

Running on the cross-domain DRIVE platform, DRIVE Concierge can virtualize and host multiple virtual machines on a single chip — rather than across distributed computers — for streamlined development.

With this centralized architecture, DRIVE Concierge seamlessly orchestrates driver information, cockpit and infotainment functions. It supports the Android Automotive operating system, so automakers can easily customize and scale their IVI offerings.

And digital cockpit and cluster features are just the beginning. DRIVE Concierge extends this premium functionality to the entire vehicle, with a world-class confidence view, video-conferencing capabilities, digital assistants, gaming and more.

Visualizing Intelligence

Speed, fuel range and distance traveled are key data for human drivers to be aware of. When AI is at the wheel, however, a detailed view of the vehicle’s perception and planning layers is also crucial.

DRIVE Concierge is tightly integrated with the DRIVE Chauffeur platform to provide high-quality, 360-degree, 4D visualization with low latency. Drivers and passengers can always see what’s in the mind of the vehicle’s AI, with beautiful 3D graphics.

This visualization is critical to building trust between the autonomous vehicle and its passengers, so occupants can be confident in the AV system’s perception and planned path.

How May AI Help You?

In addition to revolutionizing driving, AI is creating a more intelligent vehicle interior with personalized digital assistants.

Omniverse ACE is a collection of cloud-based AI models and services for developers to easily build, customize and deploy interactive avatars.

With ACE, AV developers can create in-vehicle assistants that are easily customizable with speech AI, computer vision, natural language understanding, recommendation engines and simulation technologies.

These avatars can help make recommendations, book reservations, access vehicle controls and provide alerts for situations such as when a valuable item is left behind.

Game On

With software-defined capabilities, cars are becoming living spaces, complete with the same entertainment available at home.

NVIDIA DRIVE Concierge lets passengers watch videos and experience high-performance gaming wherever they go. Users can choose from their favorite apps and stream videos and games on any vehicle screen.

By using the NVIDIA GeForce NOW cloud gaming service, passengers can access more than 1,400 titles without the need for downloads, benefitting from automatic updates and unlimited cloud storage.

Safety and Security

Intelligent interiors provide an added layer of safety to vehicles, in addition to convenience and entertainment.

DRIVE Concierge uses interior sensors and dedicated deep neural networks for driver monitoring, which ensures attention is on the road in situations where the human is in control.

It can also perform passenger monitoring to make sure that occupants are safe and no precious cargo is left behind.

Using NVIDIA DRIVE Sim on Omniverse, developers can collaborate to design passenger interactions with such cutting-edge features in the vehicle.

By tapping into NVIDIA’s heritage of infotainment technology, DRIVE Concierge is revolutionizing the future of in-vehicle experiences.

NVIDIA DRIVE Thor Strikes AI Performance Balance, Uniting AV and Cockpit on a Single Computer

The next generation of autonomous vehicle computing is improving performance and efficiency at the speed of light.

During today’s GTC keynote, NVIDIA founder and CEO Jensen Huang unveiled DRIVE Thor, a superchip of epic proportions. The automotive-grade system-on-a-chip (SoC) is built on the latest CPU and GPU advances to deliver 2,000 teraflops of performance while reducing overall system costs.

DRIVE Thor succeeds NVIDIA DRIVE Orin in the company’s product lineup, incorporating the newest compute technology to accelerate industry deployment of intelligent-vehicle technology, targeting automakers’ 2025 models.

DRIVE Thor is the next generation in the NVIDIA AI compute roadmap.

Geely-owned premium EV maker ZEEKR will be the first customer for the next-generation platform, with production starting in 2025.

DRIVE Thor unifies traditionally distributed functions in vehicles — including digital cluster, infotainment, parking and assisted driving — for greater efficiency in development and faster software iteration.

Manufacturers can configure the DRIVE Thor superchip in multiple ways. They can dedicate all of the platform’s 2,000 teraflops to the autonomous driving pipeline, or use a portion for in-cabin AI and infotainment and another portion for driver assistance.

Like the current-generation NVIDIA DRIVE Orin, DRIVE Thor uses the productivity of the NVIDIA DRIVE software development kit, is designed to be ASIL-D functionally safe, and is built on a scalable architecture, so developers can seamlessly port their past software development to the latest platform.

Lightning Fast

In addition to raw performance, DRIVE Thor delivers an incredible leap in deep neural network accuracy.

DRIVE Thor marks the first inclusion of a transformer engine in the AV platform family. The transformer engine is a new component of the NVIDIA GPU Tensor Core. Transformer networks process video data as a single perception frame, enabling the compute platform to process more data over time.

With 8-bit floating point (FP8) precision, the SoC introduces a new data type for automotive. Traditionally, AV developers see a loss in accuracy when moving from 32-bit floating point to 8-bit integer data formats. FP8 precision eases this transition, making it possible for developers to transfer data types without sacrificing accuracy.
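
To make the FP8 idea concrete, here’s a simplified, illustrative rounding function for an E4M3-style 8-bit float (ignoring subnormals and NaN encoding). It isn’t NVIDIA’s transformer engine — just a sketch of how a floating-point 8-bit format preserves dynamic range that a fixed-point INT8 format cannot:

```python
# Simplified, illustrative rounding to an E4M3-style 8-bit float. Ignores subnormals
# and NaN encoding; this is only a sketch, not the transformer engine's behavior.
import math

E4M3_MAX = 448.0          # largest finite E4M3 value (1.75 * 2**8)
E4M3_MIN_NORMAL = 2**-6   # smallest normal E4M3 value

def round_to_e4m3(x: float) -> float:
    if abs(x) < E4M3_MIN_NORMAL:
        return 0.0                           # flush the subnormal range in this sketch
    sign = math.copysign(1.0, x)
    m, e = math.frexp(abs(x))                # abs(x) = m * 2**e, with m in [0.5, 1)
    m = round(m * 16) / 16                   # keep 3 stored fraction bits
    return sign * min(m * 2.0**e, E4M3_MAX)  # saturate at the format maximum

for v in (0.07, 3.14159, 250.0, 1000.0):
    print(f"{v} -> {round_to_e4m3(v)}")
```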

Additionally, DRIVE Thor uses updated ARM Poseidon AE cores, making it one of the highest performance processors in the industry.

Multi-Domain Computing

DRIVE Thor is as efficient as it is powerful.

The SoC is capable of multi-domain computing, meaning it can partition tasks for autonomous driving and in-vehicle infotainment. This multi-compute domain isolation lets concurrent time-critical processes run without interruption. On one computer, the vehicle can simultaneously run Linux, QNX and Android.

Typically, these types of functions are controlled by tens of electronic control units distributed throughout a vehicle. Rather than relying on these distributed ECUs, manufacturers can now consolidate vehicle functions using DRIVE Thor’s ability to isolate specific tasks.

With DRIVE Thor, automakers can consolidate intelligent vehicle functions on a single SoC.

All vehicle displays, sensors and more can connect to this single SoC, simplifying what has been an incredibly complex supply chain for automakers.

Two Is Always Better Than One

If one DRIVE Thor seems incredible, try two.

Customers can use one DRIVE Thor SoC, or they can connect two via the latest NVLink-C2C chip interconnect technology to serve as a monolithic platform that runs a single operating system.

This capability provides automakers with the compute headroom and flexibility to build software-defined vehicles that are continuously upgradeable through secure, over-the-air updates.

Designed with the best of NVIDIA GPU technology, DRIVE Thor is truly an AV SoC of heroic proportions.

HEAVY.AI Delivers Digital Twin for Telco Network Planning and Operations Based on NVIDIA Omniverse

Telecoms began touting the benefits of 5G networks six years ago. Yet the race to deliver ultrafast wireless internet today resembles a contest between the tortoise and the hare, as some mobile network operators struggle with costly and complex network requirements.

Advanced data analytics company HEAVY.AI today unveiled solutions to put carriers on more even footing. Its initial product, HeavyRF, delivers a next-generation network planning and operations tool based on the NVIDIA Omniverse platform for creating digital twins.

“Building out 5G networks globally will cost trillions of dollars over the next decade, and our telco network customers are rightly worried about how much of that is money not well spent,” said Jon Kondo, CEO of HEAVY.AI. “Using HEAVY advanced analytics and NVIDIA Omniverse-based real-time simulations, they’ll see big savings in time and money.”

HEAVY.AI also announced that Charter Communications is collaborating on incorporating the tool into modeling and planning operations for its Spectrum telco network, which has 32 million customers across 41 U.S. states. The collaboration extends HEAVY’s relationship with Charter, building on existing analytics operations and expanding into 5G network planning.

“HEAVY.AI’s new digital twin capabilities give us a way to explore and fine-tune our expanding 5G networks in ways that weren’t possible before,” said Jared Ritter, senior director of analytics and automation at Charter Communications.

Without the digital twin approach, telco operators must either physically place microcell towers in densely populated areas to understand the interaction between radio transmitters, the environment, and humans and devices on the move — or use tools that offer less detail about key factors such as tree density or high-rise interference.

Early deployments of 5G needed 300% more base stations for the same level of coverage offered by the previous generation, called Long Term Evolution (LTE), because of higher spectrum bands. A 5G site will consume 300% more power and cost 4x more than an LTE site if they’re deployed in the same way, according to researcher Analysys Mason.

Those sobering figures are prompting the industry to look for efficiencies. Harnessing GPU-accelerated analytics and real-time geophysical mapping, HEAVY.AI’s digital twin solution allows telcos to test radio frequency (RF) propagation scenarios in seconds, powered by the HeavyRF module. This results in significant time and cost savings, because the base stations and microcells can be more accurately placed and tuned at first installation.
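
For intuition about the baseline math such a tool evaluates at enormous scale, the textbook free-space path loss formula is a useful reference point — real propagation models layer terrain, foliage and building effects on top of it. The frequencies and distance below are illustrative:

```python
# Baseline free-space path loss (FSPL), the textbook starting point that RF planning
# tools refine with terrain, foliage and building effects. Values are illustrative.
from math import log10

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """FSPL (dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44"""
    return 20 * log10(distance_km) + 20 * log10(freq_mhz) + 32.44

# Higher 5G bands lose more over the same 500 m link than a mid-band LTE carrier.
for label, f_mhz in (("1800 MHz (LTE)", 1800.0), ("3500 MHz (5G mid-band)", 3500.0)):
    print(f"{label}: {fspl_db(0.5, f_mhz):.1f} dB")
```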

The HeavyRF module supports telcos’ goals to plan, build and operate new networks more efficiently by tightly integrating key business information such as mobility and parcels data, as well as customer experience data, within RF planning workflows.

Using an RF-synchronized digital twin would enable planners at Charter Communications to optimize capacity and coverage, plus interactively see how changes in deployment patterns translate into customer acquisition and retention at the household level.

The goal is to use machine learning and big data pipelines to continuously mirror existing real-world conditions.

The digital twin will use the parallel computing capabilities of modern GPUs for visual simulation, as well as to generate physical simulations of RF signals using real-time RTX ray tracing, powered by NVIDIA Omniverse’s RTX Renderer.

For telcos, it’s not just about investing in traditional networks. With the rise of AI applications and services, these companies seek to lay the foundation for 5G-enabled devices, autonomous vehicles, appliances, robots and city infrastructure.

Watch the GTC keynote on demand to see all of NVIDIA’s latest announcements, and register for the conference — running through Thursday, Sept. 22 — to explore how digital twins are transforming industries.

Reconstructing the Real World in DRIVE Sim With AI

Autonomous vehicle simulation poses two challenges: generating a world with enough detail and realism that the AI driver perceives the simulation as real, and creating simulations at a large enough scale to cover all the cases on which the AI driver needs to be fully trained and tested.

To address these challenges, NVIDIA researchers have created new AI-based tools to build simulations directly from real-world data. NVIDIA founder and CEO Jensen Huang previewed the breakthrough during the GTC keynote.

This research includes award-winning work first published at SIGGRAPH, a computer graphics conference held last month.

Neural Reconstruction Engine

The Neural Reconstruction Engine is a new AI toolset for the NVIDIA DRIVE Sim simulation platform that uses multiple AI networks to turn recorded video data into simulation.

The new pipeline uses AI to automatically extract the key components needed for simulation, including the environment, 3D assets and scenarios. These pieces are then reconstructed into simulation scenes that have the realism of data recordings, but are fully reactive and can be manipulated as needed. Achieving this level of detail and diversity by hand is costly, time consuming and not scalable.

Environments and Assets

A simulation needs an environment in which to operate. The AI pipeline converts 2D video data from a real-world drive to a dynamic, 3D digital twin environment that can be loaded into DRIVE Sim.

A 3D simulation environment generated from recorded driving data using AI.

The DRIVE Sim AI pipeline follows a similar process to reconstruct other 3D assets. Engineers can use the assets to reconstruct the current scene or place them in a larger library of assets to be used in any simulation.

Using the asset-harvesting pipeline is key to growing the DRIVE Sim library and ensuring it matches the diversity and distribution of the real world.

Assets can be harvested from real-world data, turned into 3D objects and reused in other scenes. Here, the tow truck is reconstructed from the scene on the left and used in a different simulation shown on the right.

Scenarios

Scenarios are the events that take place during a simulation in an environment combined with assets.

The Neural Reconstruction Engine assigns AI-based behaviors to the actors in the scene, so that when presented with the original events, they behave precisely as they did in the real drive. However, since they have an AI behavior model, the figures in the simulation can respond and react to changes by the AV or other scene elements.

Because these scenarios are all occurring in simulation, they can also be manipulated to add new situations. Timing and location of events can be altered. Developers can even incorporate entirely new elements, synthetic or real, to make a scenario more challenging, such as the addition of a child chasing a ball to the scene below.

Synthetic objects can be mixed with real-world scenarios.

Integration Into DRIVE Sim

Once the environment, assets and scenario have been extracted, they’re reassembled in DRIVE Sim to create a 3D simulation of the recorded scene or mixed with other assets to create a completely new scene.

DRIVE Sim provides the tools for developers to adjust dynamic and static objects, the vehicle’s path, and the location, orientation and parameters of the vehicle sensors.

The same scenes in DRIVE Sim are also used to generate pre-labeled synthetic data to train perception systems. Randomizations are applied on top of recreated scenes to add diversity to the training data. Building scenes out of real-world data greatly reduces the sim-to-real gap.

Reconstructed scenes can be augmented with synthetic assets and used to produce new data with ground truth for training AV perception systems.

The ability to mix and match simulation formats is a significant advantage in comprehensively testing and validating AVs at scale. Engineers can manipulate events in a world that is responsive and matches their needs precisely.

The Neural Reconstruction Engine is the result of work by the research team at NVIDIA, and will be integrated into future releases of DRIVE Sim. This breakthrough will enable developers to take advantage of both physics-based and neural-driven simulation on the same cloud-based platform.
