Fish-Farming Startup Casts AI to Make Aquaculture More Efficient, Sustainable

As a marine biology student, Josef Melchner always dreamed of spending his days cruising the oceans to find dolphins, whales and fish — but also “wanted to do something practical, something that would benefit the world,” he said. When it came time to choose a career, he dove head first into aquaculture.

He’s now CEO of GoSmart, an Israel-based company using AI and machine learning to make fish farming more efficient and sustainable.

A member of the NVIDIA Metropolis vision AI partner ecosystem and the NVIDIA Inception program for cutting-edge startups, GoSmart offers fully autonomous, energy-efficient systems — about the size of a soda bottle — that can be attached to aquaculture cages, ponds or tanks.

Powered by the NVIDIA Jetson platform for edge AI, these systems analyze the average weight and population distribution of the fish within the environment, as well as its temperature and oxygen levels.

This information is then provided to users through GoSmart’s software-as-a-service, which helps fish farmers more accurately and efficiently determine how much — and when best — to feed their fish and harvest them, all in real time.
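
GoSmart hasn’t published its decision logic, but the flow described here (edge units report biomass and water conditions; the software layer turns those readings into a feeding recommendation) can be sketched roughly as follows. Every name, ration rate and threshold below is an illustrative assumption, not GoSmart’s actual model:

```python
from dataclasses import dataclass

@dataclass
class CageReading:
    """One telemetry snapshot from a submerged unit (fields illustrative)."""
    avg_weight_g: float   # average fish weight estimated from camera frames
    population: int       # estimated fish count in the cage
    temperature_c: float  # water temperature
    oxygen_mg_l: float    # dissolved oxygen

def recommend_feed_kg(r: CageReading, ration_pct: float = 0.015) -> float:
    """Suggest a daily feed amount as a fraction of estimated biomass,
    trimmed when low oxygen or cold water suggests reduced appetite."""
    biomass_kg = r.avg_weight_g * r.population / 1000.0
    feed = biomass_kg * ration_pct
    if r.oxygen_mg_l < 5.0 or r.temperature_c < 8.0:  # made-up thresholds
        feed *= 0.5  # fish feed less in low-oxygen or cold conditions
    return round(feed, 1)

print(recommend_feed_kg(CageReading(450.0, 20000, 14.2, 6.8)))  # 135.0
```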

“The parameters that GoSmart systems analyze are crucial for the fish feed regime,” Melchner said. “Managing the right levels of fish feed saves a lot of money for the farmers and reduces organic matter from excessive debris in the aqua environment.”

GoSmart systems have been deployed by Skretting, one of the world’s largest fish feed producers, as part of its initiative to sustainably expand production pipelines across eight countries in southern Europe and provide farmers with personalized, digitalized information.

Precision Farming for Sustainability

Founded in 2020, GoSmart focuses on fish farming because its mission is helping the environment.

“The world faces a lack of protein, and yet marine protein is often acquired the way it’s always been, with boats going out with fishing nets and long lines,” Melchner said. “While many alternative sources of protein — like cattle, pigs and chicken — are almost always farmed, about half of marine production still comes from wildlife.”

Overfishing in this manner negatively impacts the planet.

“It’s a critical issue that could affect us all eventually,” Melchner said. “Algae is one of the largest carbon sinks in the world. It consumes carbon from the atmosphere and releases oxygen, and overfishing impacts levels of algae in the ocean.”

Understanding this is what led Melchner to devote his life’s work to aquaculture, he said.

The GoSmart system uses lithium-ion batteries charged by solar panels, and is equipped with its own power-management software that enables it to autonomously enter sleep mode, shut down, wake up and conduct its work as appropriate.
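
The article doesn’t detail the power policy, but autonomous sleep/wake behavior of this kind reduces to a small state machine. A minimal sketch, with the battery threshold and daylight hours as pure guesses:

```python
import datetime

def next_power_state(battery_pct: float, hour: int) -> str:
    """Pick the unit's next state from charge level and time of day."""
    if battery_pct < 10:
        return "shutdown"    # protect the lithium-ion pack
    if 6 <= hour <= 18:      # daylight: solar charging, active sampling
        return "active"
    return "sleep"           # conserve charge overnight

now = datetime.datetime.now()
print(next_power_state(battery_pct=42.0, hour=now.hour))
```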

Farming Efficiency Boosted With AI

GoSmart systems are built with sensors, cameras and NVIDIA Jetson modules, which enable AI at the edge to analyze the environmental factors that affect fish feeding, growth, health and welfare, as well as the water pollution caused when inefficient or inaccurate operations disperse excess organic matter.

“We wanted to use the best processor for AI with high performance in a system that’s compact, submersible underwater and affordable for fish farmers, which is why we chose the Jetson series,” Melchner said.

GoSmart is now training its systems to analyze fish behavior and disease indicators — adding to current capabilities of determining fish weight, population distribution, temperature and oxygen levels. Since Jetson enables multiple AI algorithms to run in parallel, all of these characteristics can be analyzed simultaneously and in real time.
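
As a rough illustration of that fan-out pattern (not GoSmart’s code), each captured frame can be handed to several independent analyzers at once; on Jetson, the heavy lifting would sit in GPU inference calls that genuinely overlap. The analyzer functions here are placeholders:

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder analyzers standing in for the independent models the article
# mentions: weight, population, behavior and disease indicators.
def estimate_weight(frame):  return {"avg_weight_g": 450.0}
def estimate_count(frame):   return {"population": 19850}
def score_behavior(frame):   return {"activity": "normal"}
def screen_disease(frame):   return {"lesion_score": 0.02}

ANALYZERS = [estimate_weight, estimate_count, score_behavior, screen_disease]

def analyze_frame(frame):
    """Run every analyzer on the same frame concurrently and merge results."""
    with ThreadPoolExecutor(max_workers=len(ANALYZERS)) as pool:
        results = pool.map(lambda fn: fn(frame), ANALYZERS)
    merged = {}
    for r in results:
        merged.update(r)
    return merged

print(analyze_frame(frame=b"...raw camera bytes..."))
```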

The company is also evaluating the powerful new Jetson Orin lineup of modules to take these capabilities to the next level.

To train its AI algorithms, the GoSmart team measured thousands of fish manually before deploying cameras to analyze millions more. “There was a lot of diving and many underwater experiments,” Melchner said.

For high-performance deep learning inference, GoSmart is looking to use the NVIDIA TensorRT software development kit and open-source NVIDIA Triton Inference Server software.
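
Triton serves models over standard HTTP/gRPC endpoints, so client code stays short. A minimal sketch using the official tritonclient package; the model and tensor names are hypothetical, since they depend entirely on how a deployment registers its models:

```python
import numpy as np
import tritonclient.http as httpclient  # pip install tritonclient[http]

client = httpclient.InferenceServerClient(url="localhost:8000")

# "fish_metrics" and its tensor names are placeholders for whatever model
# a deployment actually loads into the server's model repository.
image = np.random.rand(1, 3, 224, 224).astype(np.float32)
inp = httpclient.InferInput("input__0", list(image.shape), "FP32")
inp.set_data_from_numpy(image)
out = httpclient.InferRequestedOutput("output__0")

resp = client.infer(model_name="fish_metrics", inputs=[inp], outputs=[out])
print(resp.as_numpy("output__0").shape)
```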

And as a member of the NVIDIA Metropolis and Inception programs, GoSmart works closely with NVIDIA engineers and is exploring latest-generation technologies. “This will help make our algorithms quicker and more efficient,” Melchner said.

GoSmart could help farmers reduce fish feed by up to 15%, according to Melchner. For some customers, GoSmart technology has shortened fish growth time and subsequent time to market by an entire month.

A Tidal Wave of Possibilities

Melchner predicts that in a few years, aquaculture will look completely different from how it is today.

“Our goal is to have our systems in every cage, every pond, every tank in the world — we want to cover the entire aquaculture industry,” he said.

In addition to integrating AI models that analyze fish behavior and disease, GoSmart is looking to expand its systems and eventually integrate its solution with an autonomous feeding barge that can give fish the exact amount of food they need, exactly when they need it.

Learn more about the NVIDIA Metropolis application framework, developer tools and partner ecosystem.


Technical Artist Builds Great Woolly Mammoth With NVIDIA Omniverse USD Composer This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. 

Keerthan Sathya, a senior technical artist specializing in 3D, emerged trium-elephant In the NVIDIA Studio this week with the incredibly detailed, expertly constructed, jaw-droppingly beautiful animation Tiny Mammoth.

Sathya used a collection of popular 3D apps for the project — including Adobe Substance 3D Modeler, Painter and Autodesk 3ds Max — and completed staging, environment preparation, lighting and rendering in NVIDIA Omniverse with the USD Composer app.

Plus, Marvelous Designer software for making, editing and reusing 3D clothes just launched an NVIDIA Omniverse Connector.

Serving as a bridge, the Universal Scene Description (OpenUSD) framework enables users to import files directly from the Omniverse Nucleus server, and merge and update versions for use in different 3D apps. This saves users time and eliminates difficulties with imports and exports.

Learn how to use the Marvelous Designer Omniverse Connector by watching this tutorial, and find out how OpenUSD can improve artists’ creative workflows. Don’t miss Marvelous Designer’s upcoming community livestream demonstrating a workflow with the new Omniverse Connector on Wednesday, June 14.

One for the (Stone) Ages

A 14-year veteran in the creative community, Bangalore-based Sathya has long been enamored with animals and the concept of extinction. “Animals become extinct for various reasons,” he said. This fact inspired Sathya to create an environment design and tell a unique story using materials, lighting and animation.

Gathering reference material is 3D modeling 101.

“Traditional polygon modeling isn’t my cup of tea,” Sathya admitted. Instead, he used Adobe Substance 3D Modeler to seamlessly sculpt his models in 3D. His NVIDIA Studio HP ZBook laptop with NVIDIA RTX 5000 graphics unlocked GPU-accelerated filters to speed up material creation.

“Sculpting in virtual reality is so much fun, and so fast,” said the artist. “I could finalize models in just a few hours, all while eliminating all those pesky anatomy details!”

He also deployed Substance 3D Modeler’s automatic UV unwrapping feature to generate UV islands once models were imported, making it easier to apply paints, textures and materials.

Sculpting in virtual reality with Adobe Substance 3D Modeler.

Sathya then moved the project to Autodesk 3ds Max, using its retopology tools to automatically optimize the geometry of his high-resolution models into a clean, quad-based mesh. This is useful for removing artifacts and other mesh issues before animation and rigging.

GPU-enabled, RTX-accelerated AI denoising with the default Autodesk Arnold renderer in the viewport allowed for smoother, highly interactive modeling.

Sathya used retopology tools in Autodesk 3ds Max.

“My NVIDIA GPU is an integral part of the artwork I create. Modeling, texturing, staging, painting and lighting is all accelerated by RTX.” — Keerthan Sathya

Adobe Substance 3D Painter was a “game changer” allowing super-fast asset painting, Sathya said. RTX-accelerated light and ambient occlusion baking optimizes assets in mere seconds.

“You don’t necessarily need to create everything from scratch,” Sathya said. “Substance 3D Painter offers a wide range of materials, smart materials and smart masks which I used in my project, along with a whole collection of materials that helped me save a lot of time.”

“You can even paint multiple channels on multiple UDIMs in real time,” said Sathya. This means textures on various models can have different resolutions. For example, 4K-resolution textures can be used for priority details, while 2K resolution can be tapped for less important touches.

Textures were applied quickly and efficiently in Adobe Substance 3D Painter powered by an NVIDIA RTX GPU.

Sathya imported Tiny Mammoth into NVIDIA Omniverse, a platform for developing and building industrial metaverse applications, via the USD Composer app. This is where the artist accomplished staging, environment preparation, lighting and rendering.

“NVIDIA Omniverse is a great platform for artists to connect their desired 3D apps and collaborate. Plus, I really like AI-driven apps and features.” — Keerthan Sathya

Sathya marveled at the ease of setting up the scene in USD Composer. “Just using a few assets — instancing, arranging and kitbashing them to make a huge environment — is so satisfying and efficient,” he said.

Sathya said OpenUSD is “so much more than just a file format.”

“The OpenUSD workflow is great to work with,” he said. “I used OpenUSD pretty much for the whole project: environment assets and textures, foliage, lighting, still camera and adjustment of layers for each shot if necessary.”

OpenUSD files neatly organized.

With each OpenUSD layer and file stacked and authored accordingly, Sathya had the option to plug and play assets and creative workflow stages. Such versatility in 3D creative workflows is enabled by Omniverse and OpenUSD.
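
That plug-and-play quality shows up directly in OpenUSD’s Python API: sublayers can be stacked onto a stage and swapped back out without touching the underlying files. A minimal local sketch (file names are made up; Omniverse adds an omniverse:// resolver for Nucleus-hosted files, but composition works the same way):

```python
from pxr import Sdf, Usd  # pip install usd-core

# Author a tiny override layer, then stack it on a stage non-destructively.
over = Sdf.Layer.CreateNew("lighting_overrides.usda")
over.Save()

stage = Usd.Stage.CreateNew("environment.usda")
root = stage.GetRootLayer()
root.subLayerPaths.append("lighting_overrides.usda")  # plug the layer in
print(list(root.subLayerPaths))

root.subLayerPaths = []  # ...and swap it back out; neither file is modified
stage.Save()
```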

Scene tweaks in Omniverse USD Composer.

The artist heightened realism by painting moss trees with USD Composer’s paint tool. “It was easy to add those tiny details to make my artwork look better,” he said. Sathya then rotated the camera and adjusted the lighting until he met his desired result.

Due to the sheer size of the scene, Sathya used real-time rendering to add animations, a bit of fog and corrections for limited renders. “I like the idea of render passes, where you have complete control of the scene while compositing, but it wasn’t necessary here,” he said.

Sathya exported the scene into Adobe After Effects for post-processing and additional visual effects, using more than 30 GPU-accelerated features and effects to add even more detail.

More visual effects in Adobe After Effects.

The artist reviewed video feedback in Adobe Rush. “It’s more convenient when I’m traveling or on my couch to arrange the shots and do some quick edits on my phone,” he said. Sathya completed advanced edits and final renders in Adobe Premiere Pro.

Exquisite detail with multiple woolly mammoths on a single rock.

“Life is all about contrast,” Sathya said. “I’ve experienced failures and successes, complex and simple, good and bad, many more contrasts, all of which drip into my artwork to make it unique!”

Senior technical artist Keerthan Sathya.

Check out additional details on Tiny Mammoth and view Sathya’s complete portfolio on Behance.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter. 

Get started with NVIDIA Omniverse by downloading the standard license free, or learn how Omniverse Enterprise can connect your team. Developers can get started with Omniverse resources. Stay up to date on the platform by subscribing to the newsletter, and follow NVIDIA Omniverse on Instagram, Medium and Twitter.

For more, join the Omniverse community and check out the Omniverse forums, Discord server, Twitch and YouTube channels.


Microsoft Bing Speeds Ad Delivery With NVIDIA Triton

Jiusheng Chen’s team just got accelerated.

They’re delivering personalized ads to users of Microsoft Bing with 7x throughput at reduced cost, thanks to NVIDIA Triton Inference Server running on NVIDIA A100 Tensor Core GPUs.

It’s an amazing achievement for the principal software engineering manager and his crew.

Tuning a Complex System

Bing’s ad service uses hundreds of models that are constantly evolving. Each must respond to a request within as little as 10 milliseconds, about 10x faster than the blink of an eye.

The latest speedup got its start with two innovations the team delivered to make AI models run faster: Bang and EL-Attention.

Together, they apply sophisticated techniques to do more work in less time with less computer memory. Model training was based on Azure Machine Learning for efficiency.

Flying With NVIDIA A100 MIG

Next, the team upgraded the ad service from NVIDIA T4 to A100 GPUs.

The latter’s Multi-Instance GPU (MIG) feature lets users split one GPU into several instances.

Chen’s team maxed out the MIG feature, transforming one physical A100 into seven independent ones. That let the team reap 7x throughput per GPU with inference responses in 10ms.

Flexible, Easy, Open Software

Triton enabled the shift, in part, because it lets users simultaneously run different runtime software, frameworks and AI models on isolated instances of a single GPU.

The inference software comes in a software container, so it’s easy to deploy. And open-source Triton — also available with enterprise-grade security and support through NVIDIA AI Enterprise — is backed by a community that makes the software better over time.

Accelerating Bing’s ad system with Triton on A100 GPUs is one example of what Chen likes about his job. He gets to witness breakthroughs with AI.

While the scenarios often change, the team’s goal remains the same — creating a win for its users and advertisers.


Accelerating the Accelerator: Scientist Speeds CERN’s HPC With GPUs, AI

Editor’s note: This is part of a series profiling researchers advancing science with high performance computing. 

Maria Girone is expanding the world’s largest network of scientific computers with accelerated computing and AI.

Since 2002, the Ph.D. in particle physics has worked on a grid of systems across 170 sites in more than 40 countries that support CERN’s Large Hadron Collider (LHC), itself poised for a major upgrade.

A high-luminosity version of the giant accelerator (HL-LHC) will produce 10x more proton collisions, spawning exabytes of data a year. That’s an order of magnitude more than it generated in 2012 when two of its experiments uncovered the Higgs boson, a subatomic particle that validated scientists’ understanding of the universe.

The Call of Geneva

Girone loved science from her earliest days growing up in Southern Italy.

“In college, I wanted to learn about the fundamental forces that govern the universe, so I focused on physics,” she said. “I was drawn to CERN because it’s where people from different parts of the world work together with a common passion for science.”

Tucked between Lake Geneva and the Jura mountains, the European Organization for Nuclear Research is a nexus for more than 12,000 physicists.

A map of CERN and the LHC below it on the French-Swiss border. (Image courtesy of CERN)

Its 27-kilometer ring is sometimes called the world’s fastest racetrack because protons careen around it at 99.9999991% the speed of light. Its superconducting magnets operate near absolute zero, creating collisions that are briefly millions of times hotter than the sun.

Opening the Lab Doors

In 2016, Girone was named CTO of CERN openlab, a group that gathers academic and industry researchers to accelerate innovation and tackle future computing challenges. It works closely with NVIDIA through its collaboration with E4 Computer Engineering, a specialist in HPC and AI based in Italy.

In one of her initial acts, Girone organized CERN openlab’s first workshop on AI.

Industry participation was strong, and attendees were enthusiastic about the technology. In their presentations, physicists explained the challenges ahead.

“By the end of the day we realized we were from two different worlds, but people were listening to each other, and enthusiastically coming up with proposals for what to do next,” she said.

A Rising Tide of Physics AI

Today, the number of publications on applying AI across the whole data processing chain in high-energy physics is rising, Girone reports. The work attracts young researchers who see opportunities to solve complex problems with AI, she said.

Meanwhile, researchers are also porting physics software to GPU accelerators and using existing AI programs that run on GPUs.

“This wouldn’t have happened so quickly without the support of NVIDIA working with our researchers to solve problems, answer questions and write articles,” she said. “It’s been extremely important to have people at NVIDIA who appreciate how science needs to evolve in tandem with technology, and how we can make use of acceleration with GPUs.”

Energy efficiency is another priority for Girone’s team.

“We’re working on experiments on a number of projects like porting to lower power architectures, and we look forward to evaluating the next generation of lower power processors,” she said.

Digital Twins and Quantum Computers

To prepare for the HL-LHC, Girone, named head of CERN openlab in March, seeks new ways to accelerate science with machine learning and accelerated computing. Other tools are on the near and far horizons, too.

The group recently won funding to prototype an engine for building digital twins. It will provide services for physicists, as well as researchers in fields from astronomy to environmental science.

A look inside the accelerator. (Image courtesy of CERN)

CERN also launched a collaboration among academic and industry researchers in quantum computing. The technology could advance science and lead to better quantum systems, too.

A Passion for Diversity

In another act of community-making, Girone was among four co-founders of a Swiss chapter of the Women in HPC group. It will help define specific actions to support women in every phase of their careers.

“I’m passionate about creating diverse teams where everyone feels they contribute and belong — it’s not just a checkbox about numbers, you want to realize a feeling of belonging,” she said.

Girone was among thousands of physicists who captured some of that spirit the day CERN announced the Higgs boson discovery.

She recalls getting up at 4 a.m. to queue for a seat in the main auditorium. It couldn’t hold all the researchers and guests who arrived that day, but the joy of accomplishment followed her and others watching the event from a nearby hall.

“I knew the contribution I made,” she said. “I was proud being among the many authors of the paper, and my parents and my kids felt proud, too.”

Check out other profiles in this series.


A New Age: ‘Age of Empires’ Series Joins GeForce NOW, Part of 20 Games Coming in June

The season of hot sun and longer days is here, so stay inside this summer with 20 games joining GeForce NOW in June. Or stream across devices by the pool, from grandma’s house or in the car — whichever way, GeForce NOW has you covered.

Titles from the Age of Empires series are the next Xbox games to roll out to GeForce NOW, giving members plenty to do this summer, especially with the more than 1,600 games in the GeForce NOW library.

Expand Your Empire

From the Stone Age to the cloud.

NVIDIA released the first Xbox games to the cloud last month as part of its ongoing partnership with Microsoft. Now it’s the first to bring a smash hit Xbox series to the cloud with Ensemble Studios’ Age of Empires titles.

Since the first release in 1997, Age of Empires has established itself as one of the longest-running real-time strategy (RTS) series in existence. The critically acclaimed RTS series puts players in control of an entire empire with the goal of expanding and evolving to become a flourishing civilization.

All four of the franchise’s latest Steam versions will join GeForce NOW later this month: Age of Empires: Definitive Edition, Age of Empires II: Definitive Edition, Age of Empires III: Definitive Edition and Age of Empires IV: Anniversary Edition. Each title will also support new content and updates, like upcoming seasons or the recently released “Return of Rome” expansion for Age of Empires II: Definitive Edition.

Members will be able to rule from PC, Mac, Chromebooks and more when the Definitive Editions of these games join the GeForce NOW library later this month. Upgrade to Priority membership to skip the waiting lines and experience extended gaming sessions for those long campaigns. Or go for an Ultimate membership to conquer enemies at up to 4K resolution and up to eight-hour sessions.

Game on the Go

Now available in Europe, the Logitech G Cloud gaming handheld supports GeForce NOW, giving gamers a way to stream their PC library from the cloud. It features a seven-inch, full-HD touchscreen with a 60Hz refresh rate and precision controls to stream over 1,600 games in the GeForce NOW library.

Pick up the device and get one month of Priority membership for free to celebrate the launch, from now until Thursday, June 22.

“Look at You, Hacker…”

Welcome to Citadel Station.

Stay cool as a cucumber in the remake of System Shock, the hit game from Nightdive Studios. This game has everything — first-person shooter, role-playing and action-adventure. Play as a hacker who awakens from a six-month coma to find the space station overrun with hostile mutants and rogue AI, and fight to survive in the depths of space. Explore, use hacker skills and unravel the mysteries of the station while streaming System Shock from the cloud.

In addition, members can look for the following two games this week:

  • System Shock (New release on Steam)
  • Killer Frequency (New release on Steam, June 1)

And here’s what the rest of June looks like:

  • Amnesia: The Bunker (New release on Steam, June 6)
  • Harmony: The Fall of Reverie (New release on Steam, June 8)
  • Dordogne (New release on Steam, June 13)
  • Aliens: Dark Descent (New release on Steam, June 20)
  • Trepang2 (New release on Steam, June 21)
  • Layers of Fear (New release on Steam)
  • Park Beyond (New release on Steam)
  • Tom Clancy’s Rainbow Six Extraction (New release on Steam)
  • Age of Empires: Definitive Edition (Steam)
  • Age of Empires II: Definitive Edition (Steam)
  • Age of Empires III: Definitive Edition (Steam)
  • Age of Empires IV: Anniversary Edition (Steam)
  • Derail Valley (Steam)
  • I Am Fish (Steam)
  • Golf Gang (Steam)
  • Contraband Police (Steam)
  • Bloons TD 6 (Steam)
  • Darkest Dungeon (Steam)
  • Darkest Dungeon II (Steam)

Much Ado About May

In addition to the 16 games announced in May, six more joined the GeForce NOW library.

Conqueror’s Blade didn’t make it in May, so stay tuned to GFN Thursday for any updates.

Finally, before moving forward into the weekend, let’s take things back with our question of the week. Let us know your answer on Twitter or in the comments below.


Digital Renaissance: NVIDIA Neuralangelo Research Reconstructs 3D Scenes

Neuralangelo, a new AI model by NVIDIA Research for 3D reconstruction using neural networks, turns 2D video clips into detailed 3D structures — generating lifelike virtual replicas of buildings, sculptures and other real-world objects.

Like Michelangelo sculpting stunning, lifelike visions from blocks of marble, Neuralangelo generates 3D structures with intricate details and textures. Creative professionals can then import these 3D objects into design applications, editing them further for use in art, video game development, robotics and industrial digital twins.

Neuralangelo’s ability to translate the textures of complex materials — including roof shingles, panes of glass and smooth marble — from 2D videos to 3D assets significantly surpasses prior methods. The high fidelity makes its 3D reconstructions easier for developers and creative professionals to rapidly create usable virtual objects for their projects using footage captured by smartphones.

“The 3D reconstruction capabilities Neuralangelo offers will be a huge benefit to creators, helping them recreate the real world in the digital world,” said Ming-Yu Liu, senior director of research and co-author on the paper. “This tool will eventually enable developers to import detailed objects — whether small statues or massive buildings — into virtual environments for video games or industrial digital twins.”

In a demo, NVIDIA researchers showcased how the model could recreate objects as iconic as Michelangelo’s David and as commonplace as a flatbed truck. Neuralangelo can also reconstruct building interiors and exteriors — demonstrated with a detailed 3D model of the park at NVIDIA’s Bay Area campus.

Neural Rendering Model Sees in 3D

Prior AI models for 3D scene reconstruction have struggled to accurately capture repetitive texture patterns, homogenous colors and strong color variations. Neuralangelo adopts instant neural graphics primitives, the technology behind NVIDIA Instant NeRF, to help capture these finer details.

Using a 2D video of an object or scene filmed from various angles, the model selects several frames that capture different viewpoints — like an artist considering a subject from multiple sides to get a sense of depth, size and shape.

Once it’s determined the camera position of each frame, Neuralangelo’s AI creates a rough 3D representation of the scene, like a sculptor starting to chisel the subject’s shape.

The model then optimizes the render to sharpen the details, just as a sculptor painstakingly hews stone to mimic the texture of fabric or a human figure.

The final result is a 3D object or large-scale scene that can be used in virtual reality applications, digital twins or robotics development.
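
Neuralangelo’s actual method builds on multi-resolution hash grids with numerical-gradient refinements; purely to illustrate the coarse-to-fine idea, the toy sketch below fits a small MLP to the signed distance function of a synthetic sphere, refining with progressively denser samples:

```python
import torch

# Toy stand-in: an MLP instead of a hash grid, a synthetic sphere instead of
# posed video frames. Only the coarse-to-fine schedule mirrors the real thing.
model = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for stage, n_samples in enumerate([256, 1024, 4096]):  # coarse -> fine
    for _ in range(200):
        x = torch.randn(n_samples, 3)
        target = x.norm(dim=1, keepdim=True) - 1.0  # exact SDF of unit sphere
        loss = torch.nn.functional.mse_loss(model(x), target)
        opt.zero_grad(); loss.backward(); opt.step()
    print(f"stage {stage}: loss={loss.item():.5f}")
```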

Find NVIDIA Research at CVPR, June 18-22

Neuralangelo is one of nearly 30 projects by NVIDIA Research to be presented at the Conference on Computer Vision and Pattern Recognition (CVPR), taking place June 18-22 in Vancouver. The papers span topics including pose estimation, 3D reconstruction and video generation.

One of these projects, DiffCollage, is a diffusion method that creates large-scale content — including long landscape-orientation images, 360-degree panoramas and looped-motion images. When fed a training dataset of images with a standard aspect ratio, DiffCollage treats these smaller images as sections of a larger visual — like pieces of a collage. This enables diffusion models to generate cohesive-looking large content without being trained on images of the same scale.

sunset beach landscape generated by DiffCollage

The technique can also transform text prompts into video sequences, demonstrated using a pretrained diffusion model that captures human motion.

Learn more about NVIDIA Research at CVPR.


NVIDIA RTX Transforming 14-Inch Laptops, Plus Simultaneous Screen Encoding and May Studio Driver Available Today

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

New 14-inch NVIDIA Studio laptops, equipped with GeForce RTX 40 Series Laptop GPUs, give creators peak portability with a significant increase in performance over the last generation. AI-dedicated hardware called Tensor Cores power time-saving tasks in popular apps like DaVinci Resolve. Ray Tracing Cores together with our neural rendering technology, DLSS 3, boost performance in real-time 3D rendering applications like D5 Render and NVIDIA Omniverse.

NVIDIA also introduced a new method for accelerating video encoding. Simultaneous Scene Encoding sends independent groups of frames, or scenes, to each NVIDIA Encoder (NVENC). With multiple NVENCs fully utilized, video export times can be reduced significantly, without affecting image quality. The first software to integrate the technology is the popular video editing app CapCut.

The May Studio Driver is ready for download now. This month’s release includes support for updates to MAGIX VEGAS Pro, D5 Render and VLC Media Player — in addition to CapCut — plus AI model optimizations for popular apps.

COMPUTEX, Asia’s biggest annual tech trade show, kicks off a flurry of updates, bringing creators new tools and performance from the NVIDIA Studio platform — and plenty of AI power.

During his keynote address at COMPUTEX, NVIDIA founder and CEO Jensen Huang introduced a new generative AI to support game development, NVIDIA Avatar Cloud Engine (ACE) for Games. The platform adds intelligence to non-playable characters (NPCs) in gaming, with AI-powered natural language interactions.

The Kairos demo — a joint venture with Convai led by NVIDIA Creative Director Gabriele Leone — demonstrates how a single model can transform into a living, breathing, lifelike character this week In the NVIDIA Studio.

Ultraportable, Ultimate Performance

NVIDIA Studio laptops, powered by the NVIDIA Ada Lovelace architecture, are the world’s fastest laptops for creating and gaming.

For the first time, GeForce RTX performance comes to 14-inch devices. In the process, it’s transforming the ultraportable market, delivering the ultimate combination of performance and portability.

ASUS Zenbook Pro 14 comes with up to a GeForce RTX 4070 Laptop GPU.

These purpose-built creative powerhouses do it all. Backed by NVIDIA Studio, the platform supercharges over 110 creative apps, provides lasting stability with NVIDIA Studio Drivers and includes a powerful suite of AI-powered Studio software, such as NVIDIA Omniverse, Canvas and Broadcast.

Fifth-generation Max-Q technologies bring an advanced suite of AI-powered technologies that optimize laptop performance, power and acoustics for peak efficiency. Battery life improves by up to 70%. And DLSS is now optimized for laptops, giving creators incredible 3D rendering performance with DLSS 3 optical multi-frame generation and super resolution in Omniverse and D5 Render, and in hit games like Cyberpunk 2077.

As the ultraportable market heats up, PC laptop makers are giving creators more options than ever. Recently announced models, with more on the way, include the Acer Swift X 14, ASUS Zenbook Pro 14, GIGABYTE Aero 14, Lenovo’s Slim Pro 9i 14 and MSI Stealth 14.

Visit the Studio Shop for the latest GeForce RTX-powered NVIDIA Studio systems and explore the range of high-performance Studio products.

Simultaneous Scene Encoding

The recent release of Video Codec SDK 12.1 added multi-encoder support, which can cut export times in half. Our previously announced split encoding method — which splits a frame and sends each section to an encoder — now has an API that app developers can expose to their end users. Previously, split encoding was engaged automatically for 4K or higher video and the faster export presets. With this update, developers can simply let users toggle on this option.

Video Codec SDK 12.1 also introduces a new encoding method: simultaneous scene encoding. Video apps can split groups of pictures or scenes as they’re sent into the rendering pipeline. Each group can then be rendered independently and ordered properly on the final output.

The result is a significant increase in encoding speed — approximately 80% for dual encoders, and further increases when more than two NVENCs are present, like in the NVIDIA RTX 6000 Ada Generation professional GPU. Image quality is also improved compared to current split encoding methods, where individual frames are sent to each encoder and then stitched back together in the final output.
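
Conceptually, scene-based encoding is a dispatch-and-reassemble job: cut the timeline at scene boundaries, encode the scenes concurrently and concatenate the results in their original order. The sketch below shows only that orchestration shape, with a stub standing in for an NVENC session; real integrations go through Video Codec SDK 12.1:

```python
from concurrent.futures import ThreadPoolExecutor

def encode(scene_id: int, frames: list) -> bytes:
    """Stub for one hardware-encoder session."""
    return f"[scene {scene_id}: {len(frames)} frames]".encode()

scenes = [list(range(n)) for n in (240, 180, 300, 150)]  # fake frame groups

with ThreadPoolExecutor(max_workers=2) as encoders:  # e.g. a dual-NVENC GPU
    # map() yields results in submission order, so the output stream stays
    # correct even though scenes finish encoding at different times.
    chunks = list(encoders.map(encode, range(len(scenes)), scenes))

bitstream = b"".join(chunks)
print(bitstream.decode())
```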

CapCut users will be the first to experience this benefit on RTX GPUs with two or more encoders, starting with the software’s current release, available today.

Massive May Studio Driver Drops

The May Studio Driver features significant upgrades and optimizations.

MAGIX partnered with NVIDIA to move its line of VEGAS Pro AI models to WinML, enabling video editors to apply AI effects much faster.

The driver also optimizes AI features for applications running on WinML, including Adobe Photoshop, Lightroom, MAGIX VEGAS Pro, ON1 and DxO, among many others.

The real-time ray tracing renderer D5 Render also added NVIDIA DLSS 3, delivering a smoother viewport experience to navigate scenes with super fluid motion, massively benefiting architects, designers, interior designers and all professional 3D artists.

D5 Render and DLSS 3 work brilliantly to create photorealistic imagery.

NVIDIA RTX Video Super Resolution — video upscaling technology that uses AI and RTX Tensor Cores to upscale video quality — is now fully integrated into VLC Media Player, no longer requiring a separate download. Learn more.

Download GeForce Experience or NVIDIA RTX Experience for the easiest way to upgrade and to be notified of the latest driver releases.

Gaming’s ACE in the Hole

During NVIDIA founder and CEO Jensen Huang’s keynote address at COMPUTEX, he introduced NVIDIA ACE for Games, a new foundry that adds intelligence to NPCs in gaming with AI-powered natural language interactions.

Game developers and studios can use ACE for Games to build and deploy customized speech, conversation and animation AI models in their software and games. The AI technology can transform entire worlds, breathing new life into individuals, groups or an entire town’s worth of characters — the sky’s the limit.

ACE for Games builds on technology inside NVIDIA Omniverse, an open development platform for building and operating metaverse applications, including optimized AI foundation models for speech, conversation and character animation.

This includes NVIDIA NeMo for conversational AI fine-tuned for game characters, NVIDIA Riva for automatic speech recognition and text-to-speech, and Omniverse Audio2Face for instantly creating expressive facial animation of game characters to match any speech tracks. Audio2Face features Omniverse connectors for Unreal Engine 5, so developers can add facial animation directly to MetaHuman characters.

Seeing Is Believing: Kairos Demo

Huang debuted ACE for Games for COMPUTEX attendees — and provided a sneak peek at the future of gaming — in a demo dubbed Kairos.

Convai, an NVIDIA Inception startup, specializes in cutting-edge conversational AI for virtual game worlds. NVIDIA Lightspeed Studios, led by Creative Director and 3D artist Gabriele Leone, built the remarkably realistic scene and demo. Together, they’ve showcased the opportunity developers have to use NVIDIA ACE for Games to build NPCs.

In the demo, players interact with Jin, owner and proprietor of a ramen shop. The photorealistic shop was modeled after the virtual ramen shop built in NVIDIA Omniverse.

For this, an NVIDIA artist traveled to a real ramen restaurant in Tokyo and collected over 2,000 high-resolution reference images and videos. Each captured aspects from the kitchen’s distinct areas for cooking, cleaning, food preparation and storage. “We probably used 70% of the existing models, 30% new and 80% retextures,” said Leone.

Kairos: Beautifully rendered in Autodesk Maya, Blender, Unreal Engine 5 and NVIDIA Omniverse.

In the digital ramen shop, objects were modeled in Autodesk 3ds Max, with RTX-accelerated AI denoising, and in Blender, which benefits from RTX-accelerated OptiX ray tracing for smooth, interactive movement in the viewport — all powered by the team’s arsenal of GeForce RTX 40 Series GPUs.

“It’s fair to say that without GeForce RTX GPUs and Omniverse, this project would’ve been impossible to complete without adding considerable time” — Gabriele Leone

The texture phase in Adobe Substance 3D Painter used NVIDIA Iray rendering technology with RTX-accelerated light and ambient occlusion, baking large assets in mere moments.

Next, Omniverse and the Audio2Face app, via the Unreal Engine 5 Connector, allowed the team to add facial animation and audio directly to the ramen shop NPC.

Although he is an NPC, Jin replies to natural language realistically and in keeping with his narrative backstory — all with the help of generative AI.

Lighting and animation work was done in Unreal Engine 5 aided by NVIDIA DLSS using AI to upscale frames rendered at lower resolution while still retaining high-fidelity detail, again increasing interactivity in the viewport for the team.

Direct your ramen order to the NPC, ahem, interactive, conversational character.

Suddenly, NPCs just got a whole lot more engaging. And they’ve never looked this good.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter. 


MediaTek Partners With NVIDIA to Transform Automobiles With AI and Accelerated Computing

MediaTek, a leading innovator in connectivity and multimedia, is teaming with NVIDIA to bring drivers and passengers new experiences inside the car.

The partnership was announced today at a COMPUTEX press conference with MediaTek CEO Rick Tsai and NVIDIA founder and CEO Jensen Huang.

“NVIDIA is a world-renowned pioneer and industry leader in AI and computing. With this partnership, our collaborative vision is to provide a global one-stop shop for the automotive industry, designing the next generation of intelligent, always-connected vehicles,” said Tsai. “Through this special collaboration with NVIDIA, we will together be able to offer a truly unique platform for the compute-intensive, software-defined vehicle of the future.”

“AI and accelerated computing are fueling the transformation of the entire auto industry,” said Huang. “The combination of MediaTek’s industry-leading system-on-chip plus NVIDIA’s GPU and AI software technologies will enable new user experiences, enhanced safety and new connected services for all vehicle segments, from luxury to entry-level.”

A Collaboration to Transform Automotive

The partnership combines the best competencies of each company to deliver the most compelling solutions for the next generation of connected vehicles.

Today, NVIDIA offers GPUs for laptops, desktops, workstations and servers, along with systems-on-chips (SoCs) for automotive and robotics applications. With this new GPU chiplet, NVIDIA can extend its GPU and accelerated compute leadership across broader markets.

MediaTek will develop automotive SoCs and integrate the NVIDIA GPU chiplet, featuring NVIDIA AI and graphics intellectual property, into the design architecture. The chiplets are connected by an ultra-fast and coherent chiplet interconnect technology.

In addition, MediaTek will run the NVIDIA DRIVE OS, DRIVE IX, CUDA and TensorRT software technologies on these new automotive SoCs to enable connected infotainment and in-cabin convenience and safety functions. This partnership makes more in-vehicle infotainment options available to automakers on the NVIDIA DRIVE platform.

MediaTek will develop automotive SoCs integrating an NVIDIA GPU chiplet. Image courtesy of MediaTek.

By tapping NVIDIA’s core expertise in AI, cloud, graphics technology and software ecosystem, and pairing it with NVIDIA advanced driver assistance systems, MediaTek can bolster the capabilities of its Dimensity Auto platform.

A Rich Heritage of Innovation

This collaboration empowers MediaTek’s automotive customers to offer cutting-edge NVIDIA RTX graphics and advanced AI capabilities, plus safety and security features enabled by NVIDIA DRIVE software, for all types of vehicles. According to Gartner, the market for infotainment and instrument-cluster SoCs used within vehicles is projected to reach $12 billion in 2023.*

MediaTek’s Dimensity Auto platform draws on its decades of experience in mobile computing, high-speed connectivity, entertainment and extensive Android ecosystem. The platform includes the Dimensity Auto Cockpit, which supports smart multi-displays, high-dynamic range cameras and audio processing, so drivers and passengers can seamlessly interact with cockpit and infotainment systems.

For well over a decade, automakers have been turning to NVIDIA to help modernize their vehicle cockpits, using its technology for infotainment systems, graphical user interfaces and touchscreens.

By integrating the NVIDIA GPU chiplet into its automotive offering, MediaTek aims to enhance the performance capabilities of its Dimensity Auto platform to deliver the most advanced in-cabin experience available in the market. The platform also includes Auto Connect, a feature that will ensure drivers remain wirelessly connected with high-speed telematics and Wi-Fi networking.

With today’s announcement, MediaTek aims to raise the bar even higher for its automotive offerings — delivering intelligent, connected in-cabin solutions that cater to the evolving needs and demands of customers, while providing a safe, secure and enjoyable experience in the car.

*Gartner, Forecast Analysis: Automotive Semiconductors, Worldwide, 2021-2031; Table 1 – Automotive Semiconductor Forecast by Application (Billions of U.S. Dollars), January 18, 2023. Calculation performed by NVIDIA based on Gartner research.


Live From Taipei: NVIDIA CEO Unveils Gen AI Platforms for Every Industry

In his first live keynote since the pandemic, NVIDIA founder and CEO Jensen Huang today kicked off the COMPUTEX conference in Taipei, announcing platforms that companies can use to ride a historic wave of generative AI that’s transforming industries from advertising to manufacturing to telecom.

“We’re back,” Huang roared as he took the stage after years of virtual keynotes, some from his home kitchen. “I haven’t given a public speech in almost four years — wish me luck!”

Speaking for nearly two hours to a packed house of some 3,500, he described accelerated computing services, software and systems that are enabling new business models and making current ones more efficient.

“Accelerated computing and AI mark a reinvention of computing,” said Huang, whose travels in his hometown over the past week have been tracked daily by local media.

In a demonstration of its power, he used the massive 8K wall he spoke in front of to show a text prompt generating a theme song for his keynote, singable as any karaoke tune. Huang, who occasionally bantered with the crowd in his native Taiwanese, briefly led the audience in singing the new anthem.

“We’re now at the tipping point of a new computing era with accelerated computing and AI that’s been embraced by almost every computing and cloud company in the world,” he said, noting 40,000 large companies and 15,000 startups now use NVIDIA technologies with 25 million downloads of CUDA software last year alone.

Top News Announcements From the Keynote

A New Engine for Enterprise AI

For enterprises that need the ultimate in AI performance, he unveiled DGX GH200, a large-memory AI supercomputer. It uses NVIDIA NVLink to combine up to 256 NVIDIA GH200 Grace Hopper Superchips into a single data-center-sized GPU.

The GH200 Superchip, which Huang said is now in full production, combines an energy-efficient NVIDIA Grace CPU with a high-performance NVIDIA H100 Tensor Core GPU in one superchip.

The DGX GH200 packs an exaflop of performance and 144 terabytes of shared memory, nearly 500x more than in a single NVIDIA DGX A100 320GB system. That lets developers build large language models for generative AI chatbots, complex algorithms for recommender systems, and graph neural networks used for fraud detection and data analytics.
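
The 144-terabyte figure checks out against published GH200 specs, assuming each superchip contributes 480 GB of Grace LPDDR5X plus 96 GB of H100 HBM3 (numbers from NVIDIA’s spec sheets, not stated in this article):

```python
# Back-of-envelope check of the DGX GH200's shared-memory figure.
per_superchip_gb = 480 + 96        # 576 GB addressable per GH200 superchip
total_gb = 256 * per_superchip_gb  # 147,456 GB across the NVLink fabric
print(total_gb / 1024, "TB")       # 144.0, matching the quoted capacity
```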

Google Cloud, Meta and Microsoft are among the first expected to gain access to the DGX GH200, which can be used as a blueprint for future hyperscale generative AI infrastructure.

NVIDIA’s DGX GH200 AI supercomputer delivers 1 exaflop of performance for generative AI.

“DGX GH200 AI supercomputers integrate NVIDIA’s most advanced accelerated computing and networking technologies to expand the frontier of AI,” Huang told the audience in Taipei, many of whom had lined up outside the hall for hours before the doors opened.

NVIDIA is building its own massive AI supercomputer, NVIDIA Helios, coming online this year. It will use four DGX GH200 systems linked with NVIDIA Quantum-2 InfiniBand networking to supercharge data throughput for training large AI models.

The DGX GH200 forms the pinnacle of hundreds of systems announced at the event. Together, they’re bringing generative AI and accelerated computing to millions of users.

Zooming out to the big picture, Huang announced more than 400 system configurations are coming to market powered by NVIDIA’s latest Hopper, Grace, Ada Lovelace and BlueField architectures. They aim to tackle the most complex challenges in AI, data science and high performance computing.

Acceleration in Every Size

To fit the needs of data centers of every size, Huang announced NVIDIA MGX, a modular reference architecture for creating accelerated servers. System makers will use it to quickly and cost-effectively build more than a hundred different server configurations to suit a wide range of AI, HPC and NVIDIA Omniverse applications.

MGX lets manufacturers build CPU and accelerated servers using a common architecture and modular components. It supports NVIDIA’s full line of GPUs, CPUs, data processing units (DPUs) and network adapters as well as x86 and Arm processors across a variety of air- and liquid-cooled chassis.

QCT and Supermicro will be the first to market with MGX designs appearing in August. Supermicro’s ARS-221GL-NR system announced at COMPUTEX will use the Grace CPU, while QCT’s S74G-2U system, also announced at the event, uses Grace Hopper.

ASRock Rack, ASUS, GIGABYTE and Pegatron will also use MGX to create next-generation accelerated computers.

5G/6G Calls for Grace Hopper

Separately, Huang said NVIDIA is helping shape future 5G and 6G wireless and video communications. A demo showed how AI running on Grace Hopper will transform today’s 2D video calls into more lifelike 3D experiences, providing an amazing sense of presence.

Laying the groundwork for new kinds of services, Huang announced NVIDIA is working with telecom giant SoftBank to build a distributed network of data centers in Japan. It will deliver 5G services and generative AI applications on a common cloud platform.

The data centers will use NVIDIA GH200 Superchips and NVIDIA BlueField-3 DPUs in modular MGX systems as well as NVIDIA Spectrum Ethernet switches to deliver the highly precise timing the 5G protocol requires. The platform will reduce cost by increasing spectral efficiency while reducing energy consumption.

The systems will help SoftBank explore 5G applications in autonomous driving, AI factories, augmented and virtual reality, computer vision and digital twins. Future uses could even include 3D video conferencing and holographic communications.

Turbocharging Cloud Networks

Separately, Huang unveiled NVIDIA Spectrum-X, a networking platform purpose-built to improve the performance and efficiency of Ethernet-based AI clouds. It combines Spectrum-4 Ethernet switches with BlueField-3 DPUs and software to deliver 1.7x gains in AI performance and power efficiency over traditional Ethernet fabrics.

NVIDIA Spectrum-X, Spectrum-4 switches and BlueField-3 DPUs are available now from system makers including Dell Technologies, Lenovo and Supermicro.

NVIDIA Spectrum-X accelerates AI workflows that can experience performance losses on traditional Ethernet networks.

Bringing Game Characters to Life

Generative AI impacts how people play, too.

Huang announced NVIDIA Avatar Cloud Engine (ACE) for Games, a foundry service developers can use to build and deploy custom AI models for speech, conversation and animation. It will give non-playable characters conversational skills so they can respond to questions with lifelike personalities that evolve.

NVIDIA ACE for Games includes AI foundation models such as NVIDIA Riva to detect and transcribe the player’s speech. The text prompts NVIDIA NeMo to generate customized responses animated with NVIDIA Omniverse Audio2Face.
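
The turn-by-turn loop those services form can be sketched in three stages. Every function below is a hypothetical stub standing in for a full SDK (Riva, NeMo and Audio2Face each ship their own client libraries); the point is only to make the data flow concrete:

```python
def speech_to_text(audio: bytes) -> str:
    return "What's good at this ramen shop?"          # Riva ASR stand-in

def generate_reply(prompt: str, persona: str) -> str:
    return f"{persona}: try the spicy miso."          # NeMo LLM stand-in

def animate_face(reply_text: str) -> str:
    return f"<blendshape track for {reply_text!r}>"   # Audio2Face stand-in

def npc_turn(mic_audio: bytes) -> tuple[str, str]:
    """One conversational turn: hear the player, reply, animate the reply."""
    text = speech_to_text(mic_audio)
    reply = generate_reply(text, persona="Jin, ramen-shop owner")
    return reply, animate_face(reply)

print(npc_turn(b"...player microphone audio..."))
```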

NVIDIA ACE for Games provides a tool chain for bringing characters to life with generative AI.

Accelerating Gen AI on Windows

Huang described how NVIDIA and Microsoft are collaborating to drive innovation for Windows PCs in the generative AI era.

New and enhanced tools, frameworks and drivers are making it easier for PC developers to develop and deploy AI. For example, the Microsoft Olive toolchain for optimizing and deploying GPU-accelerated AI models and new graphics drivers will boost DirectML performance on Windows PCs with NVIDIA GPUs.

The collaboration will enhance and extend an installed base of 100 million PCs sporting RTX GPUs with Tensor Cores that boost performance of more than 400 AI-accelerated Windows apps and games.

Digitizing the World’s Largest Industries

Generative AI is also spawning new opportunities in the $700 billion digital advertising industry.

For example, WPP, the world’s largest marketing services organization, is working with NVIDIA to build a first-of-its-kind generative AI-enabled content engine on Omniverse Cloud.

In a demo, Huang showed how creative teams will connect their 3D design tools, such as Adobe Substance 3D, to build digital twins of client products in NVIDIA Omniverse. Then, content from generative AI tools trained on responsibly sourced data and built with NVIDIA Picasso will let them quickly produce virtual sets. WPP clients can then use the complete scene to generate a host of ads, videos and 3D experiences for global markets and users to experience on any web device.

“Today ads are retrieved, but in the future when you engage information much of it will be generated — the computing model has changed,” Huang said.

Factories Forge an AI Future

With an estimated 10 million factories, the $46 trillion manufacturing sector is a rich field for industrial digitalization.

“The world’s largest industries make physical things. Building them digitally first can save billions,” said Huang.

The keynote showed how electronics makers including Foxconn Industrial Internet, Innodisk, Pegatron, Quanta and Wistron are forging digital workflows with NVIDIA technologies to realize the vision of an entirely digital smart factory.

They’re using Omniverse and generative AI APIs to connect their design and manufacturing tools so they can build digital twins of factories. In addition, they use NVIDIA Isaac Sim for simulating and testing robots and NVIDIA Metropolis, a vision AI framework, for automated optical inspection.

The latest component, NVIDIA Metropolis for Factories, can create custom quality-control systems, giving manufacturers a competitive advantage. It’s helping companies develop state-of-the-art AI applications.

AI Speeds Assembly Lines

For example, Pegatron — which makes 300 products worldwide, including laptops and smartphones — is creating virtual factories with Omniverse, Isaac Sim and Metropolis. That lets it try out processes in a simulated environment, saving time and cost.

Pegatron also used the NVIDIA DeepStream software development kit to develop intelligent video applications that led to a 10x improvement in throughput.

Foxconn Industrial Internet, a service arm of the world’s largest technology manufacturer, is working with NVIDIA Metropolis partners to automate significant portions of its circuit-board quality-assurance inspection points.

Crowds lined up for the keynote hours before doors opened.

In a video, Huang showed how Techman Robot, a subsidiary of Quanta, tapped NVIDIA Isaac Sim to optimize inspection on the Taiwan-based giant’s manufacturing lines. It’s essentially using simulated robots to train robots how to make better robots.

In addition, Huang announced a new platform to enable the next generation of autonomous mobile robot (AMR) fleets. Isaac AMR helps simulate, deploy and manage fleets of autonomous mobile robots.

A large partner ecosystem — including ADLINK, Aetina, Deloitte, Quantiphi and Siemens — is helping bring all these manufacturing solutions to market, Huang said.

It’s one more example of how NVIDIA is helping companies feel the benefits of generative AI with accelerated computing.

“It’s been a long time since I’ve seen you, so I had a lot to tell you,” he said after the two-hour talk to enthusiastic applause.

To learn more, watch the full keynote.


NVIDIA Brings Advanced Autonomy to Mobile Robots With Isaac AMR

As mobile robot shipments surge to meet the growing demands of industries seeking operational efficiencies, NVIDIA is launching a new platform to enable the next generation of autonomous mobile robot (AMR) fleets.

Isaac AMR brings advanced mapping, autonomy and simulation to mobile robots and will soon be available for early customers, NVIDIA founder and CEO Jensen Huang announced during his keynote address at the COMPUTEX technology conference in Taipei.

Isaac AMR is a platform to simulate, validate, deploy, optimize and manage fleets of autonomous mobile robots. It includes edge-to-cloud software services, computing and a set of reference sensors and robot hardware to accelerate development and deployment of AMRs, reducing costs and time to market.

Mobile robot shipments are expected to climb from 251,000 units in 2023 to 1.6 million by 2028, with revenue forecast to jump from $12.6 billion to $64.5 billion in the period, according to ABI Research.

Simplifying the Path to Autonomy

Despite the explosive adoption of robots, the intralogistics industry faces challenges.

Traditionally, software applications for autonomous navigation are often coded from scratch for each robot, making rolling out autonomy across different robots complex. Also, warehouses, factories and fulfillment centers are enormous, frequently running a million square feet or more, making them hard to map for robots and keep updated. And integrating AMRs into existing workflows, fleet management and warehouse management systems can be complicated.

For those working in advanced robotics and seeking to migrate traditional forklifts or automated guided vehicles to fully autonomous mobile robots, Isaac AMR provides the blueprint to accelerate the migration to full autonomy, reducing costs and speeding deployment of state-of-the-art AMRs.

Orin-Based Reference Architecture 

Isaac AMR is built on the foundations of the NVIDIA Nova Orin reference architecture.

Nova Orin is the brains and eyes of Isaac AMR. It integrates multiple sensors, including stereo cameras, fisheye cameras, and 2D and 3D lidars, with the powerful NVIDIA Jetson AGX Orin system-on-module. The reference robot hardware comes with Nova Orin pre-integrated, making it easy for developers to evaluate Isaac AMR in their own environments.

The compute engine of Nova is Orin, which delivers 275 tera operations per second (TOPS) of real-time edge computing for running some of the most advanced AI and hardware-accelerated algorithms.

The synchronized and calibrated sensor suite offers sensor diversity and redundancy for real-time 3D perception and mapping. Cloud-native tools for record, upload and replay enable easy debugging, map creation, training and analytics.

Isaac AMR: Mapping, Autonomy, Simulation

Isaac AMR offers a foundation for mapping, autonomy and simulation.

Isaac AMR accelerates mapping and semantic understanding of large environments by tying into DeepMap’s cloud-based service, cutting the time needed to map large facilities from weeks to days and offering centimeter-level accuracy without the need for a highly skilled team of technicians. It can generate rich 3D voxel maps, which can be used to create occupancy maps and semantic maps for multiple types of AMRs.
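
As a toy illustration of deriving an occupancy map from 3D data of this kind (the cell size and height band are arbitrary choices, not Isaac AMR parameters):

```python
import numpy as np

def occupancy_grid(points: np.ndarray, cell_m: float = 0.05,
                   z_band=(0.1, 1.8)) -> np.ndarray:
    """Flatten a point cloud into a 2D boolean occupancy grid."""
    pts = points[(points[:, 2] > z_band[0]) & (points[:, 2] < z_band[1])]
    ij = np.floor(pts[:, :2] / cell_m).astype(int)
    ij -= ij.min(axis=0)                 # shift indices so they start at 0
    grid = np.zeros(ij.max(axis=0) + 1, dtype=bool)
    grid[ij[:, 0], ij[:, 1]] = True      # mark cells containing returns
    return grid

cloud = np.random.rand(10_000, 3) * [20.0, 12.0, 2.5]  # fake lidar returns
print(occupancy_grid(cloud).shape)
```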

Additionally, Isaac AMR shortens the time to develop and deploy robots in large, highly dynamic and unstructured environments with autonomy that’s enabled by multimodal navigation with cloud-based fleet optimization using NVIDIA cuOpt software.

An accelerated and modular framework enables real-time camera and lidar perception. Planning and control draw on advanced path planners, behavior planners and semantic information to let the robot operate autonomously in complex environments. A low-code, no-code interface makes it easy to rapidly develop and customize applications for different scenarios and use cases.

Finally, Isaac AMR simplifies robot operations by tapping into physics-based simulation from Isaac Sim, powered by NVIDIA Omniverse, an open development platform for industrial digitalization. This can bring digital twins to life, so the robot application can be developed, tested and customized for each customer before deploying in the physical world. This significantly reduces the operational cost and complexity of deploying AMRs.

Sign up for early access to Isaac AMR.
