Hyperscale Digital Twins to Give Us “Amazing Superpowers,” NVIDIA Exec Says at ISC 2022

Highly accurate digital representations of physical objects or systems, or “digital twins,” will enable the next era of industrial virtualization and AI, executives from NVIDIA and BMW said Tuesday.

Kicking off the ISC 2022 conference in Hamburg, Germany, NVIDIA’s Rev Lebaredian, vice president for Omniverse and simulation technology, was joined by Michele Melchiorre, senior vice president for product system, technical planning, and tool shop at BMW Group.

“If you can construct a virtual world that matches the real world in its complexity, in its scale and in its precision, then there are a great many things you can do with this,” Lebaredian said.

While Lebaredian outlined the broad trends and technological advancements driving the evolution of digital twin simulations, Melchiorre offered a detailed look at how BMW has put digital twins to work in its own factories.

Melchiorre explained BMW’s plans to use digital twins as a tool to become more “lean, green and digital,” describing real-time collaboration with digital twins and opportunities for training AIs as a “revolution in factory planning.”

Initiatives such as the BMW iFACTORY described by Melchiorre — which harnesses real-time data, simulation and machine learning — show how swiftly digital twins have become workhorses for industrial companies such as Amazon Robotics, BMW and others.

These systems will link our representations of the world with data streaming in real time from the physical world, Lebaredian explained.

“What we’re trying to introduce now is a mechanism by which we can link the two together, where we can detect all the changes in the physical version, and reflect them in the digital world,” Lebaredian said. “If we can establish that link we gain some amazing superpowers.”
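
Conceptually, that link is a continuous synchronization loop: telemetry streams in from the physical asset, changes are detected and the twin’s state is updated to match. Here is a minimal sketch of the idea in Python; the read_sensors feed and Twin class are hypothetical stand-ins, since the talk describes the concept rather than a specific API.

```python
import time

class Twin:
    """Hypothetical digital-twin state store, keyed by component ID."""
    def __init__(self):
        self.state = {}

    def apply(self, component_id, reading):
        # Reflect the detected physical change in the digital world.
        self.state[component_id] = reading

def read_sensors():
    """Hypothetical stand-in for a real telemetry feed from the factory floor."""
    return {"robot_arm_7": {"joint_angle": 42.0}, "conveyor_2": {"speed_mps": 1.3}}

twin = Twin()
while True:  # the sync loop runs continuously to keep the twin current
    for component_id, reading in read_sensors().items():
        if twin.state.get(component_id) != reading:  # detect changes...
            twin.apply(component_id, reading)        # ...and mirror them
    time.sleep(0.01)  # a tight cadence approximating real-time synchronization
```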

Supercomputing Is Transforming Every Field of Discovery

And it’s another powerful example of how technologies from the supercomputing industry — particularly simulation and data-center-scale GPU computing — are spilling over into the broader world.

At the same time, converging technologies have transformed high-performance computing, Lebaredian said. GPU-accelerated systems have become a mainstay not just in scientific computing, but edge computing, data centers and cloud systems.

NVIDIA’s Rev Lebaredian, vice president for Omniverse and simulation technology, speaking at ISC 2022.

And AI-accelerated GPU computing has also become a cornerstone of modern high-performance computing. That’s positioned supercomputing to realize the original intent of computer graphics: simulation.

Computers, algorithms and AI have all matured enough that we can begin simulating worlds that are complex enough to be useful on an industrial scale, even using these simulations as training grounds for AI.

World Simulation at an Inflection Point

With digital twins, a new class of simulation is possible, Lebaredian said.

These require precision timing — the ability to simulate multiple autonomous systems at the same time.

They require physically accurate simulation.

And they require accurate ingestion of information from the “real twin,” and continuous synchronization.

These digital twin simulations will give us “superpowers.”

The first one Lebaredian dug into was teleportation. “Just like in a multiplayer video game, any human anywhere on Earth can teleport into that virtual world,” Lebaredian said.

The next: time travel.

“If you record the state of the world over time, you can recall it at any point; this allows time travel,” Lebaredian said.

“You can not only now teleport to that world, but you can scrub your timeline and go backwards to any point in time, and explore that space at any point in time,” he added.
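
In practice, this kind of time travel amounts to recording timestamped snapshots of world state and querying them by time. A hedged sketch of the idea follows; the WorldRecorder class is illustrative, not an Omniverse API.

```python
import bisect

class WorldRecorder:
    """Illustrative store of (timestamp, state) snapshots that supports scrubbing."""
    def __init__(self):
        self.timestamps = []
        self.snapshots = []

    def record(self, t, state):
        # Timestamps are assumed to arrive in increasing order.
        self.timestamps.append(t)
        self.snapshots.append(state)

    def scrub_to(self, t):
        """Return the latest snapshot at or before time t."""
        i = bisect.bisect_right(self.timestamps, t) - 1
        return self.snapshots[i] if i >= 0 else None

rec = WorldRecorder()
rec.record(0.0, {"forklift": (0, 0)})
rec.record(5.0, {"forklift": (3, 1)})
print(rec.scrub_to(2.5))  # the world as it was at t=2.5 -> {'forklift': (0, 0)}
```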

And, finally, these simulations, if accurate enough, will let us understand what’s next.

“If you have a simulator that is extremely accurate and actually predictive of what will happen in the future, if you understand the laws of physics well enough, you essentially get time travel to the future,” Lebaredian said.

“You can compute not just one possible future, but many possible futures,” he added, outlining how this could let city planners see what would happen as they modify a city, plan roads and change traffic systems to find “the best possible future.”
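
Computationally, searching for “the best possible future” means evaluating candidate interventions across many simulated futures and keeping the one with the best expected outcome. A toy sketch, with a hypothetical simulate function standing in for a real predictive traffic model:

```python
import random

def simulate(plan, seed):
    """Hypothetical stand-in for a predictive traffic model: runs one possible
    future under `plan` and returns a score (higher is better)."""
    rng = random.Random(hash((plan["lanes"], plan["signal_timing"], seed)))
    base = plan["lanes"] * 10 - abs(plan["signal_timing"] - 45)
    return base + rng.gauss(0, 5)  # noise models uncertainty about the future

candidate_plans = [{"lanes": n, "signal_timing": t}
                   for n in (2, 3, 4) for t in (30, 45, 60)]

def expected_outcome(plan, n_futures=100):
    # Compute many possible futures per plan, then average the outcomes.
    return sum(simulate(plan, seed) for seed in range(n_futures)) / n_futures

best = max(candidate_plans, key=expected_outcome)  # "the best possible future"
print(best)
```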

Modern supercomputing is unlocking these digital twins, which are extremely compute-intensive and require precision timing networking with extremely low latency.

“We need a new kind of supercomputer, one that can really accelerate artificial intelligence and run these massive simulations in true real-time,” Lebaredian said.

That will require GPU-accelerated systems that are optimized at every layer of the system to enable precision timing.

These systems will need to run not just in the data center but also at the edge of networks, bringing data into virtual simulations with precision timing.

Such systems will be key to advances on scales both small, such as drug discovery, and large, such as climate simulation.

“We need to simulate our climate. We need to look really far out. We need to do so at a precision that’s never been done before, and we need to be able to trust our simulations are actually predictive and accurate. If we do that, we have some hope we can deal with this climate change situation,” Lebaredian said.

BMW’s iFACTORY: “Lean, Green and Digital”

BMW’s Melchiorre provided an example of how this broad vision is being put to work today at BMW, as the automaker seeks to become “lean, green and digital.”

Michele Melchiorre, senior vice president for product system, technical planning, and tool shop at BMW Group.

BMW has built exceptionally complex digital twins, simulating its factories with humans and robots interacting in the same space, at the same time.

It’s an effort that stretches from the factory floor to the company’s data center to its entire supply chain. The digital twin involves millions of moving parts and pieces, all connected to an enormous network of suppliers.

Melchiorre walked his audience through a number of examples of how digital twins model various pieces of the plant, simulating how industrial machinery, robots and people will move together.

Inside the digital twin of BMW’s assembly system, powered by Omniverse, an entire factory in simulation.

And he explained how they are leveraging NVIDIA technology to simulate entire factories before they’re even built.

Melchiorre showed an aerial image of the site where BMW is building a new factory in Hungary. While the real-world factory is still mostly open field, the digital factory is 80% complete.

“This will be the first plant where we will have a complete digital twin much before production starts,” Melchiorre said.

In the future, the iFACTORY approach will become a reality at all of BMW’s plants, Melchiorre explained, from the company’s 100-year-old home plant in Munich to its forthcoming plant in Debrecen, Hungary.

“This is our production network, not just one factory. Each and every plant will go in this direction; every plant will develop into a BMW iFACTORY. This is our master plan for our future,” Melchiorre said.


A Devotion to Emotion: Hume AI’s Alan Cowen on the Intersection of AI and Empathy

Can machines experience emotions? They might, according to Hume AI, an AI research lab and technology company that aims to “ensure artificial intelligence is built to serve human goals and emotional well-being.”

So how can AI genuinely understand how we are feeling, and respond appropriately?

On this episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with Alan Cowen, founder of Hume AI and The Hume Initiative. Cowen — a former researcher at Google who holds a Ph.D. in Psychology from UC Berkeley — talks about the latest work at the intersection of computing and human emotion.

You Might Also Like

What Is Conversational AI? ZeroShotBot CEO Jason Mars Explains

Companies use automated chatbots to help customers solve issues, but conversations with these chatbots can sometimes be a tiring affair. ZeroShotBot CEO Jason Mars explains how he’s trying to change that by using AI to improve automated chatbots.

How Audio Analytic Is Teaching Machines to Listen

From active noise cancellation to digital assistants that are always listening for your commands, audio is perhaps one of the most important but often overlooked aspects of modern technology in our daily lives. Chris Mitchell, CEO and founder of Audio Analytic, discusses the challenges, and the fun, involved in teaching machines to listen.

Lilt CEO Spence Green Talks Removing Language Barriers in Business

When large organizations require translation services, there’s no room for the amusing errors often produced by automated apps. Lilt CEO Spence Green aims to correct that using a human-in-the-loop process to achieve fast, accurate and affordable translation.

Subscribe to the AI Podcast: Now available on Amazon Music

You can now listen to the AI Podcast through Amazon Music.

You can also get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better: Have a few minutes to spare? Fill out our listener survey.


Ready, Set, Game: GFN Thursday Brings 10 New Titles to GeForce NOW

It’s a beautiful day to play video games. And it’s GFN Thursday, which means we’ve got those games.

Ten total titles join the GeForce NOW library of over 1,300 games, starting with the release of Roller Champions – a speedy, free-to-play roller skating title launching with competitive season 0.

Rollin’ Into the Weekend

Roll with the best or get left behind with the rest in the newest free-to-play sports game from Ubisoft, Roller Champions.

Skate, tackle and roll your way to glory in Roller Champions. Discover a free-to-play, team PvP sports game like no other.

Become a sports legend and compete for fame in fast-paced 3v3 matches. The rules are simple: take the ball, make a lap while maintaining team possession and score. Take advantage of passes, tackles and team moves to win against opponents and climb the leaderboard in the Kickoff Season, which begins today.

Stream the game across nearly all devices, even on Mac or mobile. RTX 3080 members can take their experience to the next level, playing at up to 4K resolution and 60 frames per second from the PC and Mac apps. They can also zoom around with next-to-native responsiveness thanks to ultra-low latency, with gaming sessions up to eight hours long.

Start playing the game for free today, streaming on GeForce NOW.

On top of that, members can look for more new games streaming this week.

Finally, Star Conflict (Steam) was announced to arrive this month but will be coming to the cloud at a future date.

The weekend fun is about to begin. There’s only one question left – who is on your roller derby dream team? Let us know on Twitter or in the comments below.


Deciphering the Future: HPE Switches on AI Supercomputer in France

Recalling the French linguist who deciphered the Rosetta Stone 150 years ago, Hewlett Packard Enterprise today switched on a tool to unravel its customers’ knottiest problems.

The Champollion AI supercomputer takes its name from Jean-François Champollion (1790-1832), who decoded hieroglyphics that opened a door to the study of ancient Egypt’s culture. Like Champollion, the mega-system resides in Grenoble, France, where it will seek patterns in massive datasets at HPE’s Centre of Excellence.

The work will include AI model development and training, as well as advanced simulations for users in science and industry.

Among the system’s global user community, researchers in France’s AI for Humanity program will use Champollion to advance industries and boost economic growth with machine learning.

Inside an AI Supercomputer 

Champollion will help HPE’s customers explore new opportunities with accelerated computing.  The system is based on a cluster of 20 HPE Apollo 6500 Gen10 Plus systems running the HPE Machine Learning Development Environment, a software stack to build and train AI models at scale.

It’s powered in part by 160 NVIDIA A100 Tensor Core GPUs, delivering 100 petaflops of peak AI performance for the cluster. They’re linked by high-throughput, low-latency NVIDIA Quantum InfiniBand networking that sports in-network computing.

The system can access NGC, NVIDIA’s online catalog for HPC and AI software, including tools like NVIDIA Triton Inference Server that orchestrates AI deployments, and application frameworks like NVIDIA Clara for healthcare.

Users can test and benchmark their own workloads on Champollion to speed their work into production. It’s the perfect tool for Grenoble, home to a dense cluster of research centers for companies in energy, medicine and high tech.

Powerful Possibilities

The system could help find molecular patterns for a new, more effective drug or therapy. It could build a digital twin to explore more efficient ways of routing logistics in a warehouse or factory.

The possibilities are as varied as the number of industries and research fields harnessing the power of high performance computing.

So, it’s appropriate that the Champollion system debuts ahead of ISC, Europe’s largest gathering of HPC developers. This year’s event in Hamburg will provide an in-person experience for the first time since the pandemic.

Whether you will be in Hamburg or online, join NVIDIA and watch the conference keynote, Supercomputing: The Key to Unlocking the Next Level of Digital Twins, to learn more about the potential of HPC+AI to transform every field.

Rev Lebaredian, who leads NVIDIA Omniverse and simulation technology at NVIDIA, along with Michele Melchiorre, a senior vice president at BMW Group, will show how supercomputing can unlock a new level of opportunities with digital twins.

Feature image credit: Steven Zucker, Smarthistory.


NVIDIA Brings Data Center, Robotics, Edge Computing, Gaming and Content Creation Innovations to COMPUTEX 2022

Digital twins that revolutionize the way the most complex products are produced. Silicon and software that transform data centers into AI factories. Gaming advances that bring the world’s most popular games to life.

Taiwan has become the engine that brings the latest innovations to the world. So it only makes sense that NVIDIA leaders brought their best ideas to this week’s COMPUTEX technology conference in Taipei.

“Taiwan is the birthplace of the PC ecosystem and the spirit of COMPUTEX is to celebrate the incredible journey that built this $500 billion industry,” Jeff Fisher, senior vice president for gaming products at NVIDIA, told attendees.

The headline news:

  • NVIDIA announced Taiwan’s leading computer makers will release the first wave of systems powered by the NVIDIA Grace CPU Superchip and Grace Hopper Superchip for workloads such as digital twins, AI, high-performance computing, cloud graphics and gaming.
  • NVIDIA announced liquid-cooled NVIDIA A100 GPUs for data centers. They’ll be available in the fall as a PCIe card and will ship from OEMs with the HGX A100 server. A liquid-cooled H100 will be available in the HGX H100 server, and as an H100 PCIe card, in early 2023.
  • Partners creating products around the NVIDIA Jetson edge AI and robotics platform announced more than 30 servers and appliances based on the NVIDIA Orin system-on-module.
  • Momentum for NVIDIA RTX is growing, with over 250 RTX games and applications available, double the count at last year’s COMPUTEX. And GeForce gamers continue to upgrade, with over 30% now on RTX hardware, logging over 1.5 billion hours of playtime with RTX. And DLSS is in the games that gamers want to play, with 12 new games added to the ever-growing library.

The announcements punctuated a talk from six NVIDIA leaders who wove together advances from robotics to AI, silicon to software and highlighted the work of partners throughout the industry.

Clockwise, from top left: NVIDIA VP for Accelerated Computing Ian Buck, Senior VP for Hardware Engineering Brian Kelleher, Director of Product Management for Accelerated Computing Ying Yin Shih, CTO Michael Kagan, Senior VP for GeForce Jeff Fisher, VP of Embedded and Edge Computing Deepu Talla.

Transforming Data Centers

First up, NVIDIA VP for Hyperscale and HPC Ian Buck detailed how data centers are transforming into AI factories.

“This transformation requires us to reimagine the data center at every level, from hardware to software, from chips to infrastructure to systems,” Buck said.

This will drive massive business opportunities for NVIDIA’s partners in data centers, HPC, digital twins and cloud-based gaming, Buck said, referencing a “half-trillion market opportunity.”

Powering these modern AI factories requires end-to-end innovation at every level, Buck said.

And with data centers becoming “AI factories,” data processing is essential.

The building blocks include NVIDIA Hopper GPUs, NVIDIA Grace CPUs and NVIDIA BlueField DPUs, networked together by NVIDIA Quantum and Spectrum switches.

“The BlueField DPU, along with the Quantum and Spectrum networking switches, comprises the infrastructure platform for the AI factory of the future,” said CTO Michael Kagan.

NVIDIA technologies will be featured in a wide range of server designs, including NVIDIA CGX for cloud gaming, OVX for digital twins, and HGX Grace and HGX Grace Hopper for science, data analytics and AI.

NVIDIA announced the first wave of systems powered by the NVIDIA Grace CPU Superchip and Grace Hopper Superchip are expected starting in the first half of 2023.

“Grace will be amazing at AI, data analytics, scientific computing, and hyperscale computing,” said NVIDIA senior VP for hardware engineering Brian Kelleher. “And, of course, the full suite of NVIDIA software platforms will run on Grace.”

The Grace-powered systems from ASUS, Foxconn Industrial Internet, GIGABYTE, QCT, Supermicro and Wiwynn join x86 and other Arm-based servers to offer customers a broad range of choices.

“All of these servers are optimized for NVIDIA accelerated computing software stacks, and can be qualified as part of our NVIDIA-Certified Systems lineup,” said Director of Product Management for Accelerated Computing Ying Yin Shih.

To provide enterprises with options to deploy green data centers, NVIDIA also announced its first data center PCIe GPU with direct chip liquid cooling.

The liquid-cooled A100 PCIe GPUs will be supported in mainstream servers by at least a dozen system builders, with the first shipping in the third quarter of this year.

“All of these combine to deliver the infrastructure of the data center of the future that handles these massive workloads,” Buck said.

Finally, getting all of this to run seamlessly requires NVIDIA AI Enterprise software, which delivers robust 24/7 AI deployment, Buck said.

“When it comes to reimagining the data center, NVIDIA has the complete, open platform of hardware and software to build the AI factories of the future,” Buck said.

Revolutionizing Robotics with AI

AI is also reaching more deeply into the world around us.

Deepu Talla, VP of Embedded and Edge Computing, spoke about how the global drive to automation makes robotics a major new application for AI.

NVIDIA announced this week that more than 30 leading partners worldwide will be among those offering the first wave of NVIDIA Jetson AGX Orin-powered production systems at COMPUTEX in Taipei.

New products are coming from a dozen Taiwan-based camera, sensor and hardware providers for use in edge AI, AIoT, robotics and embedded applications.

“We are entering the age of robotics — autonomous machines that are keenly aware of their environment and that can make smart decisions about their actions,” Talla said.

Available worldwide since GTC in March, the NVIDIA Jetson AGX Orin developer kit delivers 275 trillion operations per second, packing over 8x the processing power of its predecessor, NVIDIA AGX Xavier, in the same pin-compatible form factor.

Jetson Orin features the NVIDIA Ampere architecture GPU, Arm Cortex-A78AE CPUs, next-generation deep learning and vision accelerators, high-speed interfaces, faster memory bandwidth, and multimodal sensor support capable of feeding multiple, concurrent AI application pipelines.

Offering server-class performance for edge AI, new Jetson AGX Orin production modules will be available in July, while Orin NX modules are coming in September.

Such modules are key to embedding smarter devices in the world around us, Talla said.

NVIDIA Isaac, the company’s robotics platform, has four pillars, he explained.

The first pillar is about creating the AI, “a very time-consuming and difficult process that we are making fast and easy,” Talla said, highlighting how tools such as the Isaac Replicator for synthetic data generation, NVIDIA pre-trained models available on NGC, and the NVIDIA TAO toolkit are addressing this challenge.

The second pillar is simulating the operation of the robot in the virtual world before it is deployed in the real world with Isaac Sim, Talla explained.

The third pillar is building the physical robots.

And the fourth pillar is about managing the fleet of robots over their lifetimes, typically many years if not more than a decade, Talla said.

As part of that, Talla detailed Isaac Nova Orin, a reference design for state-of-the-art compute and sensors for autonomous mobile robots (AMRs) — packed with technologies such as DeepMap, cuOpt and Metropolis.

And he explained how NVIDIA Fleet Command provides secure management for fleets of AMRs.

“This is the industry’s most comprehensive end-to-end robotics platform and we continue to invest in it,” Talla said.

More Power for Gaming and Content Creation

Speaking last, Fisher detailed how NVIDIA is working to deliver innovation to gamers and content creators.

Over the past 20 years, NVIDIA and its partners have dedicated themselves to building the best platform for gaming and creating, Fisher said.

“Hundreds of millions now count on it to play, work and learn,” he said.

NVIDIA RTX, introduced in 2018, has reinvented graphics thanks to advanced features such as real-time ray tracing — and the momentum around it continues to grow.

There are now over 250 RTX-enabled games and applications, double the number at last year’s COMPUTEX, Fisher said.

NVIDIA DLSS continues to set the standard for super resolution with best-in-class performance and image quality, and is now integrated into more than 180 games and applications.

At COMPUTEX, DLSS is in the games that gamers want to play, with 12 new games added to the ever-growing library.

Among the highlights: the developers of the critically acclaimed HITMAN 3 announced they will be adding NVIDIA DLSS, along with ray-traced reflections and ray-traced shadows, on May 24.

In addition, NVIDIA Reflex is now supported in 38 games, 22 displays, and 45 mice. With over 20M gamers playing with Reflex ON every month, Reflex has become one of NVIDIA’s most successful technologies.

The Reflex ecosystem is continuing to grow: ASUS debuted the world’s first 500Hz G-SYNC display, the ASUS ROG Swift 500Hz gaming monitor. Acer also launched the Predator X28 G-SYNC display. Meanwhile, Cooler Master introduced the MM310 and MM730 gaming mice with Reflex.

Gaming laptops continue to be the fastest-growing PC category and 4th generation Max-Q Technologies — the latest iteration of NVIDIA’s design for thin and light laptops — is delivering a new level of power efficiency. GeForce RTX laptop models now total over 180.

“These are our most portable, highest performance laptops ever,” Fisher said.

These powerful systems are being used to help build massive, interconnected 3D destinations.

NVIDIA Studio, the RTX-powered platform that includes dozens of SDKs and accelerates the top creative apps and tools, and NVIDIA Omniverse, the company’s platform for building interconnected 3D virtual worlds, are designed to enable collaboration and construction of these virtual worlds, Fisher said.

Omniverse is getting a number of updates to accelerate creator workflows. Omniverse Cloud Simple Share, now in closed early access, allows users to send an Omniverse scene for others to view with a single click. Audio2Emotion will soon be coming to Audio2Face, providing an AI-powered animation feature that generates realistic facial expressions based on an audio file, Fisher said.

In addition, the Omniverse XR App is now available in beta. With it you can open your photorealistic Omniverse scene and experience it, fully immersive, in Virtual Reality, Fisher said. And Omniverse Machinima has been updated to make it easier than ever for 3D artists to create animated shorts.

“Omniverse is the future of 3D content creation and how virtual worlds will be built,” Fisher said.

“Over the past 20 years, NVIDIA and our partners have dedicated ourselves to building the best platform for gaming and creating,” Fisher said. “Hundreds of millions now count on it to play, work, and learn.”

Featured image credit: ynes95, some rights reserved.


NVIDIA Adds Liquid-Cooled GPUs for Sustainable, Efficient Computing

In the worldwide effort to halt climate change, Zac Smith is part of a growing movement to build data centers that deliver both high performance and energy efficiency.

He’s head of edge infrastructure at Equinix, a global service provider that manages more than 240 data centers and is committed to becoming the first in its sector to be climate neutral.

“We have 10,000 customers counting on us for help with this journey. They demand more data and more intelligence, often with AI, and they want it in a sustainable way,” said Smith, a Juilliard grad who got into tech in the early 2000s building websites for fellow musicians in New York City.

Marking Progress in Efficiency

As of April, Equinix has issued $4.9 billion in green bonds. These investment-grade instruments will be applied to reducing environmental impact, in part by optimizing power usage effectiveness (PUE), an industry metric of how much of the energy a data center uses goes directly to computing tasks.

Data center operators are trying to shave that ratio ever closer to the ideal of 1.0 PUE. Equinix facilities average 1.48 PUE today, with the company’s best new data centers hitting less than 1.2.
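
PUE is simply total facility energy divided by the energy delivered to the IT equipment, so 1.0 would mean every watt goes to computing. A quick sketch using the figures above:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: 1.0 is the ideal, where all energy reaches IT gear."""
    return total_facility_kwh / it_equipment_kwh

# A facility drawing 1,480 kWh to deliver 1,000 kWh of compute matches Equinix's
# current average; its best new data centers come in under 1.2.
print(pue(1480, 1000))  # 1.48
print(pue(1180, 1000))  # 1.18, i.e., "less than 1.2"
```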

Equinix is making steady progress in the energy efficiency of its data centers as measured by PUE.

In another step forward, Equinix opened a dedicated facility in January to pursue advances in energy efficiency. One part of that work focuses on liquid cooling.

Born in the mainframe era, liquid cooling is maturing in the age of AI. It’s now widely used inside the world’s fastest supercomputers in a modern form called direct-chip cooling.

Liquid cooling is the next step in accelerated computing for NVIDIA GPUs that already deliver up to 20x better energy efficiency on AI inference and high performance computing jobs than CPUs.

Efficiency Through Acceleration 

If you switched all the CPU-only servers running AI and HPC worldwide to GPU-accelerated systems, you could save a whopping 11 trillion watt-hours of energy a year. That’s like saving the energy more than 1.5 million homes consume in a year.
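
The arithmetic behind that comparison is straightforward: dividing the claimed savings by 1.5 million homes implies roughly 7,300 kWh per home per year.

```python
savings_wh = 11e12  # 11 trillion watt-hours saved per year
homes = 1.5e6       # the number of homes cited

kwh_per_home = savings_wh / homes / 1_000
print(f"{kwh_per_home:,.0f} kWh per home per year")  # ~7,333
```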

Today, NVIDIA adds to its sustainability efforts with the release of its first data center PCIe GPU using direct-chip cooling.

Equinix is qualifying the A100 80GB PCIe Liquid-Cooled GPU for use in its data centers as part of a comprehensive approach to sustainable cooling and heat capture. The GPUs are sampling now and will be generally available this summer.

Saving Water and Power

“This marks the first liquid-cooled GPU introduced to our lab, and that’s exciting for us because our customers are hungry for sustainable ways to harness AI,” said Smith.

Data center operators aim to eliminate chillers that evaporate millions of gallons of water a year to cool the air inside data centers. Liquid cooling promises systems that recycle small amounts of fluids in closed systems focused on key hot spots.

“We’ll turn a waste into an asset,” he said.

Same Performance, Less Power

In separate tests, both Equinix and NVIDIA found a data center using liquid cooling could run the same workloads as an air-cooled facility while using about 30 percent less energy. NVIDIA estimates the liquid-cooled data center could hit 1.15 PUE, far below 1.6 for its air-cooled cousin.
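
Those two results are consistent with each other: for a fixed IT load, total facility energy scales with PUE, so moving from 1.6 to 1.15 cuts overall consumption by roughly 28 percent, in line with the measured “about 30 percent.”

```python
air_cooled_pue, liquid_cooled_pue = 1.6, 1.15

# For the same IT load, total energy is proportional to PUE.
savings = 1 - liquid_cooled_pue / air_cooled_pue
print(f"{savings:.0%}")  # 28%, matching "about 30 percent less energy"
```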

Liquid-cooled data centers can pack twice as much computing into the same space, too. That’s because the liquid-cooled A100 GPUs use just one PCIe slot, while air-cooled A100 GPUs fill two.

NVIDIA sees power savings and density gains with liquid cooling.

At least a dozen system makers plan to incorporate these GPUs into their offerings later this year. They include ASUS, ASRock Rack, Foxconn Industrial Internet, GIGABYTE, H3C, Inspur, Inventec, Nettrix, QCT, Supermicro, Wiwynn and xFusion.

A Global Trend

Regulations setting energy-efficiency standards are pending in Asia, Europe and the U.S. That’s motivating banks and other large data center operators to evaluate liquid cooling, too.

And the technology isn’t limited to data centers. Cars and other systems need it to cool high-performance systems embedded inside confined spaces.

The Road to Sustainability

“This is the start of a journey,” said Smith of the debut of liquid-cooled mainstream accelerators.

Indeed, we plan to follow up the A100 PCIe card with a version next year using the H100 Tensor Core GPU based on the NVIDIA Hopper architecture. We plan to support liquid cooling in our high-performance data center GPUs and our NVIDIA HGX platforms for the foreseeable future.

For fast adoption, today’s liquid-cooled GPUs deliver the same performance for less energy. In the future, we expect these cards will provide an option of getting more performance for the same energy, something users say they want.

“Measuring wattage alone is not relevant; the performance you get for the carbon impact you have is what we need to drive toward,” said Smith.

Learn more about our new A100 PCIe liquid-cooled GPUs.


NVIDIA Partners Announce Wave of New Jetson AGX Orin Servers and Appliances at COMPUTEX

More than 30 leading technology partners worldwide announced this week the first wave of NVIDIA Jetson AGX Orin-powered production systems at COMPUTEX in Taipei.

New products are coming from a dozen Taiwan-based camera, sensor and hardware providers for use in edge AI, AIoT, robotics and embedded applications.

Available worldwide since GTC in March, the NVIDIA Jetson AGX Orin developer kit delivers 275 trillion operations per second, packing over 8x the processing power of its predecessor, NVIDIA AGX Xavier, in the same pin-compatible form factor.

Jetson Orin features the NVIDIA Ampere architecture GPU, Arm Cortex-A78AE CPUs, next-generation deep learning and vision accelerators, high-speed interfaces, faster memory bandwidth, and multimodal sensor support capable of feeding multiple, concurrent AI application pipelines.

Offering server-class performance for edge AI, new Jetson AGX Orin production modules will be available in July, while Orin NX modules are coming in September.

“The new Jetson AGX Orin is supercharging the next generation of robotics and edge AI applications,” said Deepu Talla, vice president of Embedded and Edge Computing at NVIDIA. “This momentum continues to accelerate as our ecosystem partners release Jetson Orin-based production systems in various form factors tailored towards specific industries and use cases.”

Robust Product and Partner Ecosystem

Jetson-based products announced include servers, edge appliances, industrial PCs, carrier boards, AI software and more. They will come in fan and fanless configurations with multiple connectivity and interface options, including specifications for commercial or ruggedized applications in robotics, manufacturing, retail, transportation, smart cities, healthcare and other essential sectors of the economy.

Among the releases are Taiwan-based members of the NVIDIA Partner Network, including AAEON, Adlink, Advantech, Aetina, AIMobile, Appropho, AverMedia, Axiomtek, EverFocus, Neousys, Onyx and Vecow.

Other NVIDIA partners launching new Jetson Orin-based solutions worldwide include Auvidea, Basler AG, Connect Tech, D3 Engineering, Diamond Systems, e-Con Systems, Forecr, Framos, Infineon, Leetop, Leopard Imaging, MiiVii, Quectel, RidgeRun, Sequitur, Silex, SmartCow, Stereolabs, Syslogic, Realtimes, Telit and TZTEK, to name a few.

Million-Plus Jetson Developers

Today more than 1 million developers and over 6,000 companies are building commercial products on the NVIDIA Jetson edge AI and robotics computing platform to create and deploy autonomous machines and edge AI applications.

And, with over 150 members, the growing Jetson ecosystem of partners offers a wide range of support, including from companies specializing in AI software, hardware and application design services, cameras, sensors and peripherals, developer tools and development systems. This year, the AAEON BOXER-8240, powered by the Jetson AGX Xavier, won the COMPUTEX 2022 Best Choice Golden Award.

Developers are building their next-generation applications on the Jetson AGX Orin developer kit for seamless deployment on the production modules. Jetson AGX Orin users can tap into the NVIDIA CUDA-X accelerated computing stack, NVIDIA JetPack SDK and the latest NVIDIA tools for application development and optimization, including cloud-native development workflows.

Comprehensive Software Support

Jetson Orin enables developers to deploy the largest, most complex models needed to solve edge AI and robotics challenges in natural language understanding, 3D perception, multisensor fusion and other areas.

“NVIDIA is the recognized leader in AI and continues to leverage this expertise to advance robotics through a robust ecosystem and complete end-to-end solutions, including a range of hardware platforms that leverage common tools and neural network models,” said Jim McGregor, principal analyst at TIRIAS Research.

“The new Jetson platform brings the performance and versatility of the NVIDIA Ampere architecture to enable even further advancements in autonomous mobile robots for a wide range of applications ranging from agriculture and manufacturing to healthcare and smart cities,” he said.

Pretrained models from the NVIDIA NGC catalog are optimized and ready for fine-tuning with the NVIDIA TAO toolkit and customer datasets. This reduces time and cost for production-quality AI deployments, while cloud-native technologies allow seamless updates throughout a product’s lifetime.

For specific use cases, NVIDIA software platforms include NVIDIA Isaac Sim on Omniverse for robotics; Riva, a GPU-accelerated SDK for building speech AI applications; the DeepStream streaming analytics toolkit for AI-based multi-sensor processing, video, audio and image understanding; as well as Metropolis, an application framework, set of developer tools and partner ecosystem that brings visual data and AI together to improve operational efficiency and safety across industries.

Watch NVIDIA’s COMPUTEX keynote address on Monday, May 23, at 8 p.m. PT.


Master of Arts: NVIDIA RTX GPUs Accelerate Creative Ecosystems, Delivering Unmatched AI and Ray-Tracing Performance

The future of content creation was on full display during the virtual NVIDIA keynote at COMPUTEX 2022, as the NVIDIA Studio platform expands with new Studio laptops and RTX-powered AI apps — all backed by the May Studio Driver released today.

Built-for-creator designs from ASUS, Lenovo, Acer and HP join the NVIDIA Studio laptop lineup. With up to GeForce RTX 3080 Ti or NVIDIA RTX A5500 GPUs, these new machines power unrivaled performance in 3D rendering and AI applications.

NVIDIA Studio is powering the AI revolution in content creation, giving creators time-saving tools that help them go from concept to completion faster. A host of AI-powered software updates are supported in the latest driver. Notably, dive In the NVIDIA Studio with Blackmagic Design DaVinci Resolve 18 to explore three new features that will reduce previously tedious tasks to simple button clicks.

New Hardware on Display

ASUS recently announced the Zenbook Pro 14 Duo, Pro 16X OLED and Pro 17, plus Vivobook Pro 14X, 15X and 16X laptops with up to GeForce RTX 30 Series Laptop GPUs. These new systems join the ProArt line as NVIDIA Studio laptops, giving creators a slew of options: professional-grade ProArt laptops with displays apt for film editing; the portable and balanced Zenbooks with beautiful designs and powerful GPUs; and the new Vivobooks, great for aspiring creators or advanced users.

Ignite creativity with the ASUS Vivobook 16X featuring a 16-inch NanoEdge 4K OLED display and the exclusive ASUS DialPad for intuitive and precise creative tool control, the world’s first in a laptop.

Unleash the full force of your creative ambitions with new NVIDIA Studio laptops from Lenovo. The Lenovo Slim 7i Pro X and Lenovo Slim 7 Pro X (or Yoga Slim 7i Pro X and Yoga Slim 7 Pro X in some regions) come with a 3K 120Hz Lenovo PureSight display, hardware calibrated for Delta E <1 color accuracy, sporting 100% sRGB color space and color volume – for full accuracy no matter the display brightness. These laptops feature up to a GeForce RTX 3050 GPU.

The Lenovo Slim 7 Pro X sports a 120Hz refresh rate, touch support and a pin-sharp 3K PureSight display, all in a lightweight, aesthetically pleasing design.

Acer’s ConceptD 5 and ConceptD 5 Pro come equipped with up to an NVIDIA GeForce RTX 3070 Ti and RTX A5500 GPU, respectively. Less than an inch thick, their sophisticated and durable metal design makes them easy to take on the road.

Acer’s ConceptD 5 Pantone-validated, 16-inch, OLED screen displays beautiful, color-accurate imagery, all with a sophisticated, matte finish design.

The HP ZBook Studio G9 is engineered to deliver pro-level performance in a thin and light form factor. Equipped with up to an NVIDIA RTX A5500 or GeForce RTX 3080 Ti Laptop GPU and a professional-grade HP DreamColor display, the HP ZBook Studio G9 offers optimal performance for multitasking, rendering 3D models and using powerful creative tools. HP also announced the HP Envy 16, fitted with a GeForce RTX 3060. With a beautiful design and extended display, the HP Envy 16 is a fantastic laptop for video editors.

Creative professionals with the HP ZBook Studio G9 benefit from the beautiful HP DreamColor display with optimal performance for rendering 3D models, video editing and completing complex creative tasks.

It’s Not Magic, It’s ‘In the NVIDIA Studio’

This week In The NVIDIA Studio, take a deeper look at three new features that help streamline video editing with RTX GPUs in Blackmagic Design’s DaVinci Resolve 18.

DaVinci Resolve is the only all-in-one editing, color grading, visual effects (VFX) and audio post-production app. NVIDIA Studio benefits extend into the software, with GPU-accelerated color grading, video editing, and color scopes; hardware encoder and decoder accelerated video transcoding; and RTX-accelerated AI features.

In addition to the incredibly valuable new cloud collaboration update, which allows multiple editors, colorists, VFX artists and audio engineers to work simultaneously — on the same project, on the same timeline, anywhere in the world — the recent update also introduced a number of new features accelerated on RTX GPUs.

Automatic Depth Map uses AI to instantly generate a 3D depth matte of a scene to quickly grade the foreground separately from the background, and vice versa.

Generate 3D depth scenes using AI with the new Automatic Depth Map feature in DaVinci Resolve 18.

The feature enables creators to easily add creative effects and color corrections to footage. Change the mood by adding environmental effects like fog or atmosphere. It also makes it easier to mimic the characteristics of different high-quality lenses by adding blur or depth of field to further enhance the shot.

Object Mask Tracking also takes advantage of AI to recognize and track the movement of thousands of unique objects without having to manually rotoscope.

Object Mask Tracking in DaVinci Resolve 18 can track the movement of thousands of unique objects, eliminating manual rotoscoping. Image courtesy of Blackmagic Design.

Found within the Magic Mask palette, the DaVinci Neural Engine intuitively isolates animals, vehicles, people and food, plus countless other elements, for advanced secondary grading and effects application.

Surface Tracking uses the CUDA cores found on RTX GPUs to quickly calculate and track any surface and apply graphics to surfaces that warp or change perspective in dramatic ways.

Add static or animated graphics to moving objects with the new Surface Tracking feature in DaVinci Resolve 18.

It allows creators to add static or animated graphics to just about anything that moves. The customizable mesh follows the motion of textured surfaces, meaning the feature works even on visuals that warp or change perspective — like a wrinkled t-shirt on an individual who’s in motion. It also allows for quick and easy cloning out of unwanted objects.

With NVIDIA GPUs doing all the hard work, creators can leverage these newly unlocked features to eliminate long manual work, resulting in more time to focus on creating.

Supplementing Creativity With AI

New AI features support creators by helping to reduce or eliminate tedious tasks.

Topaz Labs Gigapixel AI increases image resolution in a natural way for higher quality scaled images.

Updated to version 6.1 this month, Topaz Labs’ Gigapixel AI introduced improvements to face recovery when upscaling photos, with notable performance improvements on NVIDIA GPUs. By transitioning the AI models from DirectML to TensorRT, users can process photos up to 2.5x faster by leveraging the Tensor Cores on their RTX GPUs.

Marmoset Toolbag 4.04, available now, includes a ton of new features. One example is an updated Depth of Field option in the camera object that includes ray-traced depth of field, producing a higher quality effect with more natural transitions between the subject and out-of-focus areas. The update also migrates the software to DirectX 12, giving NVIDIA GeForce RTX users a 1.3x increase in rendering speeds.

Reallusion recently unveiled iClone 8 and Character Creator 4, along with updated Omniverse Connectors for each. iClone 8 introduces NVIDIA volumetric lighting and GPU-accelerated skinning for ActorCore characters, ensuring smooth animations.

Time-saving AI features in these apps, including DaVinci Resolve 18, are all backed by the May Studio Driver available for download today.

NVIDIA Omniverse Evolution

Creators globally are using NVIDIA Omniverse as a hub to interconnect 3D workflows. At COMPUTEX, NVIDIA introduced Omniverse features to help creators and technical artists create faster and easier than ever.

Introducing Omniverse Cloud and Omniverse XR (beta) with updates to Audio2Face and Machinima.

Omniverse Cloud is a suite of cloud services helping 3D designers, artists and developers collaborate easily from anywhere. Omniverse Cloud Simple Share is now available for early access by application — it lets users click once to package and send an Omniverse scene to friends.

Audio2Face: quickly and easily generate expressive facial animation from just an audio source with NVIDIA’s deep learning AI technology.

The Omniverse Audio2Face app has a suite of new updates launching in a few weeks, including full facial animation control and Audio2Emotion — an AI-powered animation feature that generates realistic facial expressions from just an audio file.

The Omniverse XR App (beta) is the world’s first full-fidelity, fully ray-traced virtual reality, allowing modelers to see every reflection, soft shadow and limitless lights — and enabling instant rendering of high-poly models without special imports.

Omniverse Machinima has a reinvented sequencer, as well as animation and rendering features that make it easier than ever for 3D artists to make animated shorts. New free game assets are also now available in the app — including Post Scriptum, Beyond the Wire, Shadow Warrior 3 and Squad.

The #MadeinMachinima contest is in full swing. Easily create an animated short with Omniverse materials, physics effects and game assets to win top-of-the-line Studio laptops.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the NVIDIA Studio newsletter.


Energy Grids Plug into AI for a Brighter, Cleaner Future

Electric utilities are taking a course in machine learning to create smarter grids for tough challenges ahead.

The winter 2021 megastorm in Texas left millions without power. Grid failures the past two summers sparked devastating wildfires amid California’s record drought.

“Extreme weather events of 2021 highlighted the risks climate change is introducing, and the importance of investing in more resilient electricity grids,” said a May 2021 report from the International Energy Agency, a group with members from more than 30 countries. It called for a net-zero carbon grid by 2050, fueled by hundreds more gigawatts of renewable sources.

The goal demands a transformation. Yesterday’s hundred-year-old grid — a one-way system from a few big power plants to many users — must morph into a two-way, flexible, distributed network connected to homes and buildings that sport solar panels, batteries and electric vehicles.

Given the changes ahead, experts say the grid must expand autonomous control systems that gather data at every node and use it to respond in real time.

An Essential Ingredient

“AI will play a crucial role maintaining stability for an electric grid that’s becoming exponentially more complex, with large numbers of low-capacity, variable generation sources like wind and solar coming online and two-way power flowing into and out of houses,” said Jeremy Renshaw, a senior program manager at the Electric Power Research Institute (EPRI), an independent nonprofit that collaborates with more than 450 companies in 45 countries on energy R&D.

“AI can support grid operators already stretched to their limits by automating repetitive or time-consuming tasks,” said Renshaw, who manages EPRI’s AI initiative.

Rick Perez, a principal at Deloitte Consulting LLP with more than 16 years working with utilities and data analytics, agrees.

“The future energy grid will be distributed and fueled by thousands of intermittent power sources including wind farms and various storage technologies. Managing it requires advanced AI methods and high performance computing,” he said.

Real Projects, Real Results

Work is already underway at power plants and substations, on distribution lines and inside homes and businesses.

“Some of the largest utilities in the U.S. are taking the first steps of creating a data engineering platform and an edge-computing practice, using sensor arrays and real-time analysis,” said Perez.

For example, a utility in a large U.S. city recently got traction with AI on NVIDIA GPUs, determining in less than 30 minutes the best truck routes for responding to a storm. Past efforts on CPU-based systems took up to 36 hours, too long to be useful.

To show utilities what’s possible, Deloitte runs jobs on NVIDIA DGX A100 systems in its Center for AI Computing. One effort combines data on the state of the electric grid with local weather conditions to identify — in time to dispatch a repair crew — distribution lines caked with ice and in danger of failing.

“Because it’s an open system, we could use our existing IT staff and, with NVIDIA’s support, do supercomputing-class work for our client,” Perez said.

Building AI Models, Datasets

At EPRI, Renshaw reports progress on several fronts.

For example, more than 300 organizations have joined its L2RPN challenge to build AI models with reinforcement learning. Some are capable of controlling as many as five tasks at once to prevent an outage.

“We want to automate 80 percent of the mundane tasks for operators, so they can do a better job focusing on the 20 percent of the most complex challenges,” said Renshaw.

A 2021 report on how AI can address climate change cited the L2RPN work as an important use case; the challenge is expanding this year to include more complex models.
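
L2RPN agents are typically developed against Grid2Op, the open-source power-grid environment behind the challenge, which follows the familiar gym-style loop. A minimal sketch under that assumption, running the do-nothing baseline an RL agent must beat:

```python
# Assumes the open-source Grid2Op package (pip install grid2op), which powers
# the L2RPN challenge and exposes a gym-style reset/step interface.
import grid2op

env = grid2op.make("l2rpn_case14_sandbox")  # a small benchmark grid
obs = env.reset()
done, total_reward = False, 0.0

while not done:
    action = env.action_space({})  # "do nothing": the baseline an agent must beat
    obs, reward, done, info = env.step(action)
    total_reward += reward

print(f"Do-nothing episode reward: {total_reward:.1f}")
```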

Separately, EPRI is curating 10 sets of anonymized data that utilities can use to train AI models for their most critical jobs. One is a database that already sports 150,000 drone-captured images of aging equipment on power lines.

EPRI also leads a startup incubator where utilities can collaborate with AI startups like Noteworthy AI, a member of NVIDIA Inception, to work on innovative projects. To keep shared data private, it can use NVIDIA FLARE software to train AI models.

Power Plants Get Digital Twins

Both EPRI and Deloitte are helping create industrial digital twins to optimize operations and training at power plants. For example, a power plant in one southern U.S. state is acting as a demo facility in an EPRI project that’s gathered broad interest.

Separately, Deloitte plans to use NVIDIA Omniverse Enterprise to develop a physically accurate digital twin of a nuclear power plant for worker training scenarios.

“Regulators are providing multiple grants for building digital twins of power plants to increase safety and reduce the high costs of shutting systems down for tests,” Perez said.

Truly Smart Meters Debut This Year

Similarly, both EPRI and Deloitte are helping define the next generation of smart meters.

“We call today’s systems smart meters, but in reality they send maybe one data point every 15 minutes, which is very slow by today’s standards,” said Renshaw.

By contrast, software-defined smart grid chips and meters in development by Anuranet and Utilidata (the latter a member of NVIDIA Inception, a free program for cutting-edge startups) use the next generation of the NVIDIA Jetson edge AI platform to process more than 30,000 data points per second. They seek insights that save energy and cost while increasing the grid’s resilience.

“If we can get sub-second data, it opens up a wealth of opportunities — we’ve identified 81 use cases for data from the next generation of smart meters,” he said.

Renshaw noted that AI using data from one of these new meters could have predicted that his home HVAC system needed repair before it failed last year, costing him more than $1,000.
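
One way such a prediction could work is simple online anomaly detection running on the meter itself. The sketch below, a rolling z-score over power readings, is purely illustrative and not any vendor’s actual algorithm:

```python
from collections import deque
import statistics

WINDOW = 30_000  # roughly one second of readings at 30,000 data points per second

def detect_anomalies(readings, threshold=4.0):
    """Yield (time, watts) readings that sit far outside the recent rolling window.

    Illustrative only: a real meter would use an optimized, incremental method.
    """
    window = deque(maxlen=WINDOW)
    for t, watts in readings:
        if len(window) >= 2:
            mean = statistics.fmean(window)
            stdev = statistics.pstdev(window)
            if stdev > 0 and abs(watts - mean) / stdev > threshold:
                yield t, watts  # e.g., an HVAC compressor drawing abnormal current
        window.append(watts)

# Example: a steady ~500 W load with one 5 kW spike gets flagged.
stream = [(i, 500.0 + i % 5) for i in range(100)] + [(100, 5000.0)]
print(list(detect_anomalies(stream)))  # [(100, 5000.0)]
```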

An Inflection Point

In addition, EPRI has pilot programs in two office buildings using AI to reduce energy waste by as much as 30 percent. And it’s starting a collaboration on ways machine learning could enhance cybersecurity, a rising concern in the wake of last year’s ransomware attack on an energy pipeline.

The to-do list goes on. The good news, said Perez, is significant funding is on the way to create a smarter, cleaner and more secure grid with initiatives around the globe, including the U.S. Infrastructure Investment and Jobs Act.

“We’re at an inflection point, and there simply is no viable plan for the grid’s future without AI and high performance computing,” he said.

Watch a GTC talk (viewable on-demand with registration) to see how utilities can use edge AI and high performance computing to modernize grid operations. And learn more about NVIDIA’s work with utilities and NVIDIA Inception.


What is Extended Reality?

Advances in extended reality have already changed the way we work, live and play, and it’s just getting started.

Extended reality, or XR, is an umbrella category that covers a spectrum of newer, immersive technologies, including virtual reality, augmented reality and mixed reality.

From gaming to virtual production to product design, XR has enabled people to create, collaborate and explore in computer-generated environments like never before.

What Is Extended Reality?

Virtual, augmented and mixed reality are all elements of XR technology.

Virtual reality puts users inside a virtual environment. VR users typically wear a headset that transports them into a virtual world — one moment they’re standing in a physical room, and the next they’re immersed in a simulated environment.

The latest VR technologies push these boundaries, making these environments look and behave more like the real world. They’re also adding support for additional senses, including touch, sound and smell.

With VR, gamers can become fully immersed in a video game, designers and customers can review building projects to finalize details prior to construction, and retailers can test virtual displays before committing to a physical one.

Augmented reality overlays a rendered image onto the real world. The mobile game Pokémon GO famously brought AR to the mainstream by showing computer-rendered monsters standing on lawns and sidewalks as players roam their neighborhoods.

AR graphics are visible through cell phones, tablets and other devices, bringing a new kind of interactive experience to users. Navigation, for example, can be improved with AR: rather than making drivers follow a 2D map, a windshield display can superimpose directions over their view of the road, with simulated arrows showing exactly where to turn.

Mixed reality is a seamless integration of the real world and rendered graphics, which creates an environment in which users can directly interact with the digital and physical worlds together.

With MR, real and virtual objects blend, and are presented together within a single display. Users can experience MR environments through a headset, phone or tablet, and can interact with digital objects by moving them around or placing them in the physical world.

There are two types of MR:

  • Mixing virtual objects into the real world — for instance, where a user sees the real world through cameras in a VR headset with virtual objects seamlessly mixed into the view. See this example video.
  • Mixing real-world objects into virtual worlds — for example, a camera view of a VR participant mixed into the virtual world, like watching a VR gamer playing in a virtual world.

The History of XR

To understand how far XR has come, consider its origins in VR.

VR began in the federal sector, where it was used to train people in flight simulators. The energy and automotive design industries were also early adopters. These simulation and visualization VR use cases required large supercomputers. They also needed dedicated spaces, including powerwalls, which are ultra-high-resolution displays, and VR CAVEs, which are empty rooms that have the VR environment projected on each surface, from the walls to the ceiling.

For decades, VR remained unaffordable for most users, and the small VR ecosystem was mainly composed of large institutions and academic researchers.

But early in the previous decade, several key component technologies reached a tipping point, which precipitated the launch of the HTC Vive and Oculus Rift head-mounted displays (HMDs), along with the SteamVR runtime.

Individuals could now purchase personal HMDs to experience great immersive content. And they could drive those HMDs and experiences from an individual PC or workstation with a powerful GPU.

Suddenly, VR was accessible to millions of individuals, and a large ecosystem quickly sprung up, filled with innovation and enthusiasm.

In recent years, a new wave of VR innovation started with the launch of all-in-one (AIO) headsets. Previously, fully immersive VR experiences required a physical connection to a powerful PC. The HMD couldn’t operate as a self-contained device, as it had no operating system and no ability to compute the image.

But with AIO headsets, users gained access to a dedicated device with a simple setup that could deliver fully tracked VR anywhere, anytime. Coupled with the innovation of VR streaming technology, users could now experience powerful VR environments, even while on the go.

Latest Trends in XR

High-quality XR is becoming increasingly accessible. Consumers worldwide are purchasing AIOs to experience XR, from immersive gaming to remote learning to virtual training. Large enterprises are adding XR into their workflows and design processes. Paired with a digital twin, XR drastically improves design implementation.

Image courtesy of Innoactive.

And one of today’s biggest trends is streaming XR experiences through 5G from the cloud. This removes the need to be tethered to workstations or limit experiences to a single space.

By streaming over 5G from the cloud, people can use XR devices and get the computational power to run XR experiences from a data center, regardless of location and time. Advanced solutions like NVIDIA CloudXR are making immersive streaming more accessible, so more XR users can experience high-fidelity environments from anywhere.

AR is also becoming more common. After Pokémon GO became a household name, AR emerged in a number of additional consumer-focused areas. Many social media platforms added filters that users could overlay on their faces. Retailers incorporated AR to showcase photorealistic rendered 3D products, enabling customers to place these products in a room and visualize them in any space.

Plus, enterprises in various industries like architecture, manufacturing, healthcare and more are using the technology to vastly improve workflows and create unique, interactive experiences. For example, architects and design teams are integrating AR for construction project monitoring, so they can see onsite progress and compare it to digital designs.

And though it’s still fairly new, MR is developing quickly in the XR space, as shown by the emergence of new headsets built for MR, including the Varjo XR-3. With MR headsets, professionals in engineering, design, simulation and research can develop and interact with their 3D models in real life.

Varjo XR-3 headset. Image courtesy of Varjo.

The Future of XR

As XR technology advances, another technology is propelling users into a new era: artificial intelligence.

AI will play a major role in the XR space, from virtual assistants helping designers in VR to intelligent AR overlays that can walk individuals through do-it-yourself projects.

For example, imagine wearing a headset and directing the content through natural speech and gestures. With hands-free and speech-driven virtual agents at the ready, even non-experts will be able to create amazing designs, complete exceedingly complex projects and harness the capabilities of powerful applications.

Platforms like NVIDIA Omniverse have already changed how users create 3D simulations and virtual worlds. Omniverse allows users from across the globe to develop and operate digital twin simulations. The platform provides users with the flexibility to portal into the physically accurate, fully ray-traced virtual world through 2D monitors, or their preferred XR experience, so they can experience vast virtual worlds immersively.

As we enter the next evolution of XR, the possibilities are virtually limitless.

Learn more and see how organizations can integrate XR with NVIDIA technologies.

Featured blog image includes KPF and Lenovo.
