AI’s Highlight Reel: Top Five NVIDIA Videos of 2022

If AI had a highlight reel, the NVIDIA YouTube channel might just be it.

The channel showcases the latest breakthroughs in artificial intelligence, with demos, keynotes and other videos that help viewers see and believe the astonishing ways in which the technology is changing the world.

NVIDIA’s most popular videos of 2022 put spotlights on photorealistically animated data centers, digital twins for climate science, AI for healthcare and more.

And the latest GTC keynote address by NVIDIA founder and CEO Jensen Huang racked up 19 million views in just three months, making it the channel’s most-watched video of all time.

It all demonstrates the power of AI, its growth and applications.

But don’t just take our word for it — watch NVIDIA’s top five YouTube videos of the year:

Meet NVIDIA — the Engine of AI

While watching graphics cards dance and autonomous vehicles cruise, learn more about how NVIDIA’s body of work is fueling all things AI.

NVIDIA DGX A100 — Bringing AI to Every Industry

In a dazzling clip that unpacks NVIDIA DGX A100, the universal system for AI workloads, check out the many applications for the world’s first 5 petaFLOPS AI system.

A New Era of Digital Twins and Virtual Worlds With NVIDIA Omniverse

Watch stunning demos and hear about how the NVIDIA Omniverse platform enables real-time 3D simulation, design collaboration and the creation of virtual worlds.

Optimizing an Ultrarapid DNA Sequencing Technique for Critical Care Patients

A collaboration including NVIDIA led to a record-breaking AI technique in which a whole genome was sequenced in about seven hours.

Maximizing Wind Energy Production Using Wake Optimization

Dive into how Siemens Gamesa is using NVIDIA-powered, physics-informed, super-resolution AI models to simulate wind farms and boost energy production.

The post AI’s Highlight Reel: Top Five NVIDIA Videos of 2022 appeared first on NVIDIA Blog.

Accelerated Computing, AI and Digital Twins: A Recipe for US Manufacturing Leadership

A national initiative in semiconductors provides a once-in-a-generation opportunity to energize manufacturing in the U.S.

The CHIPS and Science Act includes a $13 billion R&D investment in the chip industry. Done right, it’s a recipe for bringing advanced manufacturing techniques to every industry and cultivating a highly skilled workforce.

The semiconductor industry uses the most complex manufacturing processes and equipment in human history. To produce each chip inside a car or computer, hundreds of steps must be executed perfectly, most already automated with robotics.

The U.S. government asked industry where it should focus its efforts on improving this sector. In response, NVIDIA released a 12-page document with its best ideas.

Supercharged with accelerated computing and AI, a modern fab is also a guidepost for all other types of complex manufacturing — from making smartphones to shoes — flexibly and efficiently.

The World’s Most Expensive Factories

Semiconductors are made in factories called fabs. Building and outfitting a new one costs as much as $20 billion.

The latest factories rely heavily on computers that are built, programmed and operated by skilled workers armed with machine learning for the next generation of manufacturing processes.

For example, AI can find patterns no human can see, including tiny defects in a product on a fast-moving assembly line. The semiconductor industry needs this technology to create tomorrow’s increasingly large and complex chips. Other industries will be able to use it to make better products faster, too.

Efficiency Through Simulation

We can now create a digital copy of an entire factory. Using NVIDIA technologies, BMW is already building a digital twin of one of its automotive plants to bring new efficiencies to its business.

No one has built anything as complex as a digital twin of a chip fab yet, but that goal is now within reach.

A virtual fab would let specialists design and test new processes much more quickly and cheaply without stopping production in a physical plant. A simulation also can use AI to analyze data from sensors inside physical factories, finding new ways to route materials that reduce waste and speed operations.

Soon, any manufacturing plant with a digital twin will be more economically competitive than a plant without one.

Virtual Factories, Real Operators

Digital twins enable remote specialists to collaborate as if they were in the same room. They also take worker training to a new level.

Some of the most vital tools in a fab are the size of a shipping container and cost as much as $200 million each. Digital twins let workers train on these expensive systems before they’re even installed.

Once trained, workers can qualify, operate and service them without needing to set foot in the ultra-clean rooms where they’re installed. This kind of work represents the future of all manufacturing.

Factories designed with virtual twins can also optimize energy efficiency, minimize water consumption and maximize reuse, reducing environmental impact.

Wanted: More Performance per Watt

Tomorrow’s factories will need more computing muscle than ever. To deliver it, we need investments in energy-efficient technologies at every level.

The circuits inside chips need to use and waste significantly less energy. The signals they send to nearby chips and across global networks must move faster while consuming less power.

Computers will need to tackle more data-intensive jobs while increasing productivity. To design and build these systems, we need research on new kinds of accelerator chips, accelerated systems and the software that will run on them.

NVIDIA and others have made great progress in green computing. Now we have an opportunity to take another big step forward.

A Broad Agenda and Partnerships

These are just some of the ways NVIDIA wants to help advance the U.S. semiconductor industry and, by extension, all manufacturers.

No company can do this work alone. Industry, academia and government must collaborate to get this right.

NVIDIA is at the center of a vibrant ecosystem of 3.5 million developers and more than 12,000 global startups registered in the NVIDIA Inception program.

The University of Florida provides a model for advancing AI and data science education across every field of study.

In 2020, it kicked off a plan to become one of the nation’s first AI universities. Today it’s infusing its entire curriculum with machine learning. At its heart, UF’s AI supercomputer is already advancing research in fields such as healthcare, agriculture and engineering.

It’s one more example of the transformative power of accelerated computing and AI. We look forward to the opportunity to take part in this grand adventure in U.S. manufacturing.

To learn more about NVIDIA’s ideas on the future of semiconductor manufacturing, including how AI is critical to advancing lithography, electronic design tools and cybersecurity processes, read the full document.

The post Accelerated Computing, AI and Digital Twins: A Recipe for US Manufacturing Leadership appeared first on NVIDIA Blog.

Safe Travels: NVIDIA DRIVE OS Receives Premier Safety Certification

To make transportation safer, autonomous vehicles (AVs) must have processes and underlying systems that meet the highest standards.

NVIDIA DRIVE OS is the operating system for in-vehicle accelerated computing powered by the NVIDIA DRIVE platform. DRIVE OS 5.2 is now functional safety-certified by TÜV SÜD, one of the most experienced and rigorous assessment bodies in the automotive industry.

TÜV SÜD has determined that the software meets the International Organization for Standardization (ISO) 26262 ASIL B standard, which targets functional safety, or “the absence of unreasonable risk due to hazards caused by malfunctioning behavior of electrical or electronic systems.”

Based in Munich, Germany, TÜV SÜD assesses compliance to national and international standards for safety, durability and quality in cars, as well as for factories, buildings, bridges and other infrastructure.

Safety architecture, design and methodologies are pervasive throughout NVIDIA DRIVE solutions, from the data center to the car. NVIDIA has invested 15,000 engineering years in safety systems and processes.

A Strong Foundation

DRIVE OS is the foundation of the NVIDIA DRIVE SDK and is the first functionally safe operating system for complex in-vehicle accelerated computing platforms.

It includes NVIDIA CUDA libraries for efficient parallel computing, the NVIDIA TensorRT SDK for real-time AI inferencing, the NvMedia library for sensor input processing and other developer tools and modules for access to hardware engines.

NVIDIA is working across the industry to ensure the safe deployment of AVs. It participates in standardization and regulation bodies worldwide, including ISO, the Society of Automotive Engineers (SAE), the Institute of Electrical and Electronics Engineers (IEEE) and more.

Measuring Up

NVIDIA DRIVE is an open platform, meaning experts at top car companies can build upon this industrial-strength system.

TÜV SÜD, among the world’s most respected safety experts, measured DRIVE OS against industry safety standards, specifically ISO 26262, the definitive global standard for functional safety of road vehicles’ systems, hardware and software.

To meet that standard, software must detect failures during operation, as well as be developed in a process that handles potential systematic faults along the whole V-model — from safety-requirements definition to coding, analysis, verification and validation.

That is, the software must avoid failures whenever possible, but detect and respond to them if they cannot be avoided.

TÜV SÜD’s team determined DRIVE OS 5.2 complies with the testing criteria and is suitable for safety-related use in applications up to ASIL B.

Safety Across the Stack

Safety is NVIDIA’s first priority in AV development.

This certification builds on TÜV SÜD’s 2020 assessment of the NVIDIA DRIVE Xavier system-on-a-chip, which determined that it meets ISO 26262 ASIL C for random hardware integrity and achieves ASIL D systematic capability for its development process — the strictest level of functional safety.

These processes all contribute to our dedication to a comprehensive safety approach that extends from the SoC to the operating system, the application software and the cloud.

The post Safe Travels: NVIDIA DRIVE OS Receives Premier Safety Certification appeared first on NVIDIA Blog.

Have a Holly, Jolly Holiday Streaming Top Titles on GeForce NOW

While the weather outside may or may not be frightful this holiday season, new games on GeForce NOW each week make every GFN Thursday delightful.

It doesn’t matter whether you’re on the naughty or nice list. With over 1,400 titles streaming from the cloud, there’s something for everyone to play across nearly all of their devices — including six titles that join the GeForce NOW library today.

Let It Stream, Let It Stream, Let It Stream

With so many games streaming across nearly every device, the options for great gaming and ways to play on GeForce NOW are practically endless.

Light games up like a holiday tree, turning RTX ON for cinematic, real-time ray tracing in titles like Marvel’s Guardians of the Galaxy, Control and Cyberpunk 2077. RTX ON is available to RTX 3080 and Priority members, who also get the perks of extended play sessions and dedicated servers to get into games faster.

Transform Macs into gaming rigs with the power of the cloud and play PC-exclusive titles like New World and Lost Ark, or take top titles to the big screen streaming to NVIDIA SHIELD or Samsung Smart TVs in glorious 4K resolution.

Take gaming on the go while traveling for the holidays. Drop into Fortnite or tap your way through Teyvat in Genshin Impact, streaming to mobile devices with touch controls.

From A to Z, GeForce NOW has top titles streaming across devices.

Experience top titles from publishers like Ubisoft, including Assassin’s Creed Valhalla and Far Cry 6, and enjoy games that will test your skills like ICARUS and Crysis Remastered — all streaming in 4K resolution from PC and Mac apps with an RTX 3080 membership.

Get a head start on building out your library of games with over 100 free-to-play titles like Apex Legends and the newest season of Roller Champions — and take game progress to any device with cloud saves. RTX 3080 members gain a competitive advantage in multiplayer games, as no-sweat streaming with ultra-low latency leads to more victories.

And with the holiday season underway, the Epic Games Store free games are in full swing. Check its Free Games page regularly to claim titles, many of which we’ll work to bring to the cloud in the weeks ahead.

Dash over to the membership page for more information on the benefits of a GeForce NOW premium membership.

’Tis the Season to Get Your Game On

A new set of games arrives just in time as the holiday season heats up.

Play ‘Marvel’s Midnight Suns,’ a tactical role-playing game set in the darker, supernatural side of the Marvel Universe.

Check out the following titles streaming from the cloud this week:

  • Master of Magic (New Release on Steam)
  • Roller Champions (New Release on Steam)
  • Wavetale (New Release on Steam)
  • Cosmoteer: Starship Architect & Commander (Steam)
  • Floodland (Steam)
  • Marvel’s Midnight Suns (Epic Games Store)

Members can also now experience the next-gen update for The Witcher 3: Wild Hunt — Complete Edition. The update is free for those who own the game and GeForce NOW members can take advantage of upgraded visuals across nearly all of their devices.

Keep an eye out as Origin versions of Electronic Arts games transition to the new EA app, starting with Battlefield 2042 this week. Along with ownership of these games, members’ content, cloud saves and friends list will transfer to the EA app.

Give the gift of gaming with all of the perks of a GeForce NOW membership through a GeForce NOW gift card. It’s the perfect stocking stuffer or last-minute gift to treat friends with.

With so many titles to play on the cloud, what game are you most looking forward to playing over the holidays? Let us know on Twitter or in the comments below.

The post Have a Holly, Jolly Holiday Streaming Top Titles on GeForce NOW appeared first on NVIDIA Blog.

2023 Predictions: AI That Bends Reality, Unwinds the Golden Screw and Self-Replicates

After three years of uncertainty caused by the pandemic and its post-lockdown hangover, enterprises in 2023 — even with recession looming and uncertainty abounding — face the same imperatives as before: lead, innovate and problem solve.

AI is becoming the common thread in accomplishing these goals. On average, 54% of enterprise AI projects made it from pilot to production, according to a recent Gartner survey of nearly 700 enterprises in the U.S., U.K. and Germany. A whopping 80% of executives in the survey said automation can be applied to any business decision, and that they’re shifting away from tactical to strategic uses of AI.

The mantra for 2023? Do more with less. Some of NVIDIA’s experts in AI predict businesses will prioritize scaling their AI projects amid layoffs and skilled worker shortages by using cloud-based integrated software and hardware offerings that can be purchased and customized to any enterprise, application or budget.

Cost-effective AI development also is a recurring theme among our expert predictions for 2023. With Moore’s law running up against the laws of physics, installing on-premises compute power is getting more expensive and less energy efficient. And the Golden Screw search for critical components is speeding the shift to the cloud for developing AI applications as well as for finding data-driven solutions to supply chain issues.

Here’s what our experts have to say about the year ahead in AI:

ANIMA ANANDKUMAR
Director of ML Research and Bren Professor at Caltech

Digital Twins Get Physical: We will see large-scale digital twins of physical processes that are complex and multi-scale, such as weather and climate models, seismic phenomena and material properties. This will accelerate current scientific simulations as much as a million-x, and enable new scientific insights and discoveries.

Generalist AI Agents: AI agents will solve open-ended tasks with natural language instructions and large-scale reinforcement learning, while harnessing foundation models — those large AI models trained on a vast quantity of unlabeled data at scale — to enable agents that can parse any type of request and adapt to new types of questions over time.

MANUVIR DAS
Vice President, Enterprise Computing

Software Advances End AI Silos: Enterprises have long had to choose between cloud computing and hybrid architectures for AI research and development — a practice that can stifle developer productivity and slow innovation. In 2023, software will enable businesses to unify AI pipelines across all infrastructure types and deliver a single, connected experience for AI practitioners. This will allow enterprises to balance costs against strategic objectives, regardless of project size or complexity, and provide access to virtually unlimited capacity for flexible development.

Generative AI Transforms Enterprise Applications: The hype about generative AI becomes reality in 2023. That’s because the foundations for true generative AI are finally in place, with software that can transform large language models and recommender systems into production applications that go beyond images to intelligently answer questions, create content and even spark discoveries. This new creative era will fuel massive advances in personalized customer service, drive new business models and pave the way for breakthroughs in healthcare.

KIMBERLY POWELL
Vice President, Healthcare

Biology Becomes Information Science: Breakthroughs in large language models and the fortunate ability to describe biology in a sequence of characters are giving researchers the ability to train a new class of AI models for chemistry and biology. The capabilities of these new AI models give drug discovery teams the ability to generate, represent and predict the properties and interactions of molecules and proteins — all in silico. This will accelerate our ability to explore the essentially infinite space of potential therapies.

Surgery 4.0 Is Here: Flight simulators serve to train pilots and research new aircraft control. The same is now true for surgeons and robotic surgery device makers. Digital twins that can simulate at every scale, from the operating room environment to the medical robot and patient anatomy, are breaking new ground in personalized surgical rehearsals and designing AI-driven human and machine interactions. Long residencies won’t be the only way to produce an experienced surgeon. Many will become expert operators when they perform their first robot-assisted surgery on a real patient.

DANNY SHAPIRO
Vice President, Automotive

Training Autonomous Vehicles in the Metaverse: The more than 250 auto and truck makers, startups, transportation and mobility-as-a-service providers developing autonomous vehicles are tackling one of the most complex AI challenges of our time. It’s simply not possible to encounter every scenario they must be able to handle by testing on the road, so much of the industry in 2023 will turn to the virtual world to help.

On-road data collection will be supplemented by virtual fleets that generate data for training and testing new features before deployment. High-fidelity simulation will run autonomous vehicles through a virtually infinite range of scenarios and environments. We’ll also see the continued deployment of digital twins for vehicle production to improve manufacturing efficiencies, streamline operations and improve worker ergonomics and safety.

Moving to the Cloud: 2023 will bring more software-as-a-service (SaaS) and infrastructure-as-a-service offerings to the transportation industry. Developers will be able to access a comprehensive suite of cloud services to design, deploy and experience metaverse applications anywhere. Teams will design and collaborate on 3D workflows — such as AV development simulation, in-vehicle experiences, cloud gaming and even car configurators delivered via the web or in showrooms.

Your In-Vehicle Concierge: Advances in conversational AI, natural language processing, gesture detection and avatar animation are making their way to next-generation vehicles in the form of digital assistants. This AI concierge can make reservations, access vehicle controls and provide alerts using natural language understanding. Using interior cameras, deep neural networks and multimodal interaction, vehicles will be able to ensure that the driver’s attention is on the road and that no passenger or pet is left behind when the journey is complete.

REV LEBAREDIAN
Vice President, Omniverse and Simulation Technology

The Metaverse Universal Translator: Just as HTML is the standard language of the 2D web, Universal Scene Description is set to become the most powerful, extensible, open language for the 3D web. As the 3D standard for describing virtual worlds in the metaverse, USD will allow enterprises and even consumers to move between different 3D worlds using various tools, viewers and browsers in the most seamless and consistent fashion.

Bending Reality With Digital Twins: A new class of true-to-reality digital twins of goods, services and locations is set to offer greater windfalls than their real-world counterparts. Imagine partnering with a gaming company to sell many virtual pairs of sneakers that are still undergoing design testing — long before the pattern is ever sent to manufacturing. Companies also stand to benefit by saving on waste, increasing operational efficiencies and boosting accuracy.

RONNIE VASISHTA
Senior Vice President, Telecoms

Cutting the Cord on AR/VR Over 5G Networks: While many businesses will move to the cloud for hardware and software development, edge design and collaboration also will grow as 5G networks become more fully deployed around the world. Automotive designers, for instance, can don augmented reality headsets and stream the same content they see over wireless networks to colleagues around the world, speeding collaborative changes and developing innovative solutions at record speeds. 5G also will lead to accelerated deployments of connected robots across industries — used for restocking store shelves, cleaning floors, delivering pizzas and picking and packing goods in factories.

RAN in the Cloud: Network operators around the world are rolling out software-defined virtual radio access network 5G to save time and money as they seek faster returns on their multibillion-dollar investments. Now, they’re shifting away from bespoke L1 accelerators to 100% software-defined and full-stack, 5G-baseband acceleration that includes L2, RIC, Beamforming and FH offerings. This shift will lead to an increase in the utilization of RAN systems by enabling multi-tenancy between RAN and AI workloads.

BOB PETTE
Vice President, Professional Visualization 

An Industrial Revolution via Simulation: Everything built in the physical world will first be simulated in a virtual world that obeys the laws of physics. These digital twins — including large-scale environments such as factories, cities and even the entire planet — and the industrial metaverse are set to become critical components of digital transformation initiatives. Examples already abound: Siemens is taking industrial automation to a new level. BMW is simulating entire factory floors to optimally plan manufacturing processes. Lockheed Martin is simulating the behavior of forest fires to anticipate where and when to deploy resources. DNEG, SONY Pictures, WPP and others are boosting productivity through globally distributed art departments that enable creators, artists and designers to iterate on scenes virtually in real time.

Rethinking of Enterprise IT Architecture: Just as many businesses scrambled to adapt their culture and technologies to meet the challenges of hybrid work, the new year will bring a re-architecting of many companies’ entire IT infrastructure. Companies will seek powerful client devices capable of tackling the ever-increasing demands of applications and complex datasets. And they’ll embrace flexibility, moving to burst to the cloud for exponential scaling. The adoption of distributed computing software platforms will enable a globally dispersed workforce to collaborate and stay productive under the most disparate working environments.

Similarly, complex AI model development and training will require powerful compute infrastructure in the data center and the desktop. Businesses will look at curated AI software stacks for different industrial use cases to make it easy for them to bring AI into their workflows and deliver higher quality products and services to customers faster.

AZITA MARTIN
Vice President, AI for Retail and Consumer Products Group

Tackling Shrinkage: Brick-and-mortar retailers perennially struggle with a commonplace problem: shrinkage, the industry parlance for theft. As more and more adopt AI-based services for contactless checkout, they’ll seek sophisticated software that combines computer vision with store analytics data to make sure what a shopper rings up is actually the item being purchased. The adoption of smart self-tracking technology will aid in the development of fully automated store experiences and help solve for labor shortages and lost income.

AI to Optimize Supply Chains: Even the most sophisticated retailers and e-commerce companies had trouble the past two years balancing supply with demand. Consumers embraced home shopping during the pandemic and then flocked back into brick-and-mortar stores after lockdowns were lifted. After inflation hit, they changed their buying habits once again, giving supply chain managers fits. AI will enable more frequent and more accurate forecasting, ensuring the right product is at the right store at the right time. Also, retailers will embrace route optimization software and simulation technology to provide a more holistic view of opportunities and pitfalls.

MALCOLM DEMAYO
Vice President, Financial Services

Better Risk Management: Firms will look for opportunities like accelerated compute to drive efficiencies. The simulation techniques used to value risk in derivatives trading are computationally intensive and typically consume large swaths of data center space, power and cooling. What runs all night on traditional compute will run over a lunch break or faster on accelerated compute. A real-time value of sensitivities will enable firms to better manage risk and improve the value they deliver to their investors.

Cloud-First for Financial Services: Banks have a new imperative: get agile fast. Facing increasing competition from non-traditional financial institutions and changing customer expectations shaped by experiences in other industries, and saddled with legacy infrastructure, banks and other institutions will embrace a cloud-first AI approach. But as a highly regulated industry that requires operational resiliency (an industry term meaning systems can absorb and survive shocks, like a pandemic), banks will look for open, portable, hardened, hybrid solutions. As a result, banks will be obligated to purchase support agreements when available.

CHARLIE BOYLE
Vice President, DGX systems

AI Becomes Cost-Effective With Energy-Efficient Computing: In 2023, inefficient, x86-based legacy computing architectures that can’t support parallel processing will give way to accelerated computing solutions that deliver the computational performance, scale and efficiency needed to build language models, recommenders and more.

Amidst economic headwinds, enterprises will seek out AI solutions that can deliver on objectives, while streamlining IT costs and boosting efficiency. New platforms that use software to integrate workflows across infrastructure will deliver computing performance breakthroughs — with lower total cost of ownership, reduced carbon footprint and faster return on investment on transformative AI projects — displacing more wasteful, older architectures.

DAVID REBER
Chief Security Officer

Data Scientists Are Your New Cyber Asset: Traditional cyber professionals can no longer effectively defend against the most sophisticated threats because the speed and complexity of attacks and defense have effectively exceeded human capacities. Data scientists and other human analysts will use AI to look at all of the data objectively and discover threats. Breaches are going to happen, so data science techniques using AI and humans will help find the needle in the haystack and respond quickly.

AI Cybersecurity Gets Customized: Just like recommender systems serve every consumer on the planet, AI cybersecurity systems will accommodate every business. Tailored solutions will become the No. 1 need for enterprises’ security operations centers as identity-based attacks increase. Cybersecurity is everyone’s problem, so we’ll see more transparency and sharing of various types of cybersecurity architectures. Democratizing AI enables everyone to contribute to the solution. As a result, the collective defense of the ecosystem will move faster to counter threats.

KARI BRISKI
Vice President, AI and HPC Software

The Rise of LLM Applications: Research on large language models will lead to new types of practical applications that can transform languages, text and even images into useful insights that can be used across a multitude of diverse organizations by everyone from business executives to fine artists. We’ll also see rapid growth in demand for the ability to customize models so that LLM expertise spreads to languages and dialects far beyond English, as well as across business domains, from generating catalog descriptions to summarizing medical notes.

Unlabeled Data Finds Its Purpose: Large language models and structured data will also extend to the reams of photos, audio recordings, tweets and more to find hidden patterns and clues to support healthcare breakthroughs, advancements in science, better customer engagements and even major advances in self-driving transportation. In 2023, adding all this unstructured data to the mix will help develop neural networks that can, for instance, generate synthetic profiles to mimic the health records they’ve learned from. This type of unsupervised machine learning is set to become as important as supervised machine learning.

The New Call Center: Keep an eye on the call center in 2023, where adoption of more and more easily implemented speech AI workflows will provide business flexibility at every step of the customer interaction pipeline — from modifying model architectures to fine-tuning models on proprietary data and customizing pipelines. As the accessibility of speech AI workflows broadens, we’ll see a widening of enterprise adoption and giant increase in call center productivity by speeding time to resolution. AI will help agents pull the right information out of a massive knowledge base at the right time, minimizing wait times for customers.

KEVIN DEIERLING
Vice President, Networking

Moore’s Law on Life Support: As CPU design runs up against the laws of physics and struggles to keep up with Moore’s law — the postulation that roughly every two years the number of transistors on microchips would double and create faster, more efficient processing — enterprises increasingly will turn to accelerated computing. They’ll use custom combinations of CPUs, GPUs, DPUs and more in scalable data centers to innovate faster while becoming more cloud oriented and energy efficient.

The Network as the New Computing Platform: Just as personal computers combined software, hardware and storage into productivity-generating tools for everyone, the cloud is fast becoming the new computing tool for AI and the network is what enables the cloud. Enterprises will use third-party software, or bring their own, to develop AI applications and services that run both on-prem and in the cloud. They’ll use cloud services operators to purchase the capacity they need when they need it, working across CPUs, GPUs, DPUs and intelligent switches to optimize compute, storage and the network for their different workloads. What’s more, with zero-trust security being rapidly adopted by cloud service providers, the cloud will deliver computing as secure as on-prem solutions.

DEEPU TALLA
Vice President, Embedded and Edge Computing

Robots Get a Million Lives: More robots will be trained in virtual worlds as photorealistic rendering and accurate physics modeling combine with the ability to simulate in parallel millions of instances of a robot on GPUs in the cloud. Generative AI techniques will make it easier to create highly realistic 3D simulation scenarios and further accelerate the adoption of simulation and synthetic data for developing more capable robots.

Expanding the Horizon: Most robots operate in constrained environments where there is limited to no human activity. Advancements in edge computing and AI will enable robots to have multi-modal perception for better semantic understanding of their environment. This will drive increased adoption of robots operating in brownfield facilities and public spaces such as retail stores, hospitals and hotels.

MARC SPIELER
Senior Director, Energy

AI-Powered Energy Grid: As the grid becomes more complex due to the unprecedented rate of distributed energy resources being added, electric utility companies will require edge AI to improve operational efficiency, enhance functional safety, increase accuracy of load and demand forecasting, and accelerate the connection time of renewable energy, like solar and wind. AI at the edge will increase grid resiliency, while reducing energy waste and cost.

More Accurate Extreme Weather Forecasting: A combination of AI and physics, using a technique called the Fourier Neural Operator, can help better predict the behavior of the world’s atmosphere. The FourCastNet system can predict the precise path of a hurricane, generate forecasts well in advance and provide real-time updates as conditions change. This information will allow energy companies to better plan renewable energy expenditures, predict generation capacity and prepare for severe weather events.

The post 2023 Predictions: AI That Bends Reality, Unwinds the Golden Screw and Self-Replicates appeared first on NVIDIA Blog.

Ferrari of Finance: Accelerated Computing Drives Milan Bank Forward

Banks require more than cash in the vault these days; they also need accelerated computing in the back room.

“The boost we’re getting with GPUs not only significantly improved our performance at the same cost, it helped us redefine our business and sharpen our focus on customers,” said Marco Airoldi, who’s been head of financial engineering for more than 20 years at Mediobanca, a Milan-based banking group that provides lending and investment services in Europe.

High-performance computing is especially important for investment banks whose services involve computationally intensive transactions on baskets of securities and derivative products.

Thanks, in part, to its GPU-powered systems, Mediobanca is thriving amid the current market downturn.

“We can’t disclose numbers, but I can tell you with a good degree of confidence I don’t think we’ve had more than a dozen negative days in the last 250 trading days,” said Stefano Dova, a Ph.D. in finance and head of markets at Mediobanca.

That’s, in part, because Airoldi’s team enabled real-time risk management on GPUs early in the year.

“It’s a fundamental step forward,” said Dova, who plays his electric piano or clarinet to unwind at the end of a stressful day. “You can lose money on a daily basis in the current market volatility, but we’ve been very happy with the results we’ve had in the last six months.”

Sharing the Wealth

Now, Mediobanca is preparing to offer its customers the same computing capabilities it enjoys.

“Because the GPUs are so fast, we can offer clients the ability to build their own products and see their risk profiles in real time, so they can decide where and when to invest — you can only do this if you have the computational power for live pricing,” Dova said.

The service, now in final testing, puts customers at the center of the bank’s business. It uses automation made possible by the parallel computing capabilities of the bank’s infrastructure, Airoldi notes.

Next Stop: Machine Learning

Looking further ahead, Airoldi’s group is mapping the investment bank’s journey into AI.

It starts with sentiment analysis, powered by natural language processing. That will help the bank understand market trends more deeply, so it can make even better investment decisions.

“AI will give us useful ways to map customer and investor behaviors, and we will invest in the technology to develop more AI apps for finance,” said Dova.

Mediobanca’s headquarters is in central Milan, around the corner from the famed Teatro alla Scala.

Their work comes as banks of all sorts are starting to apply AI to dozens of use cases.

“AI is one of the most promising technologies in finance,” said Airoldi, who foresees using it for classical quantitative problems, too.

It’s All About the Math

In the last few years, the bank has added dozens of GPUs to its infrastructure. Each offers up to 100x the performance of a CPU, he said.

That means Mediobanca can do more with less. It reduces its total cost of ownership while accelerating workloads that create competitive advantages such as Monte Carlo simulations used to create and price advanced investment products.
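
Mediobanca’s pricing code isn’t public, but a minimal sketch, with entirely hypothetical parameters, shows the kind of arithmetic a Monte Carlo pricer grinds through: simulating many possible paths for an asset and averaging the discounted payoffs, here for a simple European call option in NumPy. A production desk would calibrate the inputs to live market data, and a GPU array library could run the same math on far more paths in parallel.

```python
import numpy as np

# Hypothetical contract parameters; real desks calibrate these to market data.
spot, strike, rate, vol, maturity = 100.0, 105.0, 0.02, 0.25, 1.0
n_paths = 1_000_000

rng = np.random.default_rng(0)
z = rng.standard_normal(n_paths)

# Simulate terminal prices under geometric Brownian motion.
terminal = spot * np.exp((rate - 0.5 * vol**2) * maturity + vol * np.sqrt(maturity) * z)

# The discounted average payoff approximates the option's fair value.
payoff = np.maximum(terminal - strike, 0.0)
price = np.exp(-rate * maturity) * payoff.mean()
print(f"Estimated call price: {price:.4f}")
```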

Under the hood, great financial performance is based on excellence in math, said Airoldi, who earned his Ph.D. in theoretical condensed matter physics.

“The mathematical models and numeric methods of finance are closely related to those found in theoretical physics, so investment banking is a great job for a physicist,” he said.

When Airoldi needs a break from work, you might find him playing chess in the Piazza della Scala, across from the famed opera house, just around the corner from the bank’s headquarters.

The post Ferrari of Finance: Accelerated Computing Drives Milan Bank Forward appeared first on NVIDIA Blog.

Face All Fears With Creative Studio Fabian&Fred This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

The short film I Am Not Afraid! by creative studio Fabian&Fred embodies childlike wonder, curiosity and imagination this week In the NVIDIA Studio.

Plus, the NVIDIA Studio #WinterArtChallenge shows no signs of letting up, so learn more and check out featured artwork at the end of this blog. Keep those posts coming.

For inspiration, watch NVIDIA artist Jeremy Lightcap and Adobe Substance 3D expert Pierre Maheut create a winter-themed scene in NVIDIA Omniverse, a platform for creating and operating metaverse applications that enables artists to connect their favorite 3D design tools for a more seamless workflow.

Invoke Emotion

For almost a decade, Fabian&Fred co-founders Fabian Driehorst and Frederic Schuld have focused on relatable narratives — stories understood by audiences of all ages — while not shying away from complex emotional and social topics.

The short film’s hero Vanja faces her fears.

One of their latest works, I Am Not Afraid!, features a little girl named Vanja who discovers that being brave means facing your own fears and that everyone, even the bigger personalities in this world, is scared now and again.

“Everybody knows how it feels to be afraid of the dark,” said Fabian&Fred.

The concept for the film started when director and Norwegian native Marita Mayer shared her childhood experiences with the team. These emotional moments had a profound artistic impact on the work’s flat, minimal, layer-based visual style and its color system.

“We combined structures from nature, brush strokes used for texture, and a kid’s voice — all designed to ensure the feeling of fear was authentic,” said the team.

With the script in hand, pre-production work included various sketches, moodboards and photographs of urban neighborhoods, people, animals and plants to match the narrative tone.

Moodboards and sketches helped set the tone.

Work began in the Adobe Creative Cloud suite of creative apps, starting with the creation of multiple characters in Adobe Photoshop. These characters were then prepared and rigged in Adobe Animate.

Animated characters were used in Premiere Pro to create an animatic to test out voices and sounds. With the new GeForce RTX 40 Series GPUs, studios like Fabian&Fred can deploy NVIDIA’s dual encoders to cut export times nearly in half, speeding up review cycles for teams.

3D assets were modeled in Blender, with Blender Cycles RTX-accelerated OptiX ray tracing in the viewport, ensuring interactive modeling with sharp graphical output.

Preliminary sketches in Adobe Photoshop.

In parallel, large, detailed backgrounds were created in Adobe Illustrator with the GPU-accelerated canvas. Fabian&Fred were able to smoothly and interactively pan across, and zoom in and out of, their complex vector graphics, thanks to their GeForce RTX 3090 GPU.

Stunning backgrounds detailed in Adobe Illustrator.

Fabian&Fred returned to Adobe Animate to stage all assets and backgrounds with a mix of frame-by-frame and rig animation techniques. Sound production was done in the digital audio app ProTools, and final composite work completed in Adobe After Effects with more than 45 RTX GPU-accelerated features and effects at the duo’s disposal.

Finally, Fabian&Fred color-corrected I Am Not Afraid! using the RTX GPU-accelerated, AI-powered auto color correction in Blackmagic Design’s DaVinci Resolve to improve hues and contrast with ease. They then applied some final manual touches.

The new GeForce RTX 40 Series GPUs speed up AI tools in DaVinci Resolve, including Object Select Mask, which rotoscopes or highlights parts of motion footage frame by frame 70% faster than the previous generation, thanks to close collaboration with Blackmagic Design.

“We have worked closely with NVIDIA for many years, and we look forward to continuing our collaboration to produce even more groundbreaking tools and performance for creators,” said Rohit Gupta, director of software development at Blackmagic Design.

“Each project in our portfolio has benefited from reliable GeForce RTX GPU performance, whether it’s 2D animation or a photogrammetry-based, real-time 3D project.” – Fabian&Fred

Virtually every stage in Fabian&Fred’s creative workflow was made faster and easier with their GeForce RTX GPU. And while these powerful graphics cards are well known for accelerating the most difficult and complex workflows, they’re a boon for efficiency in smaller projects, as well.

Reflecting on their shared experiences, Fabian&Fred agreed that teamwork and diversity are their strengths. “In our studio, we come together from multicultural roots and make unique films as a team, with different methods, but the films have a truth in their heart that works for many people.”

Creative Studio Fabian&Fred.

View more of Fabian&Fred’s work on their Instagram page.

The Weather Outside Is Frightful, the #WinterArtChallenge Is Delightful

Enter NVIDIA Studio’s #WinterArtChallenge, running through the end of the year, by sharing winter-themed art on Instagram, Twitter or Facebook for a chance to be featured on our social media channels.

Like @mtw75 with Santa Claus and his faithful elves preparing gifts for all the good little boys and girls this holiday season.

Be sure to tag #WinterArtChallenge to join.

Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.

The post Face All Fears With Creative Studio Fabian&Fred This Week ‘In the NVIDIA Studio’ appeared first on NVIDIA Blog.

What Is a Pretrained AI Model?

Imagine trying to teach a toddler what a unicorn is. A good place to start might be by showing the child images of the creature and describing its unique features.

Now imagine trying to teach an artificially intelligent machine what a unicorn is. Where would one even begin?

Pretrained AI models offer a solution.

A pretrained AI model is a deep learning model — an expression of a brain-like neural algorithm that finds patterns or makes predictions based on data — that’s trained on large datasets to accomplish a specific task. It can be used as is or further fine-tuned to fit an application’s specific needs.

Why Are Pretrained AI Models Used?

Instead of building an AI model from scratch, developers can use pretrained models and customize them to meet their requirements.

To build an AI application, developers first need an AI model that can accomplish a particular task, whether that’s identifying a mythical horse, detecting a safety hazard for an autonomous vehicle or diagnosing a cancer based on medical imaging. That model needs a lot of representative data to learn from.

This learning process entails going through several layers of incoming data and emphasizing goal-relevant characteristics at each layer.

To create a model that can recognize a unicorn, for example, one might first feed it images of unicorns, horses, cats, tigers and other animals. This is the incoming data.

Then, layers of representative data traits are constructed, beginning with the simple — like lines and colors — and advancing to complex structural features. These characteristics are assigned varying degrees of relevance by calculating probabilities.

The more a creature resembles a horse rather than a cat or a tiger, for example, the greater the likelihood that it is a unicorn. Such probabilistic values are stored at each neural network layer in the AI model, and as layers are added, its understanding of the representation improves.
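
To make that concrete, here is a minimal sketch, assuming PyTorch, of a toy classifier for the unicorn example; the class names, layer sizes and random input are purely illustrative. Early layers respond to simple traits, later layers combine them into higher-level features, and a softmax turns the final layer’s output into per-class probabilities.

```python
import torch
import torch.nn as nn

# Illustrative classes for the unicorn example; not from any real dataset.
CLASSES = ["unicorn", "horse", "cat", "tiger"]

# A toy network: early layers respond to simple traits such as lines and colors,
# later layers combine them into more complex structural features.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(8),
    nn.Flatten(),
    nn.Linear(16 * 8 * 8, len(CLASSES)),
)

image = torch.rand(1, 3, 64, 64)             # stand-in for an input photo
probs = torch.softmax(model(image), dim=1)   # per-class probabilities

for name, p in zip(CLASSES, probs[0]):
    print(f"{name}: {p:.2f}")
```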

To create such a model from scratch, developers require enormous datasets, often with billions of rows of data. These can be pricey and challenging to obtain, but compromising on data can lead to poor performance of the model.

Precomputed probabilistic representations — known as weights — save time, money and effort. A pretrained model is already built and trained with these weights.

Using a high-quality pretrained model with a large number of accurate representative weights leads to higher chances of success for AI deployment. Weights can be modified, and more data can be added to the model to further customize or fine-tune it.

Developers building on pretrained models can create AI applications faster, without having to worry about handling mountains of input data or computing probabilities for dense layers.

In other words, using a pretrained AI model is like getting a dress or a shirt and then tailoring it to fit your needs, rather than starting with fabric, thread and needle.
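
A minimal sketch of that tailoring, assuming PyTorch and a recent torchvision: load a network pretrained on ImageNet, freeze its weights, and swap in a new final layer for a hypothetical four-class task so that only the new layer is trained on a much smaller dataset.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a network whose weights were pretrained on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained weights so only the new "tailoring" is learned.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer for a hypothetical four-class task.
model.fc = nn.Linear(model.fc.in_features, 4)

# Fine-tune just the new layer on your own, smaller dataset.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```

Because the pretrained weights already encode general visual features, fine-tuning like this typically needs far less data and compute than training from scratch.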

Pretrained AI models are often used for transfer learning and can be based on several model architecture types. One popular architecture type is the transformer model, a neural network that learns context and meaning by tracking relationships in sequential data.

According to Alfredo Ramos, senior vice president of platform at AI company Clarifai — a Premier partner in the NVIDIA Inception program for startups — pretrained models can cut AI application development time by up to a year and lead to cost savings of hundreds of thousands of dollars.

How Are Pretrained Models Advancing AI?

Since pretrained models simplify and quicken AI development, many developers and companies use them to accelerate various AI use cases.

Top areas in which pretrained models are advancing AI include:

  • Natural language processing. Pretrained models are used for translation, chatbots and other natural language processing applications. Large language models, often based on the transformer model architecture, are an extension of pretrained models. One example of a pretrained LLM is NVIDIA NeMo Megatron, one of the world’s largest AI models.
  • Speech AI. Pretrained models can help speech AI applications plug and play across different languages. Use cases include call center automation, AI assistants and voice-recognition technologies.
  • Computer vision. Like in the unicorn example above, pretrained models can help AI quickly recognize creatures — or objects, places and people. In this way, pretrained models accelerate computer vision, giving applications human-like vision capabilities across sports, smart cities and more.
  • Healthcare. For healthcare applications, pretrained AI models like MegaMolBART — part of the NVIDIA BioNeMo service and framework — can understand the language of chemistry and learn the relationships between atoms in real-world molecules, giving the scientific community a powerful tool for faster drug discovery. 
  • Cybersecurity. Pretrained models provide a starting point to implement AI-based cybersecurity solutions and extend the capabilities of human security analysts to detect threats faster. Examples include digital fingerprinting of humans and machines, and detection of anomalies, sensitive information and phishing.
  • Art and creative workflows. Bolstering the recent wave of AI art, pretrained models can help accelerate creative workflows through tools like GauGAN and NVIDIA Canvas.

Pretrained AI models can be applied across industries beyond these, as their customization and fine-tuning can lead to infinite possibilities for use cases.

Where to Find Pretrained AI Models

Companies like Google, Meta, Microsoft and NVIDIA are inventing cutting-edge model architectures and frameworks to build AI models.

These are sometimes released on model hubs or as open source, enabling developers to fine-tune pretrained AI models, improve their accuracy and expand model repositories.

NVIDIA NGC — a hub for GPU-optimized AI software, models and Jupyter Notebook examples — includes pretrained models as well as AI benchmarks and training recipes optimized for use with the NVIDIA AI platform.

NVIDIA AI Enterprise, a fully managed, secure, cloud-native suite of AI and data analytics software, includes pretrained models without encryption. This allows developers and enterprises looking to integrate NVIDIA pretrained models into their custom AI applications to view model weights and biases, improve explainability and debug easily.

Thousands of open-source models are also available on hubs like GitHub, Hugging Face and others.
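
As one illustration, assuming the Hugging Face transformers library is installed, a single call downloads a default pretrained sentiment model from the hub and uses it as is:

```python
from transformers import pipeline

# Downloads a default pretrained sentiment model from the Hugging Face hub.
classifier = pipeline("sentiment-analysis")
print(classifier("Pretrained models saved us months of training time."))
```

The same model can then be fine-tuned on domain-specific text if the off-the-shelf weights aren’t accurate enough for the task at hand.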

It’s important that pretrained models are trained using ethical data that’s transparent and explainable, privacy compliant, and obtained with consent and without bias.

NVIDIA Pretrained AI Models

To help more developers move AI from prototype to production, NVIDIA offers several pretrained models that can be deployed out of the box, including:

  • NVIDIA SegFormer, a transformer model for simple, efficient, powerful semantic segmentation — available on GitHub.
  • NVIDIA’s purpose-built computer vision models, trained on millions of images for smart cities, parking management and other applications.
  • NVIDIA NeMo Megatron, the world’s largest customizable language model, as part of NVIDIA NeMo, an open-source framework for building high-performance and flexible applications for conversational AI, speech AI and biology.
  • NVIDIA StyleGAN, a style-based generator architecture for generative adversarial networks, or GANs. It uses transfer learning to generate infinite paintings in a variety of styles.

In addition, NVIDIA Riva, a GPU-accelerated software development kit for building and deploying speech AI applications, includes pretrained models in ten languages.

And MONAI, an open-source AI framework for healthcare research developed by NVIDIA and King’s College London, includes pretrained models for medical imaging.

Learn more about NVIDIA pretrained AI models.

The post What Is a Pretrained AI Model? appeared first on NVIDIA Blog.

The Hunt Is On: ‘The Witcher 3: Wild Hunt’ Next-Gen Update Coming to GeForce NOW

It’s a wild GFN Thursday — The Witcher 3: Wild Hunt next-gen update will stream on GeForce NOW day and date, starting next week. Today, members can stream new seasons of Fortnite and Genshin Impact, alongside eight new games joining the library.

In addition, the newest GeForce NOW app is rolling out this week with support for syncing members’ Ubisoft Connect library of games, which helps them get into their favorite Ubisoft games even quicker.

Plus, gamers across the U.K., Netherlands and Poland have the first chance to pick up the new 13.3-inch HP Chromebook x360, built for extreme multitasking with an adaptive 360-degree design and great for cloud gaming. Each Chromebook purchase comes with a free one-month GeForce NOW Priority membership.

Triss the Season

The Witcher 3 on GeForce NOW
“Hmm”

CD PROJEKT RED releases the next-gen update for The Witcher 3: Wild Hunt — Complete Edition on Wednesday, Dec. 14. The update is free for anyone who owns the game on Steam, Epic Games, or GOG.com, and GeForce NOW members can take advantage of upgraded visuals across nearly all of their devices.

The next-gen update brings vastly improved visuals, a new photo mode, and content inspired by Netflix’s The Witcher series. It also adds RTX Global Illumination, as well as ray-traced ambient occlusion, shadows and reflections that add cinematic detail to the game.

Play as Geralt of Rivia on a quest to track down his adopted daughter Ciri, the Child of Prophecy — and the carrier of the powerful Elder Blood — across all your devices without needing to wait for the update to download and install. GeForce NOW RTX 3080 and Priority members can play with RTX ON and NVIDIA DLSS to explore the beautiful open world of The Witcher at high frame rates on nearly any device — from Macs to mobile devices and more.

Get in Sync

The GeForce NOW 2.0.47 app update begins rolling out this week with support for syncing Ubisoft Connect accounts with your GeForce NOW library.

The 2.0.47 app update brings Ubisoft Connect library syncing.

Members will be able to get to their Ubisoft games faster and easier with this new game-library sync for Ubisoft Connect. Once synced, members will be automatically logged into their Ubisoft account across all devices when streaming supported GeForce NOW games purchased directly from the Ubisoft or Epic Games Store. These include titles like Rainbow Six Siege and Far Cry 6.

The update also adds improvements to voice chat with Chromebook built-in mics, as well as bug fixes. Look for the update to hit PC, Mac and browser clients in the coming days.

‘Tis the Seasons

Fortnite Chapter 4 is available to play in the cloud.

The action never stops on GeForce NOW. This week brings updates to some of the hottest titles streaming from the cloud, and eight new games to play.

Members can jump into Fortnite Chapter 4, now available on GeForce NOW. The chapter features a new island, newly forged weapons, a new realm and new ways to get around, whether riding a dirt bike or rolling around in a snowball. A new cast of combatants is also available, including Geralt of Rivia himself.

Genshin Impact’s Version 3.3 “All Senses Clear, All Existence Void” is also available to stream on GeForce NOW, bringing a new season of events, a new card game called the Genius Invokation TCG, and two powerful allies — the Wanderer and Faruzan — for more stories, fun and challenges.

Here’s the full list of games coming to the cloud this week:

A GeForce NOW paid membership makes a great present for the gamer in your life, so give the gift of gaming with a GeForce NOW gift card. It’s the perfect stocking stuffer or last-minute treat for yourself or a buddy.

Finally, with The Witcher 3: Wild Hunt — Complete Edition on the way, we need to know – Which Geralt are you today? Tell us on Twitter or in the comments below.

The post The Hunt Is On: ‘The Witcher 3: Wild Hunt’ Next-Gen Update Coming to GeForce NOW appeared first on NVIDIA Blog.

‘23 and AV: Transportation Industry to Drive Into Metaverse, Cloud Technologies

As the autonomous vehicle industry enters the next year, it will start navigating into even greater technology frontiers.

Next-generation vehicles won’t just be defined by autonomous driving capabilities. Everything from the design and production process to the in-vehicle experience is entering a new era of digitization, efficiency, safety and intelligence.

These trends arrive after a wave of breakthroughs in 2022. More automakers announced plans to build software-defined vehicles on the NVIDIA DRIVE Orin system-on-a-chip — including Jaguar Land Rover, NIO, Polestar, Volvo Cars and Xpeng. And in-vehicle compute pushed the envelope with the next-generation NVIDIA DRIVE Thor platform.

Delivering up to 2,000 trillion floating-point operations per second, DRIVE Thor unifies autonomous driving and cockpit functions on a single computer for unprecedented speed and efficiency.

In the coming year, the industry will see even more wide-ranging innovations begin to take hold, as industrial metaverse and cloud technologies become more prevalent.

Simulation technology for AV development has also flourished in the past year. New tools and techniques on NVIDIA DRIVE Sim, including using AI tools for training and validation, have narrowed the gap between the virtual and real worlds.

Here’s what to expect for intelligent transportation in 2023.

Enter the Metaverse

The same NVIDIA Omniverse platform that serves as the foundation of DRIVE Sim for AV development is also revolutionizing the automotive product cycle. Automakers can leverage Omniverse to unify the 3D design and simulation pipelines for vehicles, and build persistent digital twins of their production facilities.

Designers can collaborate across 3D software ecosystems from anywhere in the world, in real time, with Omniverse. With full-fidelity RTX ray tracing delivering physically accurate lighting, reflections and physical behavior, vehicle designs can be evaluated and tested more precisely before physical prototyping ever begins.

Production is the next step in this process, and it requires thousands of parts and workers moving in harmony. With Omniverse, automakers can develop a unified view of their manufacturing processes across plants to streamline operations.

Planners can access the full-fidelity digital twin of the factory, reviewing and optimizing as needed. Every change can be quickly evaluated and validated in virtual, then implemented in the real world to ensure maximum efficiency and ergonomics for factory workers.

Customers can also benefit from enhanced product experiences. Full-fidelity, real-time car configurators, 3D simulations of vehicles, demonstrations in augmented reality and virtual test drives all help bring the vehicle to the customer.

These technologies bridge the gap between the digital and the physical, as the buying experience evolves to include both physical retail spaces and online engagement.

Cloud Migration

As remote work becomes a permanent fixture, cloud capabilities are proving vital to growing industries, including transportation.

Looking ahead, AV developers will be able to access a comprehensive suite of services using NVIDIA Omniverse Cloud to design, deploy and experience metaverse applications anywhere. These applications include simulation, in-vehicle experiences and car configurators.

With cloud-based simulation, AV engineers can generate physically based sensor data and traffic scenarios to test and validate self-driving technology. Developers can also use simulation to design intelligent vehicle interiors.

An autonomous test vehicle running in simulation.

These next-generation cabins will feature personalized entertainment, including streaming content. With the NVIDIA GeForce NOW cloud gaming service, occupants will be able to stream over 1,000 titles from the cloud into the vehicle while it’s charging or waiting to pick up passengers.

Additionally, Omniverse Cloud enables automakers to offer a virtual showroom for an immersive experience to customize a vehicle before purchasing it from anywhere in the world.

Individualized Interiors

Autonomous driving capabilities will deliver a smoother, safer driving experience for all road users. As driving functions become more automated across the industry, vehicle interiors are taking on a bigger role for automakers to create branded experiences.

In addition to gaming, advances in AI and in-vehicle compute are enabling a range of new infotainment technologies, including digital assistants, occupant monitoring, AV visualization, video conferencing and more.

AI and cloud technologies provide personalized infotainment experiences for every passenger.

With NVIDIA DRIVE Concierge, automakers can provide these features across multiple displays in the vehicle. And with software-defined, centralized compute, they can continuously add new capabilities over the air.

This emerging cloud-first approach is transforming every segment of the AV industry, from developing vehicles and self-driving systems to operating global fleets.

The post ‘23 and AV: Transportation Industry to Drive Into Metaverse, Cloud Technologies appeared first on NVIDIA Blog.

Read More