European Researchers Develop AI-Native Wireless Networks With NVIDIA 6G Research Portfolio

Using NVIDIA platforms, tools and libraries, European telecommunications institutions are accelerating efforts to develop 6G, the next generation of cellular technology, with AI woven in from the start.

6G will be an AI-native platform that fosters innovation, enables new services, enhances customer experiences and promotes sustainability. Since the launch of the NVIDIA 6G Developer Program last year, over 200 telecommunications organizations across 30+ European countries have used NVIDIA technologies to accelerate their work.

In the U.K., the Department for Science, Innovation and Technology announced a collaboration with NVIDIA to promote the nation’s goals for AI development in telecom. Leading U.K. universities will gain access to a suite of powerful AI tools, 6G research platforms and training resources — including NVIDIA AI Aerial and Sionna — to bolster research and development on AI-native wireless networks.

“This collaboration between the U.K. government and NVIDIA marks a pivotal step in our ambition to make the U.K. a global leader in the development of advanced connectivity technologies,” said Sir Chris Bryant, minister of state for data protection and telecoms of the U.K. “The use of AI in telecoms will make our networks more intelligent, efficient and reliable, and by equipping our world-leading academia and researchers with cutting-edge AI tools and training, we will accelerate innovation that improves the everyday digital experience for people across the country.”

In Finland, the University of Oulu is conducting research on wireless channel estimation with a real-time network digital twin that taps into synthetic lidar data, using the NVIDIA Isaac Sim reference application for robotics simulation.

The project enables advanced development of AI and machine learning features for integrated sensing and communications, or ISAC, a capability that allows the network itself to act as a sensor of the physical world to enhance operations. The project also enables modeling of the 6G access system.

France-based OpenAirInterface (OAI) and NVIDIA are collaborating to advance AI-native wireless networks by integrating OAI’s open-source virtualized and open RAN stack with GPU-accelerated systems based on NVIDIA AI Aerial and NVIDIA Sionna. OAI provides the layer 2+ software for the Aerial Commercial Testbed and the full-stack O-RAN software for the Sionna Research Kit, enabling researchers to innovate in 5G and 6G radio access networks using AI and machine learning at every layer.

In Germany, Fraunhofer HHI is conducting groundbreaking research on neuromorphic wireless cognition for robotic control using the NVIDIA AI Aerial suite of accelerated computing platforms and software for designing, simulating and operating wireless networks.

The research involves an event-based camera that senses and captures robotic movements, and forwards the information to a neuromorphic processor. Then, neural network models are used for decoding and gesture recognition to boost transmission over the base station, enhancing the quality of the connection.

Rohde & Schwarz, also based in Germany, is helping set new benchmarks in AI-powered wireless communication research with its latest milestone in neural receiver design and testing.

Showcased in March at Mobile World Congress in Barcelona, Rohde & Schwarz’s innovative proof of concept — developed in collaboration with NVIDIA — integrates advanced digital twin technology and high-fidelity ray tracing to create a robust framework for testing 5G-Advanced and 6G neural receivers under real-world radio environments. Tapping into simulations built with the NVIDIA Sionna library, this initiative paves the way for more efficient, accurate and reliable testing of next-generation receiver architectures.
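
For readers unfamiliar with Sionna, the snippet below is a minimal sketch of the kind of ray-traced channel simulation such a digital twin builds on, assuming the Sionna v0.x ray-tracing API and one of its built-in example scenes; it is illustrative only, not the Rohde & Schwarz setup.

```python
import sionna
from sionna.rt import load_scene, PlanarArray, Transmitter, Receiver

# Load a built-in example scene and attach single-antenna arrays
scene = load_scene(sionna.rt.scene.munich)
scene.tx_array = PlanarArray(num_rows=1, num_cols=1,
                             vertical_spacing=0.5, horizontal_spacing=0.5,
                             pattern="iso", polarization="V")
scene.rx_array = scene.tx_array

# Place a transmitter and a receiver in the scene
scene.add(Transmitter(name="tx", position=[8.5, 21.0, 27.0]))
scene.add(Receiver(name="rx", position=[45.0, 90.0, 1.5]))

# Trace propagation paths and derive the channel impulse response
paths = scene.compute_paths(max_depth=3)
a, tau = paths.cir()  # complex path gains and delays for link-level simulation
print(a.shape, tau.shape)
```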

ETH Zurich and NVIDIA are working on 6G projects related to the performance of AI-native 6G networks. These include DUIDD (Deep-Unfolded Interleaved Detection and Decoding), a machine learning-based architecture developed with NVIDIA Sionna that increases the amount of data a base station can transmit or receive using information learned from its local environment. DUIDD is expected to be implemented on ARC-OTA, the real-time, over-the-air NVIDIA AI Aerial commercial testbed.

Other projects include a new approach to machine learning-assisted model training, machine learning methods for positioning mobile devices using channel charting, and identification of devices based on their radio frequency signatures, all developed with NVIDIA 6G research tools.

The University of Leeds is developing an agentic architecture for integrating large language models into RAN operations to realize scalable, intelligent orchestration. The research involves creating standardized frameworks for deploying agent-based architectures, establishing key performance indicators for benchmarking performance and building templates for new agents to enhance performance and reduce operational costs.

Europe Key to Developing AI-Native 6G

Europe’s role in wireless networks dates back to the 1987 development of the Global System for Mobile Communications, or GSM, a widely used standard for digital cellular communication.

Today, the European Union continues to drive innovation through substantial governmental support and flagship initiatives such as the Smart Networks and Services Joint Undertaking, 6G SNS and the 6G Flagship project. These programs unite universities, research institutions and industry to create next-generation AI-native networks, pioneering work in AI integration, sustainability and security while educating future industry experts.

Major European telecommunications vendors play a vital role in shaping the vision and standards for 6G through their leadership and participation in major research consortia.

NVIDIA Technologies for AI-Native 6G Research and Development

For these European 6G researchers, the NVIDIA 6G research portfolio provides a three-computer solution for 1) developing and training AI algorithms, 2) simulating them and 3) deploying them into wireless stacks for AI-native 6G.

NVIDIA AI Aerial tools for AI-native wireless network research and development, based on the three-computer solution.

The portfolio includes accelerated compute infrastructure, as well as software libraries such as NVIDIA AI Aerial, Sionna and NVIDIA CUDA-X for accelerating workloads. NVIDIA provides the world’s only 6G research portfolio with open-source and source-code offerings, cloud and on-premises options, and full-stack systems or component-level options, so researchers can choose the best tool for their mission.

In addition, the NVIDIA Deep Learning Institute provides training on skills essential to 6G development, such as simulating physical environments.

The NVIDIA 6G Developer Program offers early access to advanced tools, technical support and a global community of innovators. So far, more than 2,000 researchers across 85 countries have joined the program, leading to over 190,000 downloads of NVIDIA tools and 350+ citations in technical papers and journals.

Learn more about the latest AI advancements for telecom and other industries at NVIDIA GTC Paris, running June 10-12 at VivaTech, including a special address from Ronnie Vasishta, senior vice president of telecom at NVIDIA.

Watch the NVIDIA GTC Paris keynote from NVIDIA founder and CEO Jensen Huang at VivaTech 2025, and explore GTC Paris sessions.

NVIDIA Research Casts New Light on Scenes With AI-Powered Rendering for Physical AI Development

NVIDIA Research has developed an AI light switch for videos that can turn daytime scenes into nightscapes, transform sunny afternoons to cloudy days and tone down harsh fluorescent lighting into soft, natural illumination.

Called DiffusionRenderer, it’s a new technique for neural rendering — a process that uses AI to approximate how light behaves in the real world. It brings together two traditionally distinct processes — inverse rendering and forward rendering — in a unified neural rendering engine that outperforms state-of-the-art methods.

DiffusionRenderer provides a framework for video lighting control, editing and synthetic data augmentation, making it a powerful tool for creative industries and physical AI development.

Creators in advertising, film and game development could use applications based on DiffusionRenderer to add, remove and edit lighting in real-world or AI-generated videos. Physical AI developers could use it to augment synthetic datasets with a greater diversity of lighting conditions to train models for robotics and autonomous vehicles (AVs).

DiffusionRenderer is one of over 60 NVIDIA papers accepted to the Computer Vision and Pattern Recognition (CVPR) conference, taking place June 11-15 in Nashville, Tennessee.

Creating AI That Delights

DiffusionRenderer tackles the challenge of de-lighting and relighting a scene from only 2D video data.

De-lighting is a process that takes an image and removes its lighting effects, so that only the underlying object geometry and material properties remain. Relighting does the opposite, adding or editing light in a scene while maintaining the realism of complex properties like object transparency and specularity — how a surface reflects light.

Classic, physically based rendering pipelines need 3D geometry data to calculate light in a scene for de-lighting and relighting. DiffusionRenderer instead uses AI to estimate properties including normals, metallicity and roughness from a single 2D video.

With these calculations, DiffusionRenderer can generate new shadows and reflections, change light sources, edit materials and insert new objects into a scene — all while maintaining realistic lighting conditions.

Using an application powered by DiffusionRenderer, AV developers could take a dataset of mostly daytime driving footage and randomize the lighting of every video clip to create more clips representing cloudy or rainy days, evenings with harsh lighting and shadows, and nighttime scenes. With this augmented data, developers can boost their development pipelines to train, test and validate AV models that are better equipped to handle challenging lighting conditions.

Creators who capture content for digital character creation or special effects could use DiffusionRenderer to power a tool for early ideation and mockups — enabling them to explore and iterate through various lighting options before moving to expensive, specialized light stage systems to capture production-quality footage.

Enhancing DiffusionRenderer With NVIDIA Cosmos

Since completing the original paper, the research team behind DiffusionRenderer has integrated their method with Cosmos Predict-1, a suite of world foundation models for generating realistic, physics-aware future world states.

By doing so, the researchers observed a scaling effect, where applying Cosmos Predict’s larger, more powerful video diffusion model boosted the quality of DiffusionRenderer’s de-lighting and relighting correspondingly — enabling sharper, more accurate and temporally consistent results.

Cosmos Predict is part of NVIDIA Cosmos, a platform of world foundation models, tokenizers, guardrails and an accelerated data processing and curation pipeline to accelerate synthetic data generation for physical AI development. Read about the new Cosmos Predict-2 model on the NVIDIA Technical Blog.

NVIDIA Research at CVPR 

At CVPR, NVIDIA researchers are presenting dozens of papers on topics spanning automotive, healthcare, robotics and more. Three NVIDIA papers are nominated for this year’s Best Paper Award:

  • FoundationStereo: This foundation model reconstructs 3D information from 2D images by matching pixels in stereo images. Trained on a dataset of over 1 million images, the model works out-of-the-box on real-world data, outperforming existing methods and generalizing across domains.
  • Zero-Shot Monocular Scene Flow Estimation in the Wild: A collaboration between researchers at NVIDIA and Brown University, this paper introduces a generalizable model for predicting scene flow — the motion field of points in a 3D environment.
  • Difix3D+: This paper, by researchers from the NVIDIA Spatial Intelligence Lab, introduces an image diffusion model that removes artifacts from novel viewpoints in reconstructed 3D scenes, enhancing the overall quality of 3D representations.

NVIDIA was also named an Autonomous Grand Challenge winner at CVPR, marking the second consecutive year NVIDIA topped the leaderboard in the end-to-end category — and the third consecutive year winning an Autonomous Grand Challenge award at the conference.

Learn more about NVIDIA Research, a global team of hundreds of scientists and engineers focused on topics including AI, computer graphics, computer vision, self-driving cars and robotics.

Explore the NVIDIA research papers to be presented at CVPR, and watch the NVIDIA GTC Paris keynote from NVIDIA founder and CEO Jensen Huang.

NVIDIA DRIVE Full-Stack Autonomous Vehicle Software Rolls Out

NVIDIA is launching a comprehensive, industry-defining autonomous vehicle (AV) software platform to accelerate large-scale deployment of safe, intelligent transportation innovations for automakers, truck manufacturers, robotaxi companies and startups worldwide.

Announced today at NVIDIA GTC Paris at VivaTech, the full-stack NVIDIA DRIVE AV software platform is now in full production. Combined with NVIDIA accelerated compute, this provides the automotive industry with a robust foundation for AI-powered mobility — unlocking a multitrillion-dollar global opportunity in autonomous and highly automated vehicles. For consumers, this means safer journeys and enjoyable hands-free driving experiences.

Safety First: A Unified, Full-Stack Software Approach

NVIDIA DRIVE’s modular, flexible approach empowers customers to scale based on their specific needs — whether that means adopting the entire stack or a subset. NVIDIA’s robust, safety-certified AV software architecture supports real-time sensor fusion and continuous improvement through over-the-air updates.

Its scalability allows automakers to deploy a subset of advanced driver-assistance features — such as surround perception, automated lane changes, parking and active safety — for level 2++ and level 3 vehicles, with a seamless path to higher levels of automation as technologies and regulations evolve.

Augmenting NVIDIA’s full-stack, end-to-end AV software is NVIDIA’s three-computer solution, which spans the entire AV development pipeline and is designed to tackle the challenges associated with the safe deployment of autonomous vehicles at scale. The three computers include:

  • NVIDIA DGX systems and GPUs for training AI models and developing AV software.
  • The NVIDIA Omniverse and NVIDIA Cosmos platforms running on NVIDIA OVX systems for simulation and synthetic data generation, enabling the testing and validation of autonomous driving scenarios and optimization of smart factory operations.
  • The automotive-grade NVIDIA DRIVE AGX in-vehicle computer for processing real-time sensor data for safe, highly automated and autonomous driving capabilities.

Embracing Generative AI and an End-to-End Model Approach

Most traffic accidents are linked to human factors such as distraction or misjudgment, meaning there’s tremendous potential to make our roads safer. As such, the automotive industry is racing to develop AI-driven systems that improve road safety. But building an autonomous system that can safely navigate the complex physical world is extremely challenging.

AV software development has traditionally been based on a modular approach, with separate components for perception, prediction, planning and control. While there are benefits to this approach, it also opens up potential inefficiencies and errors that can hinder development at scale.

NVIDIA DRIVE AV software unifies these functions, using deep learning and foundation models trained on large datasets of human driving behavior to process sensor data and directly control vehicle actions, eliminating the need for predefined rules or traditional modular pipelines. As a result, vehicles can learn from vast amounts of real and synthetic driving behavior data to safely navigate complex environments and scenarios with human-like decision-making.

The NVIDIA Omniverse Blueprint for AV simulation can be used to further enhance the development pipeline, enabling physically accurate sensor simulation for AV training, testing and validation. By combining the blueprint with NVIDIA’s three-computer solution, developers can convert thousands of human-driven miles into billions of virtually driven miles, amplifying data quality and enabling efficient, scalable and continuously improving AV systems.

Bolstering End-to-End Safety With NVIDIA Halos

Safety is the most important component of all AVs. That’s why NVIDIA earlier this year launched NVIDIA Halos, a comprehensive end-to-end safety system integrating hardware, software, AI models and tools to ensure safe AV development and deployment from cloud to car. Halos provides guardrails for AV safety across simulation, training and deployment — and is backed by 15,000 engineering years of expertise.

A key part of this safety framework is the NVIDIA DriveOS safety-certified ASIL B/D operating system for autonomous driving, which provides a robust, reliable foundation for safe vehicle operation and meets stringent automotive safety standards.

The Future of Transportation, Here Today

With Halos and support for intelligent, adaptive sensors, NVIDIA’s AV stack delivers the tools, compute power and foundational AI models needed to accelerate safe, intelligent mobility — today.

NVIDIA has worked with the European auto industry for over a dozen years to drive automotive innovation, partnering with leading manufacturers, suppliers and mobility startups across the continent and the globe.

The work is transforming vehicle cockpits, along with automotive vehicle design, engineering and manufacturing, and enabling highly automated and self-driving vehicles with physical AI and accelerated computing.

At GTC Paris, NVIDIA also showcased how the transportation industry is using NVIDIA Omniverse and Cosmos for factory planning, vehicle design and simulation.

Plus, NVIDIA announced today at the Computer Vision and Pattern Recognition conference that it won the End-to-End Autonomous Driving Grand Challenge, recognized for creating technologies that allow the development of safer, smarter AVs using real-world and synthetic data — enabling these vehicles to handle even unexpected driving situations. This marks NVIDIA’s second consecutive year topping the leaderboard in the end-to-end category and its third straight Autonomous Grand Challenge award at CVPR.

Watch the NVIDIA GTC Paris keynote from NVIDIA founder and CEO Jensen Huang at VivaTech, and explore GTC Paris sessions.

Leading European Healthcare and Life Sciences Companies Innovate With NVIDIA AI

At NVIDIA GTC Paris, Europe-based healthcare and life sciences companies are showcasing themselves as leaders in global healthcare innovation, demonstrating their ability to deliver transformative, AI-driven impact at a time when the continent needs it most.

Aimed at addressing staffing shortages, aging populations and rising costs, their diverse efforts across biopharma, population health and digital medicine are supported by NVIDIA Inception, a program that provides cutting-edge startups with benefits including access to the latest developer tools, technical training and exposure to venture capital firms.

Whether building the world’s largest biodiversity database or deploying intelligent agents and AI factories that accelerate discovery and patient care, here’s how European companies are tapping the NVIDIA BioNeMo platform, NVIDIA DGX Cloud and the NVIDIA Cloud Partner network to drive better health outcomes for the region and beyond.

Basecamp Research’s AI-Ready Genomics Database Breaks the Data Wall

London-based Inception startup Basecamp Research has unveiled BaseData, the world’s largest and most diverse biological dataset for generative AI in life sciences.

Built from samples collected at over 125 locations in 26 countries, BaseData contains more than 9.8 billion new biological sequences and over a million previously unknown species — making it 30x larger, and growing up to 1,000x faster, than UniRef50, a public database that’s been used to train more than 80% of all biological sequence models.

This resource is now being used to train next-generation foundation models using NVIDIA BioNeMo Framework on the NVIDIA DGX Cloud Lepton platform.

By collaborating with NVIDIA, Basecamp has overcome the bottlenecks of scale, diversity and data governance that have traditionally held back commercial biopharma research. Its approach — combining a new data supply chain, global partnerships and GPU-accelerated workflows — enables the retraining of new classes of biological foundation models with the goal of unlocking generalizable biological design and accelerating drug discovery.

With this milestone, Basecamp is setting a new industry benchmark for data-driven AI in biosciences and laying the groundwork for generative biology breakthroughs.

“Data-optimal scaling is the key to overcoming the real-world limitations of current biological models,” said Phil Lorenz, chief technology officer at Basecamp Research. “Combined with NVIDIA’s compute and AI stack, we’re training models that can understand and generate biology like never before.”

Intelligent Agents Transform Healthcare Delivery in the U.K.

Guy’s and St. Thomas’ NHS Foundation Trust — the largest NHS Trust in the U.K., with over 2.8 million patient contacts a year — is launching the Proactive and Accessible Transformation of Healthcare initiative, aka PATH, in collaboration with global investment company General Catalyst and Inception startups Hippocratic AI and Sword Health.

PATH seeks to transform care delivery by integrating advanced AI agents, helping to reduce specialty care waitlists, improve pain management and streamline triage.

“PATH aims to deliver better, faster and fairer healthcare for all,” said Ian Abbs, CEO of Guy’s and St. Thomas’. “By combining cutting-edge technology, including AI, with clinical care, we can build a more proactive NHS.”

This initiative brings together Hippocratic AI’s conversational agents that automate tasks like patient outreach, history-taking and referral validation with Sword Health’s AI Care platform, which has treated over 500,000 patients globally across physical pain, pelvic health and other clinical areas, delivering 6.5 million AI sessions to date.

“Our safety-focused generative AI agents can enable healthcare abundance in the U.K.,” said Munjal Shah, founder and CEO of Hippocratic AI. “With more personalized care, patients can feel more supported and heard, improving outcomes.”

“Our AI Care platform transforms the way care is delivered, turning waiting lists into recovery journeys,” said Virgílio Bento, founder and CEO of Sword Health. “Together, we can reduce waste in healthcare and materially improve clinical outcomes.”

PATH will explore solutions to address the U.K.’s elective care crisis, with more than 53,000 patients waiting for a first appointment at Guy’s and St. Thomas’ alone. By prioritizing cases based on clinical need and enabling timely intervention, PATH aims to design a blueprint for national-scale, AI-driven healthcare transformation.

“The goal of PATH is to enable the NHS to work better for everyone,” said Chris Bischoff, managing director at General Catalyst. “By deploying applied AI to increase access, improve care, optimize resources and empower staff, we believe we can build an NHS fit for the future.”

Pangaea Data’s AI Platform Closes Care Gaps Across Hard-to-Diagnose Diseases

Pangaea Data, an Inception company based in London and San Francisco, is helping close care gaps by discovering patients who are untreated and under-treated despite available intelligence in their existing medical records.

Pangaea’s platform is powered by the NVIDIA NeMo Agent toolkit, an open-source library for profiling and optimizing connected teams of AI agents. Pangaea is also adopting NVIDIA NIM microservices to harness large language models that can help identify patients with rare and prevalent hard-to-diagnose diseases.

These tools help calculate the clinical scores required to discover patients with such diseases — which often present with non-specific features such as fever, nausea and headache — in a manner that emulates a clinician’s manual review process for higher accuracy.
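
As a rough illustration of how an application might call a locally deployed NIM microservice for this kind of extraction (NIM endpoints are OpenAI API-compatible), here is a minimal sketch; the host, model name and prompt are assumptions, not Pangaea's configuration.

```python
from openai import OpenAI

# A locally deployed NIM microservice exposes an OpenAI-compatible endpoint
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

record = "58-year-old patient with recurrent fever, nausea and headache over six months ..."

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",  # example NIM model name
    messages=[
        {"role": "system",
         "content": "Extract clinical features relevant to scoring hard-to-diagnose diseases."},
        {"role": "user", "content": record},
    ],
    temperature=0.0,
)
print(response.choices[0].message.content)
```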

“The NeMo Agent toolkit and NVIDIA NIM microservices enable us to reduce the time taken to configure our platform for each disease condition from weeks to a single day, helping accelerate our mission of improving patient outcomes globally,” said Vibhor Gupta, founder and CEO of Pangaea Data.

Pangaea helps global pharmaceutical providers, health systems and policymakers transform care for better outcomes and increased health equity.

Its platform is now being deployed across health systems in the U.S., U.K., Spain and Barbados as a key part of a population health initiative led by the Prime Minister of Barbados.

Sofinnova, Cure51 and Next-Generation Startups Tap NVIDIA DGX Cloud

Through NVIDIA DGX Cloud Lepton, top European healthcare and life sciences venture capital firms like Sofinnova are bolstering AI-native startups.

As part of this initiative, selected VCs can offer their top portfolio companies early access to DGX Cloud Lepton — including access to 25 NVIDIA H100 GPU nodes for three months, plus white-glove support — to help them scale into new markets requiring sovereign, localized compute.

DGX Cloud is already used by startups like Paris-based Cure51, U.K.-based Sensible Biotechnologies and U.K.-based Molecular Glue Labs (MGL) through an Inception program benefit.

Cure51, a Sofinnova portfolio company, achieved a 17x speedup in genomic analysis and 2x cost savings by shifting workloads to NVIDIA BioNeMo running on DGX Cloud.

Using NVIDIA’s sovereign AI infrastructure through the NVIDIA Cloud Partner network, Sensible reduced its optimization cycles for cell-based mRNA therapeutics design from 15 days to just one, while MGL validated new protein engineering approaches. This program demonstrates how regional innovation labs and cloud partners can empower early-stage biotech.

Learn more about NVIDIA Inception startups advancing AI for healthcare and life sciences in Europe at NVIDIA GTC Paris, taking place June 10-12 at VivaTech. 

Watch the NVIDIA GTC Paris keynote from NVIDIA founder and CEO Jensen Huang at VivaTech, and explore GTC Paris sessions.

NVIDIA Releases New AI Models and Developer Tools to Advance Autonomous Vehicle Ecosystem

Autonomous vehicle (AV) stacks are evolving from many distinct models to a unified, end-to-end architecture that executes driving actions directly from sensor data. This transition to using larger models is drastically increasing the demand for high-quality, physically based sensor data for training, testing and validation.

To help accelerate the development of next-generation AV architectures, NVIDIA today released NVIDIA Cosmos Predict-2 — a new world foundation model with improved future world state prediction capabilities for high-quality synthetic data generation — as well as new developer tools.

Cosmos Predict-2 is part of the NVIDIA Cosmos platform, which equips developers with technologies to tackle the most complex challenges in end-to-end AV development. Industry leaders such as Oxa, Plus and Uber are using Cosmos models to rapidly scale synthetic data generation for AV development.

Cosmos Predict-2 Accelerates AV Training

Building on Cosmos Predict-1 — which was designed to predict and generate future world states using text, image and video prompts — Cosmos Predict-2 better understands context from text and visual inputs, leading to fewer hallucinations and richer details in generated videos.

Cosmos Predict-2 enhances text adherence and common sense for a stop sign at the intersection.

By using the latest optimization techniques, Cosmos Predict-2 significantly speeds up synthetic data generation on NVIDIA GB200 NVL72 systems and NVIDIA DGX Cloud.

Post-Training Cosmos Unlocks New Training Data Sources

By post-training Cosmos models on AV data, developers can generate videos that accurately match existing physical environments and vehicle trajectories, as well as generate multi-view videos from a single-view video, such as dashcam footage. The ability to turn widely available dashcam data into multi-camera data gives developers access to new troves of data for AV training. These multi-view videos can also be used to replace real camera data from broken or occluded sensors.

Post-trained Cosmos models generate multi-view videos to significantly augment AV training datasets.

The NVIDIA Research team post-trained Cosmos models on 20,000 hours of real-world driving data. Using the AV-specific models to generate multi-view video data, the team improved model performance in challenging conditions such as fog and rain.

AV Ecosystem Drives Advancements Using Cosmos Predict

AV companies have already integrated Cosmos Predict to scale and accelerate vehicle development.

Autonomous trucking leader Plus, which is building its solution with the NVIDIA DRIVE AGX platform, is post-training Cosmos Predict on trucking data to generate highly realistic synthetic driving scenarios, accelerating commercialization of its autonomous solutions at scale. AV software company Oxa is also using Cosmos Predict to support the generation of multi-camera videos with high fidelity and temporal consistency.

New NVIDIA Models and NIM Microservices Empower AV Developers

In addition to Cosmos Predict-2, NVIDIA today also announced Cosmos Transfer as an NVIDIA NIM microservice preview for easy deployment on data center GPUs.

The Cosmos Transfer NIM microservice preview augments datasets and generates photorealistic videos using structured input or ground-truth simulations from the NVIDIA Omniverse platform. And the NuRec Fixer model helps inpaint and resolve gaps in reconstructed AV data.

NuRec Fixer fills in gaps in driving data to improve neural reconstructions.

CARLA, the world’s leading open-source AV simulator, integrated Cosmos Transfer and NVIDIA NuRec — a set of application programming interfaces and tools for neural reconstruction and rendering — into its latest release. This enables CARLA’s user base of over 150,000 AV developers to render synthetic simulation scenes and viewpoints with high fidelity and to generate endless variations of lighting, weather and terrain using simple prompts.

Developers can try out this pipeline using open-source data available on the NVIDIA Physical AI Dataset. The latest dataset release includes 40,000 clips generated using Cosmos, as well as sample reconstructed scenes for neural rendering. With this latest version of CARLA, developers can author new trajectories, reposition sensors and simulate drives.
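
As a point of reference, the short CARLA sketch below shows the kind of scripted drive and sensor repositioning described above, using only the standard CARLA Python API; the Cosmos Transfer and NuRec steps are omitted, and the spawn point and camera placement are arbitrary.

```python
import carla

# Connect to a running CARLA server and grab the current world
client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

# Spawn a vehicle and let the built-in autopilot drive it
bp_lib = world.get_blueprint_library()
vehicle = world.spawn_actor(bp_lib.filter("vehicle.*")[0],
                            world.get_map().get_spawn_points()[0])
vehicle.set_autopilot(True)

# Reposition a camera: mounted higher and farther back than a stock dashcam
cam_bp = bp_lib.find("sensor.camera.rgb")
cam_tf = carla.Transform(carla.Location(x=-4.0, z=3.0))
camera = world.spawn_actor(cam_bp, cam_tf, attach_to=vehicle)

# Save frames to disk; these could later be augmented or reconstructed
camera.listen(lambda image: image.save_to_disk(f"out/{image.frame:06d}.png"))
```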

Such scalable data generation pipelines unlock the development of end-to-end AV model architectures, as recently demonstrated by NVIDIA Research’s second consecutive win at the End-to-End Autonomous Grand Challenge at CVPR.

The challenge offered researchers the opportunity to explore new ways to handle unexpected situations — beyond using only real-world human driving data — to accelerate the development of smarter AVs.

NVIDIA Halos Advances End-to-End AV Safety

To bolster the operational safety of AV systems, NVIDIA earlier this year introduced NVIDIA Halos — a comprehensive safety platform that integrates the company’s full automotive hardware and software safety stack with state-of-the-art AI research focused on AV safety.

Bosch, Easyrain and Nuro are the latest automotive leaders to join the NVIDIA Halos AI Systems Inspection Lab to verify the safe integration of their products with NVIDIA technologies and advance AV safety. Lab members announced earlier this year include Continental, Ficosa, OMNIVISION, onsemi and Sony Semiconductor Solutions.

Watch the NVIDIA GTC Paris keynote from NVIDIA founder and CEO Jensen Huang at VivaTech, and explore GTC Paris sessions.

Go With the Flow: NVIDIA Teams With Ansys and DCAI to Advance Quantum Algorithms for Fluid Dynamics

AI supercomputing is accelerating the development of new quantum applications, driving breakthroughs in critical industries such as aerospace, automotive and manufacturing.

Underscoring that opportunity, Ansys announced today it is using the NVIDIA CUDA-Q quantum computing platform running on the Gefion supercomputer to advance quantum algorithms for fluid dynamics applications.

Gefion is Denmark’s first AI supercomputer, consisting of an NVIDIA DGX SuperPOD interconnected with NVIDIA Quantum-2 InfiniBand networking. Using the open-source NVIDIA CUDA-Q software platform, Ansys drew on the power of the supercomputer to perform GPU-accelerated simulations of quantum algorithms applicable to fluid dynamics applications.

“To discover tomorrow’s practical quantum applications, researchers need to be able to run meaningfully large simulations of them today,” said Tim Costa, senior director of quantum and CUDA-X at NVIDIA. “NVIDIA is enabling collaborators like Ansys and the DCAI by providing the supercomputing platforms researchers need to increase quantum computing’s impact.”

Gefion is based in Copenhagen and operated by DCAI. It was established by the Novo Nordisk Foundation and the Export and Investment Fund of Denmark.

CUDA-Q taps into GPU-accelerated libraries, enabling Gefion to run complex simulations of a class of algorithms known as Quantum Lattice Boltzmann Methods. By simulating how these algorithms would perform on a 39-qubit quantum computer, Ansys could rapidly and cost-effectively investigate how they impact fluid dynamics applications.
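
To give a sense of what GPU-accelerated simulation of a quantum circuit looks like in CUDA-Q, here is a minimal sketch that samples a simple entangling circuit on the GPU statevector simulator; the circuit and qubit count are illustrative and unrelated to Ansys' Quantum Lattice Boltzmann work.

```python
import cudaq

# Select the GPU-accelerated statevector simulator backend
cudaq.set_target("nvidia")

@cudaq.kernel
def ghz(num_qubits: int):
    qubits = cudaq.qvector(num_qubits)
    h(qubits[0])
    for i in range(1, num_qubits):
        x.ctrl(qubits[i - 1], qubits[i])
    mz(qubits)

# Simulate and sample a 30-qubit circuit, which fits on a single modern GPU
counts = cudaq.sample(ghz, 30, shots_count=1000)
print(counts)
```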

“We’re seeing how CUDA-Q can unlock hybrid quantum-classical computing for researchers using Gefion,” said Nadia Carlsten, CEO of DCAI. “Partnering with NVIDIA and Ansys has allowed us to drive the convergence of quantum technologies and AI supercomputing.”

“CUDA-Q’s GPU-accelerated simulations have allowed us to study quantum applications in the regimes where we can really begin to see their effects,” said Prith Banerjee, chief technology officer of Ansys. “Working with NVIDIA and DCAI, we’re expanding the role of quantum computing in engineering disciplines like computational fluid dynamics.”

This latest work builds on NVIDIA’s recent announcements on using accelerated computing to propel quantum computing research — including the opening of Japan’s ABCI-Q, the world’s largest quantum research supercomputer, and a new NVIDIA-powered supercomputer at the National Center for High-Performance Computing in Taiwan.

Watch the NVIDIA GTC Paris keynote from NVIDIA founder and CEO Jensen Huang at VivaTech, and explore GTC Paris sessions.

European Financial Services Industry Goes All In on AI to Support Smarter Investments

AI is already driving revenue increases for financial institutions — and with new investments in AI infrastructure and development across Europe, the region’s financial services industry is poised to mint even greater value from the technology.

With sovereign AI models and agents built using AI factories, financial institutions and digital payment companies can extract powerful insights from their vast data sources to protect investments, detect fraud and offer personalized services to customers.

At NVIDIA GTC Paris at VivaTech, one of Europe’s largest finance companies announced that it’s building an NVIDIA-powered AI factory to deploy sovereign AI for wide-ranging financial services.

Banks and online payment companies operating across the continent are harnessing NVIDIA AI and data science libraries to speed up data analysis for fraud detection and other applications. And the region’s AI platforms and service providers are helping banks and fintech companies accelerate their workflows with AI agents and models built on NVIDIA software libraries, models and blueprints.

European Bank Builds AI Factory to Develop and Scale Financial Services Applications

Across Europe, banks are building regional AI factories to enable the deployment of AI models for customer service, fraud detection, risk modeling and the automation of regulatory compliance.

In Germany, Finanz Informatik, the digital technology provider of the Savings Banks Finance Group, is scaling its on-premises AI factory and using NVIDIA AI Enterprise software for applications including an AI assistant to help its employees automate routine tasks and efficiently process the institution’s banking data.

Financial Services Companies Speed Data Science and Processing

Leading online payment and banking providers in Europe are tapping NVIDIA CUDA-X AI and data science libraries to accelerate financial data processing and analysis.

Amsterdam-based neobank bunq, which serves over 17 million users in the European Union, uses NVIDIA-accelerated XGBoost to boost fraud detection workflows.

The company’s AI-powered monitoring system is used to flag suspicious transactions that present risk of fraud or money laundering. Using NVIDIA GPUs running XGBoost and NVIDIA cuDF, bunq accelerated its model training by 100x and data processing by 5x.
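
As a rough sketch of that pattern (loading tabular data with cuDF and training an XGBoost model on the GPU), consider the following; the file, column names and parameters are placeholders, not bunq's pipeline.

```python
import cudf
import xgboost as xgb

# Load transaction features straight into GPU memory
df = cudf.read_parquet("transactions.parquet")   # placeholder dataset
X = df.drop(columns=["is_fraud"])
y = df["is_fraud"]

# Train a gradient-boosted classifier on the GPU
clf = xgb.XGBClassifier(tree_method="hist", device="cuda", n_estimators=500)
clf.fit(X, y)

# Score transactions; high probabilities flag candidates for review
risk = clf.predict_proba(X)[:, 1]
print(risk[:10])
```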

The company is also using NVIDIA NIM microservices to implement and scale large language model-powered applications like its personal AI assistant, dubbed Finn. The bank uses NVIDIA NeMo Retriever, a collection of NIM microservices for extracting, embedding and reranking enterprise data so it can be semantically searched, which can help further improve Finn’s accuracy.
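
Retrieval flows like Finn's typically embed documents and queries through an embedding NIM microservice, which is also OpenAI API-compatible. The sketch below is a minimal illustration; the endpoint, model name and input_type field reflect NVIDIA's hosted retrieval NIMs as commonly documented and are assumptions rather than bunq's setup.

```python
import os
from openai import OpenAI

# NVIDIA-hosted embedding NIM; a self-hosted NIM would use its own base_url
client = OpenAI(base_url="https://integrate.api.nvidia.com/v1",
                api_key=os.environ["NVIDIA_API_KEY"])

docs = ["How do I raise my card limit?", "Steps to dispute a transaction"]

emb = client.embeddings.create(
    model="nvidia/nv-embedqa-e5-v5",          # example retrieval embedding model
    input=docs,
    extra_body={"input_type": "passage", "truncate": "END"},
)
vectors = [d.embedding for d in emb.data]     # store these in a vector index
print(len(vectors), len(vectors[0]))
```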

The recently launched NVIDIA AI Blueprint for financial fraud detection also includes XGBoost to support anomaly detection from financial data. The NVIDIA AI Blueprint is available for customers to run on Amazon Web Services and Hewlett Packard Enterprise, with availability coming soon on Dell Technologies. Customers can also adopt the blueprint through NVIDIA partners including Cloudera, EXL, Infosys and SHI International.

Checkout.com is a London-based fintech company providing digital payment solutions to enterprises around the world. The company, which operates in more than 55 countries and supports over 180 currencies, is speeding up data analysis pipelines from minutes to under 10 seconds using the NVIDIA cuDF accelerator for pandas — the go-to Python library for data handling and analysis.
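
The cuDF accelerator for pandas works as a drop-in layer: enable it before importing pandas, and existing pandas code runs on the GPU where supported, falling back to the CPU otherwise. A minimal sketch with a placeholder dataset follows.

```python
# Enable GPU acceleration for pandas before it is imported
import cudf.pandas
cudf.pandas.install()

import pandas as pd

df = pd.read_parquet("payments.parquet")      # placeholder dataset
summary = (df.groupby("merchant_id")["amount"]
             .agg(["count", "mean", "sum"])
             .sort_values("sum", ascending=False))
print(summary.head())
```

The same effect can be achieved without touching the code by launching an unmodified script with `python -m cudf.pandas script.py`.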

Checkout.com is also exploring the use of NVIDIA cuML and the RAPIDS Accelerator for Apache Spark to further boost analysis of the company’s terabyte-scale data lake.

PayPal, based in the U.S., is another popular digital payment platform for European customers. The company used the RAPIDS Accelerator for Apache Spark to achieve a 70% cost reduction for Spark-based data pipelines running on NVIDIA accelerated computing.
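
The RAPIDS Accelerator is enabled through Spark configuration rather than code changes; the PySpark sketch below shows the typical plugin settings, with the jar path, version and data location as placeholders.

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("gpu-etl")
         # Plugin jar shipped with the RAPIDS Accelerator (path and version are placeholders)
         .config("spark.jars", "/opt/rapids/rapids-4-spark_2.12-25.02.0.jar")
         .config("spark.plugins", "com.nvidia.spark.SQLPlugin")
         .config("spark.rapids.sql.enabled", "true")
         .getOrCreate())

# Existing DataFrame/SQL pipelines run unchanged, with supported operators on the GPU
df = spark.read.parquet("s3://example-bucket/transactions/")   # placeholder path
df.groupBy("country").agg({"amount": "sum"}).show()
```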

Investment management firms are adopting GPU-accelerated optimization for capital allocation in dynamic markets. The NVIDIA cuFOLIO module, built on the NVIDIA cuOpt optimization engine, enables rapid portfolio adjustments that balance risk, return and investor preferences — turning time-consuming, CPU-bound workflows into scalable, real-time simulation engines.

AI Platforms, Solution Providers Offer NVIDIA-Accelerated Financial Services

European software companies and solution providers are integrating NVIDIA AI software to accelerate financial services workflows for their customers.

Dataiku, an AI platform company founded in Paris and based in New York City, announced at GTC Paris a new blueprint to help banking and insurance institutions deploy agentic AI systems at scale. The company is also integrating the NVIDIA Enterprise AI Factory validated design to accelerate AI development, and offers native integration of its LLM Mesh platform with NVIDIA NIM microservices.

KX, a financial modeling software company based in the U.K., launched an AI Banker Agent Blueprint at GTC Paris. Built with NVIDIA AI tools including the NVIDIA NeMo platform, Nemotron family of models and NIM microservices, the blueprint can be deployed by banks as an AI-powered research assistant, client relationship manager or personalized customer portfolio manager.

Temenos, a global provider of banking technology, uses NIM microservices to deploy its generative AI models to banks. The company’s generative AI solutions can be applied to use cases including credit scoring, fraud detection and customer service.

Watch the NVIDIA GTC Paris keynote from NVIDIA founder and CEO Jensen Huang at VivaTech, and explore GTC Paris sessions.

NVIDIA NVL72 GB200 Systems Accelerate the Journey to Useful Quantum Computing

The integration of quantum processors into tomorrow’s supercomputers promises to dramatically expand the problems that can be addressed with compute — revolutionizing industries including drug and materials development.

In addition to being part of the vision for tomorrow’s hybrid quantum-classical supercomputers, accelerated computing is dramatically advancing the work quantum researchers and developers are already doing to achieve that vision. And in today’s development of tomorrow’s quantum technology, NVIDIA GB200 NVL72 systems and their fifth-generation multinode NVIDIA NVLink interconnect capabilities have emerged as the leading architecture.

Here are five key quantum computing workloads in development, powered by NVIDIA Blackwell architecture.

1. Developing Better Quantum Algorithms

Simulating how candidate algorithms will run on quantum computers allows researchers to discover and refine performant quantum applications. For example, large-scale simulations performed with Ansys on DCAI’s Gefion supercomputer are being used to develop new quantum algorithms for computational fluid dynamics.

But such simulations are extremely computationally intensive. GB200 NVL72’s high-bandwidth interconnect with all-to-all GPU connectivity is an important factor in allowing NVIDIA cuQuantum libraries to execute state-of-the-art simulation techniques on feasible time scales — with an 800x speedup compared with the best CPU implementations.

2. Designing Low-Noise Qubits

Conventional chip manufacturing relies heavily on detailed physics simulations to rapidly iterate toward performant processor designs. Quantum hardware designers must tap into these same simulation tools to discover low-noise qubit designs, which are crucial for quantum computing.

Simulations capable of emulating noise in potential qubit designs need to crunch through complex quantum mechanical calculations. GB200 NVL72, paired with cuQuantum’s dynamics library, provides a 1,200x speedup for these workloads — providing a valuable new tool that accelerates the design process for quantum hardware builders like Alice & Bob.

3. Generating Quantum Training Data

AI models show increasing promise for challenges in quantum computing, including performing the control operations needed to keep quantum computers running.

But in many cases, a key stumbling block for these models is obtaining the volumes of data needed to effectively train them. The necessary data would ideally come from actual quantum hardware, but this proves either expensive or simply unavailable.

Output from simulated quantum processors offers a solution. GB200 NVL72 can output quantum training data 4,000x faster than with CPU-based techniques, helping bring the latest AI advancements to quantum computing.

4. Exploring Hybrid Applications

Effective future quantum applications will lean on both quantum and classical hardware, seamlessly distributing algorithm subroutines to whichever hardware type is most appropriate.

Exploring hybrid algorithms suited to this environment requires a platform that can combine simulations of quantum hardware with access to state-of-the-art AI supercomputing, such as the capabilities offered by GB200 NVL72. NVIDIA CUDA-Q is such a platform. It can draw on GB200 NVL72 to provide an ideal hybrid computing environment for researchers to explore hybrid quantum-classical applications, speeding development by 1,300x.

5. Unlocking Quantum Error Correction

Future quantum-GPU supercomputers will rely on quantum error correction — a control process that continually runs qubit data through demanding decoding algorithms — to detect and correct errors.

The decoding algorithms required by quantum error correction run on conventional computing hardware and must process terabytes of data every second to stay on top of qubit errors. This requires the power of accelerated computing. GB200 NVL72 demonstrates a 500x speedup in running a commonly used class of decoding algorithms — making quantum error correction a feasible prospect for the future of quantum computing.

These breakthroughs are allowing the quantum computing industry to perform the quantum-GPU integrations needed for large-scale useful quantum computing.

For example, qubit-builder Diraq announced at NVIDIA GTC Paris that it is using the NVIDIA DGX Quantum reference architecture to connect its silicon spin qubits to NVIDIA GPUs. And the NVIDIA CUDA-Q Academic program is onboarding researchers to use GB200 NVL72 and other advanced technologies.

NVIDIA is working toward a future where all supercomputers integrate quantum hardware to solve commercially relevant problems. NVIDIA GB200 NVL72 is the platform for building this future.

Watch the NVIDIA GTC Paris keynote from NVIDIA founder and CEO Jensen Huang at VivaTech, and explore GTC Paris sessions.

Germany Builds Its AI Autobahn With NVIDIA

Germany is building on a long history of engineering innovation with new AI investments poised to transform the country’s economy — including the automotive, banking, manufacturing and robotics industries.

The country is deploying tens of thousands of NVIDIA GPUs to power AI factories that generate intelligence for businesses and researchers, optimized AI software to run agentic and reasoning models for enterprises, and physical AI technologies for next-generation cars and robots.

Industry leaders, startups and research institutions are highlighting these and other initiatives at ISC High Performance and NVIDIA GTC Paris at VivaTech this week.

Advanced AI Factory Infrastructure for Researchers, Enterprises

AI factories coming online across Germany will support the development of sovereign AI applications in the public and private sectors — including for the country’s small- and medium-size companies, known as the Mittelstand. The Mittelstand accounts for 99% of all enterprises in Germany and over half of the country’s economic output.

NVIDIA is building the world’s first industrial AI cloud for European manufacturers, based in Germany. Powered by NVIDIA DGX B200 systems and NVIDIA RTX PRO Servers featuring 10,000 NVIDIA Blackwell GPUs, this AI factory will enable Europe’s industrial leaders to accelerate manufacturing applications including design, engineering, simulation, digital twins and robotics.

The AI factory will be built following the framework of the NVIDIA Omniverse Blueprint for AI factory design and operations. It will run NVIDIA CUDA-X libraries as well as NVIDIA RTX and NVIDIA Omniverse-accelerated workloads.

Also in Germany, the Jülich Supercomputing Centre hosts JUPITER, a supercomputer that will be Europe’s first exascale system. Featuring about 24,000 NVIDIA GH200 Grace Hopper Superchips and NVIDIA Quantum-2 InfiniBand networking, JUPITER will double the computing capacity of the continent’s previous most powerful publicly available supercomputer.

Using NVIDIA AI platforms, the system will enable researchers to train massive large language models (LLMs) with over 100 billion parameters, increase the spatial resolution of climate and weather simulations, advance quantum computing research and streamline the creation of AI models for drug discovery.

Another German research supercomputer, Blue Lion, will run on the NVIDIA Vera Rubin architecture, NVIDIA’s upcoming AI platform. Built by Hewlett Packard Enterprise for the Leibniz Supercomputing Center (LRZ), it’s expected to go live in the second half of 2026 to accelerate climate, physics and machine learning workflows.

Enterprises and Startups Build Accelerated AI for Every Industry 

German companies — of all sizes and in nearly every field — are using NVIDIA technologies to unlock new AI capabilities and levels of acceleration.

DeepL, in Cologne, is one of the leading language AI companies, with over 10 million monthly active users. To accelerate its AI development, the company is deploying an NVIDIA DGX SuperPOD with DGX GB200 systems, which will enable it to translate all content on the internet in just over 18 days — a task that currently takes 194 days of nonstop data processing.

“As a leader in language AI, we rely on strong compute infrastructure to support research and development,” said Jarek Kutylowsky, CEO and founder of DeepL. “The NVIDIA DGX SuperPOD system will enable us to enhance current and future products with the latest AI advancements and unlock new generative capabilities for our customers.”

Black Forest Labs, a leading generative AI startup based in Freiburg, Germany, developed the FLUX.1 AI model suite for text-to-image generation, including the state-of-the-art models FLUX.1 Kontext [pro] and FLUX1.1 [pro]. Its open-weights FLUX.1-dev image generator is included in the NVIDIA AI Blueprint for 3D-guided generative AI.

German robotics and automation companies — including Agile Robots, idealworks, Neura Robotics and sensor solution company SICK — are integrating the NVIDIA Isaac platform for training, simulating and deploying robots and sensing solutions.

Finanz Informatik, the digitalization partner of the German Savings Banks Finance Group, is continuing to expand and develop its AI infrastructure in collaboration with NVIDIA, using NVIDIA AI infrastructure and NVIDIA AI Enterprise software to develop an AI assistant that helps employees automate routine tasks and efficiently process banking data.

In automotive, Mercedes-Benz is using Omniverse to create digital twins of its factories. In addition, its latest CLA sedan, launching now in Europe, is using NVIDIA’s full-stack DRIVE AV software running on the NVIDIA DRIVE AGX platform.

Others using NVIDIA technology are German automaker BMW Group and automotive supplier Continental. Motion technology company Schaeffler Group will use Omniverse to optimize robot-assisted manufacturing processes for automotive and industrial development.

German enterprises adopting NVIDIA AI also include supply chain solutions company KION Group, legal AI startup Noxtua and cybersecurity company secunet Security Networks AG.

AI Upskilling Trains Next Generation of Developers

To spark an AI transformation at every level of a country’s economy, it needs a vast community of AI developers.

That’s why Germany is investing in AI education and upskilling through nonprofits and university and industry collaborations.

One such effort is led by appliedAI, Europe’s largest initiative for the application of trusted AI, which recently launched a dedicated program for small and medium-sized German enterprises. The initiative aims to lower the threshold for AI adoption by providing smaller companies with access to state-of-the-art NVIDIA infrastructure and software — including NVIDIA AI Enterprise — as well as strategic guidance, hands-on training and connection to a broad ecosystem of partners.

A key focus of the program is to support the scalable deployment of agentic AI systems capable of reasoning, planning and autonomous action.

“The key to scaling AI in Germany lies in enabling our small- and medium-size enterprises,” said Andreas Liebl, CEO of appliedAI. “With this new program, launched in close collaboration with NVIDIA, we are democratizing access to world-class AI technology and supporting Germany’s economic backbone in mastering the digital transformation in a way that’s sovereign, sustainable and scalable.”

In academic partnerships, NVIDIA is a technology partner for LRZ and Friedrich-Alexander University of Erlangen-Nuremberg, a research university that offers developers access to NVIDIA-accelerated infrastructure with user support, including workflow guidance and training. Both institutions have upskilled thousands of students and researchers through nearly 100 instructor-led workshops from the NVIDIA Deep Learning Institute.

NVIDIA is also establishing a research center in Germany as part of the NVIDIA AI Technology Center program. The Bavarian AI hub, intended to be established in collaboration with the BayernKI consortium, will advance research in fields including digital medicine, stable diffusion AI and open-source robotics platforms to foster global collaboration.

Germany’s enterprises and systems integrators, too, are making it easier for anyone to harness AI acceleration.

SAP is working with NVIDIA to integrate NVIDIA NIM microservices, including the new universal LLM NIM microservice, into its AI Foundation. Systems integrators including Accenture, adesso, Deloitte, Materna and T-Systems offer customers tools to support the development and deployment of AI applications using NVIDIA’s full-stack AI platforms.

Watch the NVIDIA GTC Paris keynote from NVIDIA founder and CEO Jensen Huang at VivaTech, and explore GTC Paris sessions.

IQVIA and NVIDIA Harmonize to Accelerate Clinical Research and Commercialization With AI Agents

Tens of billions of euros are spent each year on drug research and development across the globe — but only a few dozen new drugs make it to market.

Agentic AI is introducing a rhythm shift in the pharmaceutical industry, with IQVIA, the world’s leading provider of clinical research services, commercial insights and healthcare intelligence, as its conductor.

At NVIDIA GTC Paris at VivaTech, IQVIA announced it is launching multiple AI orchestrator agents in collaboration with NVIDIA. These specialized agentic systems are designed to manage and accelerate complex pharmaceutical development workflows for IQVIA’s thousands of pharmaceutical, biotech and medical device customers across the globe.

The AI agents act as supervisors for a group of sub-agents that each specialize in different tasks, like a conductor managing the strings, woodwinds, brass and percussion sections. The orchestrator agent routes any necessary actions — like speech-to-text transcription, clinical coding, structured data extraction and data summarization — to the appropriate sub-agent, ensuring that each step in the complex workflow is accelerated, with human experts kept in the loop.
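
Conceptually, the pattern resembles the hypothetical sketch below: a plain-Python illustration of an orchestrator routing typed tasks to sub-agents, not IQVIA's implementation. In practice, each sub-agent would wrap a specialized model or service, and outputs would be reviewed by human experts.

```python
# Hypothetical illustration of the orchestrator pattern (not IQVIA's code):
# each sub-agent handles one task type; the orchestrator routes and collects.
def transcribe(task):
    return f"transcript of {task['audio']}"

def clinical_code(task):
    return f"clinical codes for: {task['text']}"

def extract_structured(task):
    return f"structured fields from: {task['text']}"

def summarize(task):
    return f"summary of: {task['text']}"

SUB_AGENTS = {
    "speech_to_text": transcribe,
    "clinical_coding": clinical_code,
    "data_extraction": extract_structured,
    "summarization": summarize,
}

def orchestrate(tasks):
    """Route each task to its specialist sub-agent and collect outputs for human review."""
    return [SUB_AGENTS[task["type"]](task) for task in tasks]

outputs = orchestrate([
    {"type": "speech_to_text", "audio": "site_visit.wav"},
    {"type": "summarization", "text": "Protocol section 4.2 ..."},
])
print(outputs)
```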

Using its vast databases comprising many petabytes of data and deep domain life sciences expertise, IQVIA can train and fine-tune these models for maximum productivity and efficiency.

Agentic AI Strikes a Chord in Clinical Trials

IQVIA’s localized expertise on regulatory requirements in different countries, including those across Europe, puts the company center stage in the clinical space.

Deployed in its healthcare-grade AI platform, IQVIA’s AI orchestrator agents are designed to accelerate every step of the pharmaceutical lifecycle, including clinical trials.

Clinical trials are a major undertaking in the research and development process for pharmaceutical companies, and planning and executing a trial typically takes years. The start-up process alone often takes about 200 days and is manually intensive.

IQVIA’s clinical trial start-up AI orchestrator agent addresses the growing need for acceleration in clinical trial timelines.

One component now accelerated by IQVIA’s AI orchestrator agents is target identification. The agent builds a knowledge base from research articles and biomedical databases, using customized AI models to identify key relationships among the data and extract insights.

This knowledge then enables IQVIA’s pharmaceutical customers to identify emerging scientific areas for indication prioritization — determining which indications to pursue, and in what order, for a particular asset — and to discover new opportunities for drug repurposing, unlocking uses that were previously unavailable.

Another agent, the clinical data review agent, uses a set of automated checks and specialized agents to catch data issues early, reducing the data review process from seven weeks to as little as two weeks.

“From molecule to market, AI promises to be transformative for life sciences and healthcare,” said Avinob Roy, vice president and general manager of product offerings for commercial analytics solutions at IQVIA.

IQVIA’s agents use NVIDIA NIM microservices, part of the NVIDIA AI Enterprise software platform, to streamline clinical site start-up. The agent directs its sub-agents to analyze clinical trial protocols and extract critical participant inclusion and exclusion criteria, using reasoning to solve these problems in phased steps.

By deploying the orchestrator agent, which works autonomously, research teams can focus on decision-making instead of time-consuming administrative tasks.

AI Agents Compose New Paths for Drug Commercialization

After a drug passes clinical trials, there’s still more work to be done before it’s accessible to patients.

Pharmaceutical companies must understand the market for their drug, landscape the disease it treats, map the patient journey and chart treatment pathways. The goal is to identify the right patient cohorts and understand how to reach them effectively.

“There are a lot of different components — market dynamics, patient behaviors, access challenges and the competitive landscape — that you need to triangulate to really understand where the bottlenecks are,” Roy said.

IQVIA orchestrator agents can provide a comprehensive understanding of how a treatment will reach patients by analyzing patient records, prescriptions and lab results in just a few days instead of weeks.

Another challenge is to capture the attention of healthcare professionals. To stand out, field teams often spend hours preparing for every interaction — crafting personalized, data-driven conversations that can deliver real value.

The IQVIA field companion orchestrator agent delivers tailored insights to pharmaceutical sales teams before each engagement with healthcare providers. By integrating physician demographics, digital behavior, prescribing patterns and patient dynamics, the agent helps field teams prepare for each meeting using near real-time insights, leading to more engaging and impactful discussions.

“The collective impact of these agents across numerous commercial workflows brings unprecedented precision and operational efficiency to the life sciences, supporting better experiences and outcomes for healthcare professionals and patients,” Roy said.

Learn more about the latest AI advancements for healthcare and other industries at NVIDIA GTC Paris, running through Thursday, June 12, at VivaTech.

Watch the NVIDIA GTC Paris keynote from NVIDIA founder and CEO Jensen Huang at VivaTech 2025, and explore GTC Paris sessions.
