NVIDIA Expands Collaboration With Microsoft to Help Developers Build, Deploy AI Applications Faster

If optimized AI workflows are like a perfectly tuned orchestra — where each component, from hardware infrastructure to software libraries, hits exactly the right note — then the long-standing harmony between NVIDIA and Microsoft is music to developers’ ears.

The latest AI models developed by Microsoft, including the Phi-3 family of small language models, are being optimized to run on NVIDIA GPUs and made available as NVIDIA NIM inference microservices. Other microservices developed by NVIDIA, such as the cuOpt route optimization AI, are regularly added to Microsoft Azure Marketplace as part of the NVIDIA AI Enterprise software platform.

In addition to these AI technologies, NVIDIA and Microsoft are delivering a growing set of optimizations and integrations for developers creating high-performance AI apps for PCs powered by NVIDIA GeForce RTX and NVIDIA RTX GPUs.

Building on the progress shared at NVIDIA GTC, the two companies are furthering this ongoing collaboration at Microsoft Build, an annual developer event, taking place this year in Seattle through May 23.

Accelerating Microsoft’s Phi-3 Models 

Microsoft is expanding its family of Phi-3 open small language models, adding small (7-billion-parameter) and medium (14-billion-parameter) models similar to its Phi-3-mini, which has 3.8 billion parameters. It’s also introducing a new 4.2-billion-parameter multimodal model, Phi-3-vision, that supports images and text.

All of these models are GPU-optimized with NVIDIA TensorRT-LLM and available as NVIDIA NIMs, which are accelerated inference microservices with a standard application programming interface (API) that can be deployed anywhere.

APIs for the NIM-powered Phi-3 models are available at ai.nvidia.com and through NVIDIA AI Enterprise on the Azure Marketplace.
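
As a minimal sketch of what calling one of these endpoints can look like, the snippet below uses the OpenAI-compatible chat completions interface exposed by NVIDIA's hosted API catalog. The base URL and model identifier are assumptions to verify against ai.nvidia.com, not values confirmed by this article.

```python
import os
from openai import OpenAI

# Assumed hosted-NIM endpoint and catalog model name; check ai.nvidia.com for exact values.
client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",
    api_key=os.environ["NVIDIA_API_KEY"],
)

completion = client.chat.completions.create(
    model="microsoft/phi-3-mini-4k-instruct",   # assumed identifier for the Phi-3-mini NIM
    messages=[{"role": "user", "content": "Summarize what a NIM microservice is."}],
    max_tokens=128,
)
print(completion.choices[0].message.content)
```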

NVIDIA cuOpt Now Available on Azure Marketplace

NVIDIA cuOpt, a GPU-accelerated AI microservice for route optimization, is now available in Azure Marketplace via NVIDIA AI Enterprise. cuOpt features massively parallel algorithms that enable real-time logistics management for shipping services, railway systems, warehouses and factories.

The model has set two dozen world records on major routing benchmarks, demonstrating the best accuracy and fastest times. It could save billions of dollars for the logistics and supply chain industries by optimizing vehicle routes, saving travel time and minimizing idle periods.

Through Azure Marketplace, developers can easily integrate the cuOpt microservice with Azure Maps to support real-time logistics management and other cloud-based workflows, backed by enterprise-grade management tools and security.
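
As a rough illustration of how a cloud workflow might call the microservice, the sketch below posts a tiny routing problem to a locally deployed cuOpt instance. The endpoint path and payload field names are hypothetical placeholders for illustration only; the real request schema is defined in the cuOpt microservice documentation available through Azure Marketplace.

```python
import requests

# Hypothetical payload shape: a 3-stop cost matrix, one vehicle, two delivery tasks.
payload = {
    "cost_matrix_data": {"data": {"0": [[0, 5, 4], [5, 0, 3], [4, 3, 0]]}},
    "fleet_data": {"vehicle_locations": [[0, 0]]},
    "task_data": {"task_locations": [1, 2]},
}

# Hypothetical endpoint for a locally deployed microservice instance.
resp = requests.post("http://localhost:5000/cuopt/routes", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json())   # optimized routes and per-vehicle assignments
```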

Optimizing AI Performance on PCs With NVIDIA RTX

The NVIDIA accelerated computing platform is the backbone of modern AI — helping developers build solutions for over 100 million Windows GeForce RTX-powered PCs and NVIDIA RTX-powered workstations worldwide.

NVIDIA and Microsoft are delivering new optimizations and integrations to Windows developers to accelerate AI in next-generation PC and workstation applications. These include:

  • Faster inference performance for large language models via the NVIDIA DirectX driver, the Generative AI ONNX Runtime extension and DirectML. These optimizations, available now in the GeForce Game Ready, NVIDIA Studio and NVIDIA RTX Enterprise Drivers, deliver up to 3x faster performance on GeForce RTX and NVIDIA RTX GPUs.
  • Optimized performance on RTX GPUs for AI models like Stable Diffusion and Whisper via WebNN, an API that enables developers to accelerate AI models in web applications using on-device hardware.
  • With Windows set to support PyTorch through DirectML, thousands of Hugging Face models will work in Windows natively. NVIDIA and Microsoft are collaborating to scale performance on more than 100 million RTX GPUs.

Join NVIDIA at Microsoft Build 

Conference attendees can visit NVIDIA booth FP28 to meet developer experts and experience live demos of NVIDIA NIM, NVIDIA cuOpt, NVIDIA Omniverse and the NVIDIA RTX AI platform. The booth also highlights the NVIDIA MONAI platform for medical imaging workflows and the NVIDIA BioNeMo generative AI platform for drug discovery — both available on Azure as part of NVIDIA AI Enterprise.

Attend sessions with NVIDIA speakers to dive into the capabilities of the NVIDIA RTX AI platform on Windows PCs and discover how to deploy generative AI and digital twin tools on Microsoft Azure.

And sign up for the Developer Showcase, taking place Wednesday, to discover how developers are building innovative generative AI applications using NVIDIA AI software on Azure.

New Performance Optimizations Supercharge NVIDIA RTX AI PCs for Gamers, Creators and Developers

NVIDIA today announced at Microsoft Build new AI performance optimizations and integrations for Windows that help deliver maximum performance on NVIDIA GeForce RTX AI PCs and NVIDIA RTX workstations.

Large language models (LLMs) power some of the most exciting new use cases in generative AI and now run up to 3x faster with ONNX Runtime (ORT) and DirectML using the new NVIDIA R555 Game Ready Driver. ORT and DirectML are high-performance tools used to run AI models locally on Windows PCs.

WebNN, an application programming interface for web developers to deploy AI models, is now accelerated with RTX via DirectML, enabling web apps to incorporate fast, AI-powered capabilities. And PyTorch will support DirectML execution backends, enabling Windows developers to train and infer complex AI models on Windows natively. NVIDIA and Microsoft are collaborating to scale performance on RTX GPUs.

These advancements build on NVIDIA’s world-leading AI platform, which accelerates more than 500 applications and games on over 100 million RTX AI PCs and workstations worldwide.

RTX AI PCs — Enhanced AI for Gamers, Creators and Developers

NVIDIA introduced the first PC GPUs with dedicated AI acceleration, the GeForce RTX 20 Series with Tensor Cores, along with the first widely adopted AI model to run on Windows, NVIDIA DLSS, in 2018. Its latest GPUs offer up to 1,300 trillion operations per second of dedicated AI performance.

In the coming months, Copilot+ PCs equipped with new power-efficient systems-on-a-chip and RTX GPUs will be released, giving gamers, creators, enthusiasts and developers increased performance to tackle demanding local AI workloads, along with Microsoft’s new Copilot+ features.

For gamers on RTX AI PCs, NVIDIA DLSS boosts frame rates by up to 4x, while NVIDIA ACE brings game characters to life with AI-driven dialogue, animation and speech.

For content creators, RTX powers AI-assisted production workflows in apps like Adobe Premiere, Blackmagic Design DaVinci Resolve and Blender to automate tedious tasks and streamline workflows. From 3D denoising and accelerated rendering to text-to-image and video generation, these tools empower artists to bring their visions to life.

For game modders, NVIDIA RTX Remix, built on the NVIDIA Omniverse platform, provides AI-accelerated tools to create RTX remasters of classic PC games. It makes it easier than ever to capture game assets, enhance materials with generative AI tools and incorporate full ray tracing.

For livestreamers, the NVIDIA Broadcast application delivers high-quality AI-powered background subtraction and noise removal, while NVIDIA RTX Video provides AI-powered upscaling and auto-high-dynamic range to enhance streamed video quality.

Enhancing productivity, LLMs powered by RTX GPUs execute AI assistants and copilots faster, and can process multiple requests simultaneously.

And RTX AI PCs allow developers to build and fine-tune AI models directly on their devices using NVIDIA’s AI developer tools, which include NVIDIA AI Workbench, NVIDIA cuDNN and CUDA on Windows Subsystem for Linux. Developers also have access to RTX-accelerated AI frameworks and software development kits like NVIDIA TensorRT, NVIDIA Maxine and RTX Video.

The combination of AI capabilities and performance delivers enhanced experiences for gamers, creators and developers.

Faster LLMs and New Capabilities for Web Developers

Microsoft recently released the generative AI extension for ORT, a cross-platform library for AI inference. The extension adds support for optimization techniques like quantization for LLMs like Phi-3, Llama 3, Gemma and Mistral. ORT supports different execution providers for inferencing via various software and hardware stacks, including DirectML.

ORT with the DirectML backend offers Windows AI developers a quick path to develop AI capabilities, with stability and production-grade support for the broad Windows PC ecosystem. NVIDIA optimizations for the generative AI extension for ORT, available now in R555 Game Ready, Studio and NVIDIA RTX Enterprise Drivers, help developers get up to 3x faster performance on RTX compared to previous drivers.
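
For a concrete sense of the DirectML path these driver optimizations accelerate, the sketch below runs an ONNX model through ONNX Runtime's DirectML execution provider; the generative AI extension layers LLM-specific features such as int4 quantization on top of this. The model path and inputs are placeholders, and it assumes the onnxruntime-directml package on a Windows PC with an RTX GPU.

```python
import numpy as np
import onnxruntime as ort

# Prefer the DirectML provider (GPU) and fall back to CPU if it is unavailable.
session = ort.InferenceSession(
    "model.onnx",   # placeholder: an exported ONNX graph, e.g. a quantized LLM
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],
)

# Placeholder token IDs; real inputs depend on the exported model's signature.
dummy_ids = np.array([[1, 2, 3, 4]], dtype=np.int64)
outputs = session.run(None, {session.get_inputs()[0].name: dummy_ids})
print([o.shape for o in outputs])
```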

Inference performance for three LLMs using ONNX Runtime and the DirectML execution provider with the latest R555 GeForce driver, compared with the previous R550 driver. INSEQ=2000 is representative of document summarization workloads. All data was captured with a GeForce RTX 4090 GPU using batch size 1. The generative AI extension's support for int4 quantization, plus the NVIDIA optimizations, results in up to 3x faster performance for LLMs.

Developers can unlock the full capabilities of RTX hardware with the new R555 driver, bringing better AI experiences to consumers, faster. It includes:

  • Support for DQ-GEMM metacommand to handle INT4 weight-only quantization for LLMs
  • New RMSNorm normalization methods for Llama 2, Llama 3, Mistral and Phi-3 models
  • Group and multi-query attention mechanisms, and sliding window attention to support Mistral
  • In-place KV updates to improve attention performance
  • Support for GEMM of non-multiple-of-8 tensors to improve context phase performance

Additionally, NVIDIA has optimized AI workflows within WebNN to deliver the powerful performance of RTX GPUs directly within browsers. The WebNN standard helps web app developers accelerate deep learning models with on-device AI accelerators, like Tensor Cores.

Now available in developer preview, WebNN uses DirectML and ORT Web, a JavaScript library for in-browser model execution, to make AI applications more accessible across multiple platforms. With this acceleration, popular models like Stable Diffusion, SD Turbo and Whisper run up to 4x faster on WebNN compared with WebGPU and are now available for developers to use.

Microsoft Build attendees can learn more about developing on RTX in the "Accelerating development on Windows PCs with RTX AI" in-person session on Wednesday, May 22, at 11 a.m. PT.

A Superbloom of Updates in the May Studio Driver Gives Fresh Life to Content Creation

Editor’s note: This post is part of our In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX GPU features, technologies and resources, and how they dramatically accelerate content creation.

A superbloom of creative app updates, included in the May Studio Driver, is ready for download today.

New GPU-accelerated and AI-powered apps and features are now available, backed by the NVIDIA Studio platform.

And this week’s featured In the NVIDIA Studio artist, Yao Chan, created the whimsical, spring-inspired 3D scene By the Window using her NVIDIA RTX GPU.

May’s Creative App Rundown

RTX Video is a collection of AI enhancements that improves the quality of video played on apps like YouTube, Prime Video and Disney+. RTX Video Super Resolution (VSR) upscales video for cleaner, crisper imagery, while RTX Video HDR transforms standard dynamic range video content to high-dynamic range (HDR10), improving its visibility, details and vibrancy.

Mozilla Firefox, the third most popular PC browser, has added support for RTX VSR and HDR, including AI-enhanced upscaling, de-artifacting and HDR effects for most streamed videos.

NVIDIA RTX Remix allows modders to easily capture game assets, automatically enhance materials with generative AI tools and create stunning RTX remasters with full ray tracing. RTX Remix recently added DLSS 3.5 support featuring Ray Reconstruction, an AI model that creates higher-quality images for intensive ray-traced games and apps, to the modding toolkit.

Game developers interested in creating their own ray-traced mod for a classic game can download the RTX Remix Beta and watch tutorial videos to get a head start.

Maxon’s Cinema 4D modeling software empowers 3D video effects artists and motion designers to create complex scenes with ease. Version 2024.4’s integration with Cinema 4D’s Unified Simulation system now enables more precise control of emission fields to modify simulation behaviors.

This integration unlocks the ability to orchestrate object interactions with different simulation types, including Pyro, Cloth, soft bodies and rigid bodies. These simulations run considerably faster depending on the RTX GPU in use.

The NVIDIA Omniverse Audio2Face app for iClone 8 uses AI to produce expressive facial animations solely from audio input. In addition to generating natural lip-sync animations from dialogue, the latest standalone release supports multilingual lip-sync and singing animations, as well as full-spectrum editing with slider controls and a keyframe editor.

Along with accurate lip-sync, facial animations are significantly enhanced by nuanced facial expressions. By pairing Audio2Face with the iClone AccuFACE plug-in, powered by NVIDIA Maxine, Reallusion provides a flexible and multifaceted approach to facial animation, laying the groundwork with audio tracks and adding subtle expressions with webcams.

These latest AI-powered tools and creative app power-ups are available for NVIDIA and GeForce RTX GPU owners.

All Things Small, Bright and Beautiful

China-based 3D visual effects artist Yao Chan finds inspiration and joy in the small things in life.

“As the weather gradually warms up, everything is rejuvenating and flowers are blooming,” said Chan. “I want to create an illustration that captures the warm and bright atmosphere of spring.”

Her 3D scene By the Window closely resembles a corner of her home filled with various succulent plants, pots and neatly arranged gardening tools.

“I think everyone has a place or moment that warms their heart in one way or another, and that’s an emotion I want to share with my audience,” said the artist.

Chan usually first sketches out her ideas in Adobe Photoshop, but with her real-life reference already set, she dove right into blocking out the scene in Blender.

Since she wanted to use a hand-painted texture style for modeling the vases and pots, Chan added Blender’s displace modifier and used a Voronoi texture to give the shapes a handcrafted effect.

Chan used hair from the particle system and played with roughness, kink and hair shape effects to accurately model fluffy plants like Kochia scoparia and moss.

Blender Cycles’ RTX-accelerated OptiX ray tracing in the viewport, unlocked by Chan’s GeForce RTX GPU, ensured smooth, interactive modeling throughout her creative workflow.

Modeling and mesh work — complete.

For texturing, Chan referred to former In the NVIDIA Studio featured artist SouthernShotty’s tutorial, using the precision of geometry nodes to highlight the structure of objects and gradient nodes to control the color and transparency of plants.

Chan entered the node zone in Blender.

Chan then used the “pointiness” node to simulate the material of ceramic flower pots.

The “pointiness” node helped simulate materials.

Lighting was fairly straightforward, consisting of sunlight, a warm-toned key light, a cool-toned fill light and a small light source to illuminate the area beneath the table.

Several lights added brightness to the scene.

Chan also added a few volume lights in front of the camera.

Lighting from the side.

Finally, to give the image a more vintage look, Chan added noise to the final rendered image in compositing.

Final compositing work.

Chan’s AI-powered simulations and viewport renders were accelerated by her RTX GPU.

“RTX GPUs accelerate workflows and ensure fluent video editing,” she said.

Check out Chan’s latest work on Instagram.

3D artist Yao Chan.

Follow NVIDIA Studio on Instagram, X and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.

Maximizing Training Throughput Using PyTorch FSDP and Torch.compile

Recently, we demonstrated how FSDP and selective activation checkpointing can be used to achieve 57% MFU (model FLOPs utilization) when training a 7B model on A100 GPUs. We also demonstrated how this setup can train a high-quality model, which we open sourced as the Granite 7B base model on the Hugging Face Hub under the Apache v2.0 license.

We continued our quest to improve GPU utilization by leveraging torch.compile. Using torch.compile and the selective activation checkpointing from our previous work, we achieve an MFU of 68% for the 7B model on A100 GPUs! torch.compile improves training MFU by between 10% and 23%, depending on model size.
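
At a high level, the recipe is to wrap the model with FSDP and then compile the wrapped module. The sketch below shows the general shape under assumed model and block classes; it is an illustration, not the exact fms-fsdp training code.

```python
import functools
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, MixedPrecision
from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy

from my_llama import LlamaModel, LlamaBlock   # hypothetical stand-ins for the actual model code

dist.init_process_group("nccl")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

model = LlamaModel().cuda()

# Shard at transformer-block granularity so each block becomes one FSDP unit.
wrap_policy = functools.partial(transformer_auto_wrap_policy, transformer_layer_cls={LlamaBlock})
model = FSDP(
    model,
    auto_wrap_policy=wrap_policy,
    mixed_precision=MixedPrecision(param_dtype=torch.bfloat16),
    use_orig_params=True,   # lets torch.compile trace through FSDP's flattened parameters
)

# Compile after wrapping; as of PyTorch 2.3, graph breaks remain at FSDP unit boundaries
# but do not significantly affect throughput.
model = torch.compile(model)
```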

This blog is organized into three parts: (1) Challenges addressed in order to train using torch.compile, (2) Numerical parity of compile with no-compile, and (3) MFU report.

We have open sourced all the code and updated it in the fms-fsdp repository. We are also working with Team PyTorch at Meta to contribute these changes to the newly released torchtitan repository for pre-training.

Challenges of using torch.compile

torch.compile is a graph compilation technique that improves GPU utilization. For details on how torch.compile works, we refer readers to the recent PyTorch paper and associated tutorials. A key challenge in getting torch.compile to perform well is to minimize (or eliminate) graph breaks. We initially started with the Llama implementation provided by Meta, but compiling it caused too many graph breaks, reducing training throughput.

Several portions of the model architecture had to be fixed, with the most important one being the positional embedding layer (RoPE). The typical RoPE implementation uses complex numbers, which was not supported in torch.compile at the time of testing. We implemented RoPE using einops while maintaining parity with the original model architecture implementation. We had to properly cache the frequencies so that we did not run into graph breaks within the RoPE implementation.
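
A minimal real-valued RoPE sketch in that spirit is shown below: the frequencies are precomputed once and cached as buffers, and einops handles the pairwise rotation without complex tensors. This is an illustration of the approach, not the exact fms-fsdp implementation.

```python
import torch
from torch import nn
from einops import rearrange

class RotaryEmbedding(nn.Module):
    """Real-valued RoPE (no complex numbers), with frequencies cached as buffers so that
    compiled forward passes neither recompute them nor graph-break on their construction."""

    def __init__(self, head_dim: int, max_seq_len: int = 4096, theta: float = 10000.0):
        super().__init__()
        inv_freq = 1.0 / (theta ** (torch.arange(0, head_dim, 2).float() / head_dim))
        t = torch.arange(max_seq_len).float()
        angles = torch.outer(t, inv_freq)                       # (max_seq_len, head_dim/2)
        self.register_buffer("cos", angles.cos(), persistent=False)
        self.register_buffer("sin", angles.sin(), persistent=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, n_heads, head_dim), head_dim even
        b, s, h, d = x.shape
        cos = self.cos[:s].view(1, s, 1, d // 2)
        sin = self.sin[:s].view(1, s, 1, d // 2)
        pairs = rearrange(x, "b s h (p two) -> b s h p two", two=2)   # adjacent dims form pairs
        x1, x2 = pairs[..., 0], pairs[..., 1]
        out = torch.stack((x1 * cos - x2 * sin, x1 * sin + x2 * cos), dim=-1)
        return rearrange(out, "b s h p two -> b s h (p two)")
```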

Compiling an FSDP model does result in graph breaks, which the PyTorch team at Meta is working to remove. However, as of PyTorch 2.3, these graph breaks occur at FSDP unit boundaries and do not affect throughput significantly.

When using custom kernels, we need to wrap each kernel by exposing its API to torch.compile. This involves indicating which parameters are modified in place, how they are modified, and what shapes and strides the return values will have based on the inputs. In our case, the SDPA Flash attention kernel is already integrated appropriately, and we were able to get it to work with torch.compile with no graph breaks.
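
Conceptually, wrapping a kernel means registering it as a custom op and giving torch.compile a "fake" implementation that describes only the output metadata. The sketch below uses the torch.library.custom_op API available in newer PyTorch releases, with a hypothetical fused RMSNorm standing in for a real handwritten kernel; it illustrates the mechanism rather than reproducing the kernels used in this work.

```python
import torch

# Hypothetical fused kernel; a real deployment would dispatch to a handwritten CUDA/Triton kernel.
@torch.library.custom_op("fms::fused_rmsnorm", mutates_args=())
def fused_rmsnorm(x: torch.Tensor, weight: torch.Tensor, eps: float) -> torch.Tensor:
    # Eager reference implementation used when the op runs outside the compiled graph.
    return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + eps) * weight

@fused_rmsnorm.register_fake
def _(x, weight, eps):
    # Tell the compiler the output's shape, dtype and strides without running the kernel.
    return torch.empty_like(x)
```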

We also noticed that when increasing the amount of data from 2T to 6T tokens, the data loader became a bottleneck. A key reason is that we had previously implemented document shuffling in our data loader naively, with each worker maintaining a list of shuffled document pointers.

With the larger dataset, these pointer lists grew to hundreds of thousands of entries per worker. Maintaining pointer lists at this scale became expensive enough that CPU contention throttled our training throughput. We re-implemented document shuffling without any pointer lists using a linear congruential generator (LCG). An LCG is a pseudorandom number generator that implements a random walk over a population, providing sampling without replacement.

We leveraged the same idea to produce implicit bijective mappings from ordered to shuffled document indices, shrinking those annoying lists of hundreds of thousands of pointers down to a single integer of LCG state. This eliminated 80% of the bottleneck and provided a significant boost to our performance. We will devote a separate blog post to all the details of our performant pre-training data loader.
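
The sketch below illustrates the idea: a full-period LCG over a power-of-two modulus, with out-of-range values skipped, yields a bijective ordered-to-shuffled mapping whose entire state is one integer. The specific parameters here are toy choices for illustration, not the ones used in fms-fsdp.

```python
class LCGShuffle:
    """Bijective ordered-to-shuffled document index mapping via a full-period LCG.
    For modulus m = 2^k, the Hull-Dobell conditions (c odd, a % 4 == 1) guarantee
    the generator visits every residue exactly once per period."""

    def __init__(self, n_docs: int, seed: int = 0):
        self.n = n_docs
        self.m = 1 << max(1, (n_docs - 1).bit_length())  # smallest power of two >= n_docs
        self.a = 5                                        # toy multiplier, a % 4 == 1
        self.c = (2 * seed + 1) % self.m                  # odd increment derived from the seed
        self.state = seed % self.m                        # the only per-worker state we keep

    def next_index(self) -> int:
        # Skip values outside [0, n_docs); since m < 2 * n_docs, only a few steps are wasted.
        while True:
            self.state = (self.a * self.state + self.c) % self.m
            if self.state < self.n:
                return self.state
```

Each data loader worker now carries just this integer state and a handful of parameters, rather than a shuffled list of hundreds of thousands of document pointers.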

Numerical Parity of torch.compile and torch.no-compile

We had previously observed parity issues when training with compile and no-compile options, with one of these being related to the use of SDPA. After a few days of intense debugging sessions between the PyTorch teams at Meta and IBM, we were able to achieve parity between PyTorch compile and no-compile modes. To document and verify this parity, we take a mini-Llama model architecture of 1.4B size and train it to 100B tokens in four variations – no-compile, compile with no activation checkpointing, compile with selective activation checkpointing, and compile with full activation checkpointing.

We plot the loss curves and gradient norm for these options below:

Figure 1: Loss curve and gradient norm for various compile options

Further, we run lm-evaluation-harness and compare the model scores on various benchmarks, observing no major differences between compile and no-compile, as shown below.
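
A sketch of that comparison with lm-evaluation-harness is shown below, assuming its current Python API; the checkpoint path and task list are placeholders rather than the exact settings behind Figure 2.

```python
import lm_eval

# Run once per variant (no-compile, compile, ...) and compare the resulting scores.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=/path/to/mini-llama-1.4b-compile,dtype=bfloat16",  # placeholder path
    tasks=["hellaswag", "arc_easy", "winogrande"],                            # placeholder tasks
    batch_size=8,
)
print(results["results"])
```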

Figure 2: lm-evaluation-harness comparison of various benchmarks between compile and no-compile

From all these results, we observe that compile in all of its variants is equivalent to the no-compile option, demonstrating parity between compile and no-compile.

MFU report

Finally, as in our previous blog, we compute the MFU for four different model sizes on two clusters. One cluster has 128 A100 GPUs with 400 Gbps inter-node connectivity, and the other has 464 H100 GPUs with 3.2 Tbps inter-node connectivity. We use the selective activation checkpointing covered in the prior blog in addition to compile. We capture the results in the tables below.

Model size | Batch size | MFU no-compile | MFU compile | Percentage gain (%)
7B         | 2          | 0.57           | 0.68        | 20
13B        | 2          | 0.51           | 0.60        | 17
34B        | 2          | 0.47           | 0.54        | 15
70B        | 2          | 0.50           | 0.55        | 10

Table 1: MFU results with and without compile for Llama2 model architectures on 128 A100 80GB GPUs with 400 Gbps inter-node interconnect

Model size | Batch size | MFU no-compile | MFU compile | Percentage gain (%)
7B         | 2          | 0.37           | 0.45        | 21
13B        | 2          | 0.35           | 0.43        | 23
34B        | 2          | 0.32           | 0.38        | 19
70B        | 2          | 0.32           | 0.38        | 19

Table 2: MFU results with and without compile for Llama2 model architectures on 464 H100 80GB GPUs with 3.2 Tbps inter-node interconnect
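
For reference, these MFU figures can be read through the common approximation of roughly 6 x parameters FLOPs per token for a combined forward and backward pass. The blog does not spell out its exact accounting, so the helper below is illustrative rather than the authors' code.

```python
A100_PEAK_BF16_FLOPS = 312e12   # per-GPU dense BF16 peak

def approx_mfu(n_params: float, tokens_per_sec_per_gpu: float,
               peak_flops: float = A100_PEAK_BF16_FLOPS) -> float:
    """Model FLOPs utilization under the standard 6*N FLOPs-per-token approximation."""
    achieved_flops = 6.0 * n_params * tokens_per_sec_per_gpu
    return achieved_flops / peak_flops

# Example: a 7B model sustaining ~4,200 tokens/s per A100 works out to ~0.57,
# in line with the no-compile 7B entry in Table 1.
print(f"{approx_mfu(7e9, 4200):.2f}")
```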

We also had an internal production run on 448 GPUs using a Llama2 7B architecture. Using compile and selective activation checkpointing, with a global batch size of 3.7M tokens, we trained for 4T tokens in 13 days and 10 hours!

During training, the data center cooling had to kick in with extra air conditioning and our training team was alerted to this, since we were using the GPUs quite effectively ☺

One key observation from Tables 1 and 2 is that the MFU numbers do not scale linearly with model size. There are two possible explanations that we are actively investigating: one is the scalability of FSDP as model size increases, and whether tensor parallelism needs to be enabled to use the GPUs more effectively; the other is the batch size, which could be increased further to achieve better MFU. We plan to explore FSDP v2 and selective operator checkpointing, along with the tensor parallel feature, to study the scaling laws of FSDP with model size.

Future Work

We plan to start testing FSDP v2, which will be released as part of PyTorch 2.4. FSDP v2 provides per-parameter sharding and a selective operator checkpointing feature that can potentially deliver even better memory-compute tradeoffs.

We have also been engaging with the PyTorch team at Meta to evaluate the new asynchronous checkpointing feature, which can further improve GPU utilization by reducing the time spent writing checkpoints.

We are exploring extending the various Triton kernels currently used in inference to also perform backward operations, to gain speedups beyond inference.

Finally, as recent work on the use of FP8 emerges, we plan to explore how we can further accelerate model training using the new data type, which promises a 2x acceleration.

Acknowledgements

There are several teams that have been involved in reaching this proof point and we would like to thank the teams across Meta and IBM. Specifically, we extend our gratitude to the Meta PyTorch distributed and compiler teams and IBM Research.

Multiple people were extensively involved in the effort of achieving torch.compile numerical parity with our models, and we wish to acknowledge the key folks involved in this effort: Animesh Jain and Less Wright at Meta, and Linsong Chu, Davis Wertheimer, Brian Vaughan, Antoni i Viros Martin, Mudhakar Srivatsa, and Raghu Ganti at IBM Research.

Special thanks to Stas Bekman, who provided extensive feedback and helped improve this blog. Their insights have been invaluable in highlighting key aspects of optimizing the training and exploring further enhancements.

Every Company to Be an ‘Intelligence Manufacturer,’ Declares NVIDIA CEO Jensen Huang at Dell Technologies World

AI heralds a new era of innovation for every business in every industry, NVIDIA founder and CEO Jensen Huang said Monday during an appearance at Dell Technologies World.

“We now have the ability to manufacture intelligence,” Huang said during an on-stage conversation with Dell CEO Michael Dell. “The last Industrial Revolution was the manufacturing of software; previously, it was manufacturing electricity — now we are manufacturing intelligence.”

Together with Michael Dell, ServiceNow CEO Bill McDermott and Samsung SDS President and CEO Hwang Sung-woo, Huang shared his insights on the transformative impact of generative AI on the global economy and various industries.

“Every company at its foundation is intelligence — fundamentally every company is an intelligence manufacturer,” Huang emphasized, underscoring the potential of AI to create digital intelligence.

During the keynote, Dell and NVIDIA announced a slew of updates to the Dell AI Factory.

This includes the Dell PowerEdge XE9680L server with liquid cooling and eight NVIDIA Blackwell Tensor Core GPUs — the industry’s densest, energy-efficient rack-scale solution for large Blackwell GPU deployments.

The Dell NativeEdge platform will automate the delivery of NVIDIA AI Enterprise software, helping developers and IT operators easily deploy AI applications and solutions at the edge. Advancements also include the ability to simplify AI application development for faster time to value with the integration of NVIDIA NIM inference microservices, deployment automation and more.

Huang discussed the concept of an AI factory, likening it to the factories of the last Industrial Revolution that used water to produce electricity. In the current Industrial Revolution, data centers act as AI factories, transforming data and electricity into valuable data tokens distributed globally.

“What has happened is instead of just producing software, we’re now producing intelligence — that intelligence is formulated in the form of tokens that can then be expressed in any information modality that we’d like it to be,” Huang explained.

Huang underscored the importance of full-stack accelerated computing to enable this, noting NVIDIA’s advancements.

Together, NVIDIA and Dell are providing the world’s industries with a full-stack offering — including computing, networking, storage, services and software — that drives copilots, coding assistants, virtual customer service agents and industrial digital twins.

Michael Dell introduced the latest innovations for the Dell AI Factory with NVIDIA, emphasizing their ability to simplify and accelerate customers’ AI journeys.

“We are unleashing this super genius power. Everyone is going to have access to this technology — and it’s gonna get smarter,” Dell said.

The Dell AI Factory with NVIDIA, announced earlier this year, offers a full stack of AI solutions from data center to edge, enabling organizations to quickly adopt and deploy AI at scale.

This platform integrates Dell’s AI capabilities with NVIDIA’s cutting-edge technologies, providing customers with an expansive AI portfolio and an open ecosystem of technology partners.

The Dell AI Factory, based on the NVIDIA partnership, will help establish AI sovereignty for countries by enabling strong data security and customized AI service development.

Together, Dell and NVIDIA will bring these capabilities to companies, help stand them up, and help develop new applications that enterprises can deploy, Huang said.

“Our partnership between us is really about that, literally from the ground up building AI factories and delivering it to the world’s enterprises as a solution,” Huang said.
