NVIDIA Grace CPU C1 Gains Broad Support in Edge, Telco and Storage

NVIDIA is highlighting significant momentum for its new Grace CPU C1 this week at the COMPUTEX trade show in Taipei, with a strong showing of support from key original design manufacturer partners.

The expanding NVIDIA Grace CPU lineup, including the powerful NVIDIA Grace Hopper Superchip and the flagship Grace Blackwell platform, is delivering significant efficiency and performance gains for major enterprises tackling demanding AI workloads.

As AI continues its rapid advancement, power efficiency has become a critical factor in data center design for applications ranging from large language models to complex simulations.

The NVIDIA Grace architecture is directly addressing this challenge.

NVIDIA Grace Blackwell NVL72, a rack-scale system integrating 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell GPUs, is being adopted by major cloud providers to accelerate AI training and inference, including complex reasoning and physical AI tasks.

The NVIDIA Grace architecture now comes in two key configurations: the dual-CPU Grace Superchip and the new single-CPU Grace CPU C1.

The C1 variant is gaining significant traction in edge, telco, storage and cloud deployments where maximizing performance per watt is paramount.

The Grace CPU C1 boasts a claimed 2x improvement in energy efficiency compared with traditional CPUs, a vital advantage in distributed and power-constrained environments.

Leading manufacturers, including Foxconn, Jabil, Lanner, MiTAC Computing, Supermicro and Quanta Cloud Technology, are fueling this momentum by developing systems that harness the Grace CPU C1's capabilities.

In the telco space, the NVIDIA Compact Aerial RAN Computer, which combines the Grace CPU C1 with an NVIDIA L4 GPU and NVIDIA ConnectX-7 SmartNIC, is gaining traction as a platform for distributed AI-RAN, meeting the power, performance and size requirements for deployment at cell sites.

NVIDIA Grace is also finding a home in storage solutions, with WEKA and Supermicro adopting it for its high performance and memory bandwidth.

Real-World Impact

NVIDIA Grace’s benefits aren’t theoretical — they’re tangible in real-world deployments:

  • ExxonMobil is using Grace Hopper for seismic imaging, crunching massive datasets to gain insights on subsurface features and geological formations.
  • Meta is deploying Grace Hopper for ad serving and filtering, using the high-bandwidth NVIDIA NVLink-C2C interconnect between the CPU and GPU to manage enormous recommendation tables.
  • High-performance computing centers such as the Texas Advanced Computing Center and Taiwan’s National Center for High-Performance Computing are using the Grace CPU in their systems for AI and simulation to advance research.

Learn more about the latest AI advancements at NVIDIA GTC Taipei, running May 21-22 at COMPUTEX.
