Two years after he spoke at a conference detailing his ambitious vision for cooling tomorrow’s data centers, Ali Heydari and his team won a $5 million grant to go build it.
It was the largest of 15 awards in May from the U.S. Department of Energy. The DoE program, called COOLERCHIPS, received more than 100 applications from a who’s who list of computer architects and researchers.
“This is another example of how we’re rearchitecting the data center,” said Ali Heydari, a distinguished engineer at NVIDIA who leads the project and helped deploy more than a million servers in previous roles at Baidu, Twitter and Facebook.
“We celebrated on Slack because the team is all over the U.S.,” said Jeremy Rodriguez, who once built hyperscale liquid-cooling systems and now manages NVIDIA’s data center engineering team.
A Historic Shift
The project is ambitious and comes at a critical moment in the history of computing.
Processors are expected to generate up to an order of magnitude more heat as Moore’s law hits the limits of physics, even as the demands on data centers continue to soar.
Soon, today’s air-cooled systems won’t be able to keep up. And current liquid-cooling techniques won’t be able to handle the more than 40 watts per square centimeter that researchers expect future data center silicon will need to dissipate.
So, Heydari’s group defined an advanced liquid-cooling system.
Their approach promises to cool a data center packed into a mobile container, even when it’s placed in an environment up to 40 degrees Celsius and is drawing 200 kW — 25x the power of today’s server racks.
It will cost at least 5% less and run 20% more efficiently than today’s air-cooled approaches. It’s much quieter and has a smaller carbon footprint, too.
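As a rough sanity check on those targets, here is a back-of-envelope sketch in Python. The 200 kW load and 25x density factor come from this article; the monthly figure is simple arithmetic, not NVIDIA data:

```python
# Back-of-envelope arithmetic using the figures in this article.
container_power_kw = 200          # target load for the mobile container
density_factor = 25               # "25x the power of today's server racks"
todays_rack_kw = container_power_kw / density_factor
print(f"Implied conventional rack power: {todays_rack_kw:.0f} kW")  # ~8 kW

hours_per_month = 24 * 30
monthly_energy_mwh = container_power_kw * hours_per_month / 1000
print(f"Energy drawn per month at full load: {monthly_energy_mwh:.0f} MWh")  # ~144 MWh
```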
“That’s a great achievement for our engineers who are very smart folks,” he said, noting part of their mission is to make people aware of the changes ahead.
A Radical Proposal
The team’s solution combines two technologies never before deployed in tandem.
First, chips will be cooled with cold plates whose coolant evaporates like sweat on the foreheads of hard-working processors, then cools to condense and re-form as liquid. Second, entire servers, with their lower power components, will be encased in hermetically sealed containers and immersed in coolant.
Novel solution: Servers will be bathed in coolants as part of the project.
They will use a liquid common in refrigerators and car air conditioners, but not yet used in data centers.
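To see why an evaporating coolant is attractive, consider the governing relation Q = ṁ · h_fg: the heat absorbed equals the mass flow of coolant times its latent heat of vaporization. A minimal sketch, assuming a latent-heat value typical in magnitude for common refrigerants (the 200 kW load is from this article; the latent heat is an illustrative assumption, not a spec):

```python
# Minimal sketch: coolant flow needed to absorb a heat load by evaporation.
# Q = m_dot * h_fg, where h_fg is the latent heat of vaporization.
heat_load_w = 200_000        # 200 kW container load (from the article)
h_fg_j_per_kg = 200_000      # assumed: typical magnitude for common refrigerants

m_dot = heat_load_w / h_fg_j_per_kg
print(f"Required coolant mass flow: {m_dot:.1f} kg/s")  # ~1 kg/s
```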
Three Giant Steps
The three-year project sets annual milestones — component tests next year, a partial rack test a year later, and a full system tested and delivered at the end.
Icing the cake, the team will create a full digital twin of the system using NVIDIA Omniverse, an open development platform for building and operating metaverse applications.
The NVIDIA team consists of about a dozen thermal, power, mechanical and systems engineers, some dedicated to creating the digital twin. They have help from seven partners:
Binghamton and Villanova universities in analysis, testing and simulation
BOYD Corp. for the cold plates
Durbin Group for the pumping system
Honeywell to help select the refrigerant
Sandia National Laboratory in reliability assessment, and
Vertiv Corp. in heat rejection
“We’re extending relationships we’ve built for years, and each group brings an array of engineers,” said Heydari.
Of course, it’s hard work, too.
For instance, Mohammed Tradat, a former Binghamton researcher who now heads an NVIDIA data center mechanical engineering group, “had a sleepless night working on the grant application, but it’s a labor of love for all of us,” he said.
Heydari said he never imagined the team would be bringing its ideas to life when he delivered a talk on them in late 2021.
“No other company would allow us to build an organization that could do this kind of work — we’re making history and that’s amazing,” said Rodriguez.
See how digital twins, built in Omniverse, help optimize the design of a data center in the video below.
Picture at top: Gathered recently at NVIDIA headquarters are (from left) Scott Wallace (NVIDIA), Greg Strover (Vertiv), Vivien Lecoustre (DoE), Vladimir Troy (NVIDIA), Peter Debock (COOLERCHIPS program director), Rakesh Radhakrishnan (DoE), Joseph Marsala (Durbin Group), Nigel Gore (Vertiv), and Jeremy Rodriguez, Bahareh Eslami, Manthos Economou, Harold Miyamura and Ali Heydari (all of NVIDIA).
For about six years, AI has been an integral part of the artwork of Dominic Harris, a London-based digital artist who’s about to launch his biggest exhibition to date.
“I use it for things like giving butterflies a natural sense of movement,” said Harris, whose typical canvas is an interactive computer display.
Using a rack of NVIDIA’s latest GPUs in his studio, Harris works with his team of more than 20 designers, developers and other specialists to create artworks like Unseen. It renders a real-time collage of 13,000 butterflies — some fanciful, each unique, but none real. Exhibit-goers can make them flutter or change color with a gesture.
The Unseen exhibit includes a library of 13,000 digital butterflies.
The work attracted experts from natural history museums worldwide. Many were fascinated by the way it helps people appreciate the beauty and fragility of nature by inviting them to interact with creatures not yet discovered or yet to be born.
“AI is a tool in my palette that supports the ways I try to create a poignant human connection,” he said.
An Artist’s View of AI
Harris welcomes the public fascination with generative AI that sprang up in the past year, though it took him by surprise.
“It’s funny that AI in art has become such a huge topic because, even a year ago, if I told someone there’s AI in my art, they would’ve had a blank face,” he said.
Looking forward, AI will assist, not replace, creative people, Harris said.
“With each performance increase from NVIDIA’s products, I’m able to augment what I can express in a way that lets me create increasingly incredible original artworks,” he said.
A Living Stock Exchange
Combining touchscreens, cameras and other sensors, he aims to create connections between his artworks and the people who view and interact with them.
For instance, Limitless is an eight-foot interactive tower made up of gold blocks animated by a live data feed from the London Stock Exchange. Each block represents a company, shining or tarnished by its current rising or falling valuation. Touching a tile reveals the face of the company’s CEO, a reminder that human beings drive the economy.
Harris with “Limitless,” a living artwork animated in part with financial market data.
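The mapping Harris describes, from valuation change to visual finish, is simple to picture in code. A hypothetical sketch (the function name and thresholds are invented for illustration; this is not Harris’ actual software):

```python
# Hypothetical sketch of the mapping Limitless describes: a valuation move
# drives how shiny or tarnished a company's gold block appears.
def block_finish(percent_change: float) -> float:
    """Map a daily valuation change to a 0..1 'shine' value.

    1.0 = polished gold (rising), 0.0 = fully tarnished (falling).
    Clamps at +/-5%, an arbitrary illustrative range.
    """
    clamped = max(-5.0, min(5.0, percent_change))
    return (clamped + 5.0) / 10.0

print(block_finish(2.5))   # 0.75 -> mostly shiny
print(block_finish(-4.0))  # 0.1  -> heavily tarnished
```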
It’s one work in Feeding Consciousness, Harris’ largest exhibition to date, opening Thursday, May 25, at London’s Halcyon Gallery.
Booting Up Invitations
“Before the show even opened, it got extended,” he said, showing invitations that went out on small tablets loaded with video previews.
The NVIDIA Jetson platform for edge AI and robotics “features prominently in the event and has become a bit of a workhorse for me in many of my artworks,” he said.
An immersive space in the “Feeding Consciousness” exhibit relies on NVIDIA’s state-of-the-art graphics.
Three years in the making, the new exhibit includes one work that uses 180 displays. It also sports an immersive space created with eight cameras, four laser rangefinders and four 4K video projectors.
“I like building unique canvases to tell stories,” he said.
Harris puts the viewer in control of Antarctic landscapes in “Endurance.”
For example, Endurance depicts polar scenes Sir Ernest Shackleton’s expedition trekked through when their ship got trapped in the ice pack off Antarctica in 1915. All 28 men survived, and the sunken ship was discovered last year while Harris was working on his piece.
Harris encounters a baby polar bear from an artwork.
“I was inspired by men who must have felt minuscule before the forces of nature, and the role reversal, 110 years later, now that we know how fragile these environments really are,” he said.
Writing Software at Six
Harris started coding at age six. When his final project in architecture school — an immersive installation with virtual sound — won awards at University College London, it set the stage for his career as a digital artist.
Along the way, “NVIDIA was a name I grew up with, and graphics cards became a part of my palette that I’ve come to lean on more and more — I use a phenomenal amount of processing power rendering some of my works,” he said.
For example, next month he’ll install Every Wing Has a Silver Lining, a 16-meter-long work that displays 30,000 x 2,000 pixels, created in part with GeForce RTX 4090 GPUs.
“We use the highest-end hardware to achieve an unbelievable level of detail,” he said.
He shares his passion in school programs, giving children a template they can use to draw butterflies that he later brings to life on a website.
“It’s a way to get them to see and embrace art in the technology they’re growing up with,” he said, comparing it to NVIDIA Canvas, a digital drawing tool his six- and 12-year-old daughters love to use.
The Feeding Consciousness exhibition, previewed in the video below, runs from May 25 to August 13 at London’s Halcyon Gallery.
Keep the NVIDIA and Microsoft party going this GFN Thursday with Grounded, Deathloop and Pentiment now available to stream for GeForce NOW members this week.
These three Xbox Game Studios titles are part of the dozen additions to the GeForce NOW library.
Triple Threat
NVIDIA and Microsoft’s partnership continues to flourish with this week’s game additions.
What is this, a game for ants?!
Who shrunk the kids? Grounded from Obsidian Entertainment is an exhilarating, cooperative survival-adventure. The world of Grounded is a vast, beautiful and dangerous place — especially when you’ve been shrunken to the size of an ant. Explore, build and thrive together alongside the hordes of giant insects, fighting to survive the perils of a vast and treacherous backyard.
Unravel a web of deceit.
Also from Obsidian is Pentiment, the critically acclaimed, narrative-focused historical role-playing game featured on multiple Game of the Year lists in 2022. Step into a living illustrated world inspired by illuminated manuscripts, set at a time when Europe is at a crossroads of great religious and political change. Walk in the footsteps of Andreas Maler, a master artist amidst murders, scandals and intrigue in the Bavarian Alps. Impact a changing world and see the consequences of your decisions in this narrative adventure.
If at first you don’t succeed, die, die and die again.
DEATHLOOP is a next-gen first-person shooter from Arkane Lyon, the award-winning studio behind the Dishonored franchise. In DEATHLOOP, two rival assassins are trapped in a time loop on the island of Blackreef, doomed to repeat the same day for eternity. The only chance for escape is to end the cycle by assassinating eight key targets before the day resets. Learn from each cycle, try new approaches and break the loop. The game also includes support for RTX ray tracing for Ultimate and Priority members.
These three Xbox titles join Gears 5 as supported games on GeForce NOW. Members can stream them, along with more than 1,600 other titles in the GeForce NOW library.
Priority members can play at up to 1080p 60 frames per second and skip the waiting lines, and Ultimate members can play at up to 4K 120 fps on PC and Mac.
Middle-earth calls, as The Lord of the Rings: Gollum comes to GeForce NOW. Embark on a captivating interactive experience in this action-adventure game that unfolds parallel to the events of The Fellowship of the Ring. Assume the role of the enigmatic Gollum on a treacherous journey, discovering how he outsmarted the most formidable characters in Middle-earth. Priority and Ultimate members can experience the epic story with support for RTX ray tracing and DLSS technology.
In addition, members can look for the following:
Blooming Business: Casino (New release on Steam, May 23)
The Warhammer Skulls Festival is live today. Check it out for information about upcoming games in the Warhammer franchise, plus discounts on Warhammer titles on Steam and Epic Games Store. Stay up to date on these and other discounts through the GeForce NOW app.
Finally, we’ve got a question for you this week. Let us know what mischief you’d be up to on Twitter or in the comments below.
Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.
The GeForce RTX 4060 Ti 8GB GPU — part of the GeForce RTX 4060 family announced last week — is now available, starting at $399, from top add-in card providers including ASUS, Colorful, Galax, GIGABYTE, INNO3D, MSI, Palit, PNY and ZOTAC, as well as from system integrators and builders worldwide.
GeForce RTX 4060 Ti 8GB is available now from a range of providers.
GeForce RTX 40 Series GPUs come backed by NVIDIA Studio technologies, including hardware acceleration for 3D, video and AI workflows; optimizations for RTX hardware in over 110 of the most popular creative apps; and exclusive Studio apps like Omniverse, Broadcast and Canvas.
Plus, enhancements for NVIDIA Studio-powered creator apps keep coming in. MAGIX VEGAS Pro software for video editing is receiving a major AI overhaul that will boost performance for all GeForce RTX users.
And prepare to be inspired by U.K.-based livestreamer Warwick, equal parts insightful and inspirational, as they share their AI-based workflow powered by a GeForce RTX GPU and the NVIDIA Broadcast app, this week In the NVIDIA Studio.
At the Microsoft Build conference today, NVIDIA unveiled new tools for developers that will make it easier and faster to train and deploy advanced AI on Windows 11 PCs with RTX GPUs.
In addition, the Studio team wants to see how creators #SetTheScene, whether for an uncharted virtual world or a small interior diorama of a room.
Enter the #SetTheScene Studio community challenge. Post original environment art on Facebook, Twitter or Instagram, and use the hashtag #SetTheScene for a chance to be featured on the @NVIDIAStudio or @NVIDIAOmniverse social channels.
VEGAS Pro Gets an AI Assist Powered by RTX
NVIDIA Studio collaborated with MAGIX VEGAS Pro to accelerate AI model performance on Windows PCs with extraordinary results.
VEGAS Pro 20 update 3, released this month, increases the speed of AI effects — such as style transfer, AI upscaling and colorization — with NVIDIA RTX GPUs.
Shorter times are better. Tested on GeForce RTX 4090 GPU, Intel Core i9-12900K with UHD 770.
Style transfer, for example, uses AI to instantly apply the style of famous artists such as Picasso or van Gogh to a piece, with a staggering 219% performance increase over the previous version.
Warwick’s World
As this week’s featured In the NVIDIA Studio artist would say, “Welcome to the channnnnnnel!” Warwick is a U.K.-based content streamer who enjoys coffee, Daft Punk, tabletop role-playing games and cats. Alongside their immense talent and wildly entertaining persona lies an extraordinary superpower: empathy.
Warwick, like the rest of the world, had to find new ways to connect with people during the pandemic. They decided to pursue streaming as a way to build a community. Their vision was to create a channel that provides laughter and joy, escapism during stressful times and a safe haven for love and expression.
“It’s okay not to be okay,” stressed Warwick. “I’ve lived a lot of my life being told I couldn’t feel a certain way, show emotion or let things get me down. I was told that those were weaknesses that I needed to fight, when in reality they’re our truest strengths: being true to ourselves, feeling and being honest with our emotions.”
Warwick finds inspiration in making a positive contribution to other people’s lives. The thousands of subs speak for themselves.
But there are always ways to improve the quality of streams — plus, working and streaming full time can be challenging, as “it can be tough to get all your ideas to completion,” Warwick said.
For maximum efficiency, Warwick deploys their GeForce RTX 3080 GPU, taking advantage of the seventh-generation NVIDIA encoder (NVENC) to handle video encoding on dedicated hardware, freeing the rest of the graphics card for the games and apps they stream.
“NVIDIA is highly regarded in content-creation circles. Using OBS, Adobe Photoshop and Premiere Pro is made better by GeForce GPUs!” — Warwick
“I honestly can’t get enough of it!” said the streamer. “Being able to stream with OBS Studio software using NVENC lets me play the games I want at the quality I want, with other programs running to offer quality content to my community.”
Warwick has also experimented with the NVIDIA Broadcast app, which magically transforms dorms, home offices and more into home studios. They said the Eye Contact effect had “near-magical” results.
“Whenever I need to do ad reads, I find it incredible how well Eye Contact works, considering it’s in beta!” said Warwick. “I love the other Broadcast features that are offered for content creators and beyond.”
Warwick will be a panelist at an event hosted by Top Tier Queer (TTQ), an initiative that celebrates queer advocates in the creator space.
Sponsored by NVIDIA Studio and organized by In the NVIDIA Studio artist WATCHOLLIE, the TTQ event in June will serve as an avenue for queer visibility and advocacy, as well as an opportunity to award one participant with prizes, including a GeForce RTX 3090 GPU, to help amplify their voice even further. Apply for the TTQ initiative now.
Meet a Judge for Top Tier Queer!
What does TTQ mean to you?
“It means so much! To be a judge of this competition that aims to uplift and showcase underrated and underappreciated LGBTQ+ voices and celebrate them is truly an honour!” said Warwick (@WarwickZero).
Streaming is deeply personal for Warwick. “In my streams and everything I create, I aim to inspire others to know their feelings are valid,” they said. “And because of that, I feel the community that I have really appreciates me and the space that I give them.”
Generative AI — in the form of large language model (LLM) applications like ChatGPT, image generators such as Stable Diffusion and Adobe Firefly, and game rendering techniques like NVIDIA DLSS 3 Frame Generation — is rapidly ushering in a new era of computing for productivity, content creation, gaming and more.
At the Microsoft Build developer conference, NVIDIA and Microsoft today showcased a suite of advancements in Windows 11 PCs and workstations with NVIDIA RTX GPUs to meet the demands of generative AI.
More than 400 Windows apps and games already employ AI technology, accelerated by dedicated processors on RTX GPUs called Tensor Cores. Today’s announcements, which include tools to develop AI on Windows PCs, frameworks to optimize and deploy AI, and driver performance and efficiency improvements, will empower developers to build the next generation of Windows apps with generative AI at their core.
“AI will be the single largest driver of innovation for Windows customers in the coming years,” said Pavan Davuluri, corporate vice president of Windows silicon and system integration at Microsoft. “By working in concert with NVIDIA on hardware and software optimizations, we’re equipping developers with a transformative, high-performance, easy-to-deploy experience.”
Develop Models With Windows Subsystem for Linux
AI development has traditionally taken place on Linux, requiring developers to either dual-boot their systems or use multiple PCs to work in their AI development OS while still accessing the breadth and depth of the Windows ecosystem.
Over the past few years, Microsoft has been building a powerful capability to run Linux directly within the Windows OS, called Windows Subsystem for Linux (WSL). NVIDIA has been working closely with Microsoft to deliver GPU acceleration and support for the entire NVIDIA AI software stack inside WSL. Now developers can use Windows PC for all their local AI development needs with support for GPU-accelerated deep learning frameworks on WSL.
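A quick way to confirm the stack is wired up, assuming the Windows NVIDIA driver and a CUDA-enabled build of PyTorch are installed in the WSL distribution (a minimal sketch, not an official setup script):

```python
# Minimal check that GPU acceleration is visible inside WSL.
# Assumes the Windows NVIDIA driver plus a CUDA-enabled PyTorch in the WSL distro.
import torch

if torch.cuda.is_available():
    print(f"CUDA device: {torch.cuda.get_device_name(0)}")
    x = torch.randn(4096, 4096, device="cuda")
    y = x @ x  # a matmul running on the RTX GPU through WSL
    print(f"Result tensor on: {y.device}")
else:
    print("No CUDA device visible -- check driver and WSL GPU support.")
```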
With NVIDIA RTX GPUs delivering up to 48GB of RAM in desktop workstations, developers can now work with models on Windows that were previously only available on servers. The large memory also improves the performance and quality for local fine-tuning of AI models, enabling designers to customize them to their own style or content. And because the same NVIDIA AI software stack runs on NVIDIA data center GPUs, it’s easy for developers to push their models to Microsoft Azure Cloud for large training runs.
Rapidly Optimize and Deploy Models
With trained models in hand, developers need to optimize and deploy AI for target devices.
Microsoft released the Microsoft Olive toolchain for optimization and conversion of PyTorch models to ONNX, enabling developers to automatically tap into GPU hardware acceleration such as RTX Tensor Cores. Developers can optimize models via Olive and ONNX, and deploy Tensor Core-accelerated models to PC or cloud. Microsoft continues to invest in making PyTorch and related tools and frameworks work seamlessly with WSL to provide the best AI model development experience.
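Olive drives this pipeline from a configuration file, but the underlying export-then-deploy path can be sketched with plain PyTorch and ONNX Runtime. A minimal illustration (not the Olive API itself; the toy model is invented, and the DirectML provider requires the onnxruntime-directml package on Windows):

```python
# Minimal sketch of the optimize-and-deploy path: export a PyTorch model to
# ONNX, then run it through ONNX Runtime's DirectML provider on Windows.
# Olive automates and tunes this flow; this shows only the underlying steps.
import torch
import onnxruntime as ort

model = torch.nn.Sequential(torch.nn.Linear(128, 64), torch.nn.ReLU()).eval()
dummy = torch.randn(1, 128)
torch.onnx.export(model, dummy, "model.onnx",
                  input_names=["x"], output_names=["y"])

# Requires the onnxruntime-directml package for the DirectML provider.
session = ort.InferenceSession("model.onnx", providers=["DmlExecutionProvider"])
out = session.run(["y"], {"x": dummy.numpy()})
print(out[0].shape)  # (1, 64)
```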
Improved AI Performance, Power Efficiency
Once deployed, generative AI models demand incredible inference performance. RTX Tensor Cores deliver up to 1,400 Tensor TFLOPS for AI inferencing. Over the last year, NVIDIA has worked to improve DirectML performance to take full advantage of RTX hardware.
On May 24, we’ll release our latest optimizations in Release 532.03 drivers that combine with Olive-optimized models to deliver big boosts in AI performance. Using an Olive-optimized version of the Stable Diffusion text-to-image generator with the popular Automatic1111 distribution, performance is improved over 2x with the new driver.
Stable Diffusion performance tested on GeForce RTX 4090 using Automatic1111 and Text-to-Image function.
With AI coming to nearly every Windows application, efficiently delivering inference performance is critical — especially for laptops. Coming soon, NVIDIA will introduce new Max-Q low-power inferencing for AI-only workloads on RTX GPUs. It optimizes Tensor Core performance while keeping power consumption of the GPU as low as possible, extending battery life and maintaining a cool, quiet system. The GPU can then dynamically scale up for maximum AI performance when the workload demands it.
Join the PC AI Revolution Now
Top software developers — like Adobe, DxO, ON1 and Topaz — have already incorporated NVIDIA AI technology, with more than 400 Windows applications and games optimized for RTX Tensor Cores.
“AI, machine learning and deep learning power all Adobe applications and drive the future of creativity. Working with NVIDIA we continuously optimize AI model performance to deliver the best possible experience for our Windows users on RTX GPUs.” — Ely Greenfield, CTO of digital media at Adobe
“NVIDIA is helping to optimize our WinML model performance on RTX GPUs, which is accelerating the AI in DxO DeepPRIME, as well as providing better denoising and demosaicing, faster.” — Renaud Capolunghi, senior vice president of engineering at DxO
“Working with NVIDIA and Microsoft to accelerate our AI models running in Windows on RTX GPUs is providing a huge benefit to our audience. We’re already seeing 1.5x performance gains in our suite of AI-powered photography editing software.” — Dan Harlacher, vice president of products at ON1
“Our extensive work with NVIDIA has led to improvements across our suite of photo- and video-editing applications. With RTX GPUs, AI performance has improved drastically, enhancing the experience for users on Windows PCs.” — Suraj Raghuraman, head of AI engine development at Topaz Labs
NVIDIA and Microsoft are making several resources available for developers to test drive top generative AI models on Windows PCs. An Olive-optimized version of the Dolly 2.0 large language model is available on Hugging Face. And a PC-optimized version of NVIDIA NeMo large language model for conversational AI is coming soon to Hugging Face.
The complementary technologies behind Microsoft’s Windows platform and NVIDIA’s dynamic AI hardware and software stack will help developers quickly and easily develop and deploy generative AI on Windows 11.
Robotics hardware has traditionally required programmers to deploy it. READY Robotics wants to change that with its “no code” software, aimed at people working in manufacturing who don’t have programming skills.
The Columbus, Ohio, startup is a spinout of robotics research from Johns Hopkins University. Kel Guerin was a PhD candidate there leading this research when he partnered with Benjamin Gibbs, who was at Johns Hopkins Technology Ventures, to land funding and launch the company, now led by Gibbs as CEO.
“There was this a-ha moment where we figured out that we could take these types of visual languages that are very easy to understand and use them for robotics,” said Guerin, who’s now chief innovation officer at the startup.
READY’s “no code” ForgeOS operating system is designed to let anyone program any type of robot hardware or automation device. ForgeOS works with plug-ins for most major robot hardware and, like other operating systems such as Android, runs third-party apps and plug-ins, providing a robust ecosystem of partners and developers working to make robots more capable, says Guerin.
Delivering capabilities as apps means new features can be added to a robotic system in a few clicks, improving user experience and usability. Users can install their own apps, such as Task Canvas, which provides an intuitive building-block programming interface similar to Scratch, the simple block-based visual language for kids developed at MIT Media Lab that influenced Task Canvas’ design.
Task Canvas allows users to show the actions of the robot, as well as all the other devices in an automation cell (such as grippers, programmable logic controllers, and machine tools) as blocks in a flow chart. The user can easily create powerful logic by tying these blocks together — without writing a single line of code. The interface offers nonprogrammers a more “drag-and-drop” experience for programming and deploying robots, whether working directly on the factory floor with real robots on a tablet device or with access to simulation from Isaac Sim, powered by NVIDIA Omniverse.
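Under the hood, a flow of blocks like that reduces to a simple data structure. A hypothetical sketch of the idea (none of these names come from ForgeOS; they are invented for illustration):

```python
# Hypothetical sketch of a block-based task flow like the one Task Canvas
# presents visually: device actions chained together without writing code.
from dataclasses import dataclass, field

@dataclass
class Block:
    device: str                      # e.g. "robot", "gripper", "cnc"
    action: str                      # e.g. "move_to", "close", "start_cycle"
    params: dict = field(default_factory=dict)

def run_flow(blocks: list[Block]) -> None:
    """Execute blocks in order, as if chained in a flow chart."""
    for block in blocks:
        print(f"[{block.device}] {block.action} {block.params}")

run_flow([
    Block("robot", "move_to", {"pose": "pickup"}),
    Block("gripper", "close"),
    Block("robot", "move_to", {"pose": "cnc_loading"}),
    Block("cnc", "start_cycle"),
])
```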
Robot System Design in Simulation for Real-World Deployments
READY is making robotics system design easier for nonprogrammers, helping to validate robots and systems for accelerated deployments.
The company is developing Omniverse Extensions — Omniverse Kit applications based on Isaac Sim — and can deploy them on the cloud. It uses Omniverse Nucleus — the platform’s database and collaboration engine — in the cloud as well.
Isaac Sim is an application framework that lets developers simulate and test robots on virtual manufacturing lines before deploying them in the real world.
“Bigger companies are moving to a sim-first approach to automation because these systems cost a lot of money to install. They want to simulate them first to make sure it’s worth the investment,” said Guerin.
The startup charges per-seat software licensing for its platform and also offers support services to help customers roll out and develop systems.
It’s a huge opportunity. Roughly 90 percent of the world’s factories haven’t yet embraced automation, a trillion-dollar market.
READY is a member of NVIDIA Inception, a free program that provides startups with technical training, go-to-market support and AI platform guidance.
From Industrial Automation Giants to Stanley Black & Decker
The startup operates in an ecosystem of world-leading industrial automation providers, and these global partners are actively developing integrations with platforms like NVIDIA Omniverse and are investing in READY, said Guerin.
“Right now we are starting to work with large enterprise customers who want to automate but can’t find the expertise to do it,” he said.
Stanley Black & Decker, a global supplier of tools, is relying on READY to automate machines, including CNC lathes and mills.
Robotic automation had been hard to deploy in its factories until Stanley Black & Decker started using READY’s ForgeOS with its Station setup, which makes it possible to deploy robots in a day.
Creating Drag-and-Drop Robotic Systems in Simulation
READY is putting simulation capabilities into the hands of nonprogrammers, who can learn its Task Canvas interface for drag-and-drop programming of industrial robots in about an hour, according to the company.
The company also runs READY Academy, which offers a catalog of free training for manufacturing professionals to learn the skills to design, deploy, manage and troubleshoot robotic automation systems.
“For potential customers interested in our technology, being able to try it out with a robot simulated in Omniverse before they get their hands on the real thing — that’s something we’re really excited about,” said Guerin.
In this episode of the NVIDIA AI Podcast, host Noah Kravitz dives into an illuminating conversation with Alex Fielding, co-founder and CEO of Privateer Space.
Fielding is a tech industry veteran who previously worked alongside Apple co-founder Steve Wozniak on several projects and has deep expertise in engineering, robotics, machine learning and AI.
Privateer Space, Fielding’s latest venture, aims to address one of the most daunting challenges facing our world today: space debris.
The company is creating a data infrastructure to monitor and clean up space debris, ensuring sustainable growth for the budding space economy. In essence, they’re the sanitation engineers of the cosmos.
Privateer Space is also a part of NVIDIA Inception, a free program that offers go-to-market support, expertise and technology for AI startups.
During the podcast, Fielding shares the genesis of Privateer Space, his journey from Apple to the space industry, and his subsequent work on communication between satellites at different altitudes.
He also addresses the severity of space debris, explaining how every launch adds more debris, including minute yet potentially dangerous fragments like frozen propellant and paint chips.
Tune in to the podcast for more on what the future holds for the intersection of AI and space.
A postdoctoral researcher at the University of Minnesota discusses his efforts to allow amputees to control their prosthetic limb — right down to the finger motions — with their minds.
Overjet, a member of NVIDIA Inception, is moving fast to bring AI to dentists’ offices. Dr. Wardah Inam, CEO of the company, discusses using AI to improve patient care.
Luis Voloch, co-founder and chief technology officer of Immunai, talks about tackling the challenges of the immune system with a machine learning and data science mindset.
Subscribe to the AI Podcast: Now Available on Amazon Music
The National Energy Research Scientific Computing Center (NERSC), the U.S. Department of Energy’s lead facility for open science, measured results across four of its key high performance computing and AI applications.
They clocked how fast the applications ran and how much energy they consumed on CPU-only and GPU-accelerated nodes on Perlmutter, one of the world’s largest supercomputers using NVIDIA GPUs.
The results were clear. Accelerated with NVIDIA A100 Tensor Core GPUs, energy efficiency rose 5x on average. An application for weather forecasting logged gains of 9.8x.
GPUs Save Megawatts
On a server with four A100 GPUs, NERSC got up to 12x speedups over a dual-socket x86 server.
That means, at the same performance level, the GPU-accelerated system would consume 588 megawatt-hours less energy per month than a CPU-only system. Running the same workload on a four-way NVIDIA A100 cloud instance for a month, researchers could save more than $4 million compared to a CPU-only instance.
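That monthly figure follows from simple arithmetic, and it explains the heading above. A sketch using only the article’s number:

```python
# What "588 megawatt-hours less per month" means in average power,
# using only the article's figure and hours-in-a-month arithmetic.
saved_mwh_per_month = 588
hours_per_month = 24 * 30

avg_power_saved_mw = saved_mwh_per_month / hours_per_month
print(f"Average continuous power saved: {avg_power_saved_mw:.2f} MW")  # ~0.82 MW
```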
Measuring Real-World Applications
The results are significant because they’re based on measurements of real-world applications, not synthetic benchmarks.
The gains mean that the 8,000+ scientists using Perlmutter can tackle bigger challenges, opening the door to more breakthroughs.
Among the many use cases for the more than 7,100 A100 GPUs on Perlmutter, scientists are probing subatomic interactions to find new green energy sources.
Advancing Science at Every Scale
The applications NERSC tested span molecular dynamics, material science and weather forecasting.
For example, MILC simulates the fundamental forces that hold particles together in an atom. It’s used to advance quantum computing, study dark matter and search for the origins of the universe.
BerkeleyGW helps simulate and predict optical properties of materials and nanostructures, a key step toward developing more efficient batteries and electronic devices.
NERSC apps get efficiency gains with accelerated computing.
EXAALT, which got an 8.5x efficiency gain on A100 GPUs, solves a fundamental challenge in molecular dynamics. It lets researchers simulate the equivalent of short videos of atomic movements rather than the sequences of snapshots other tools provide.
The fourth application in the tests, DeepCAM, is used to detect hurricanes and atmospheric rivers in climate data. It got a 9.8x gain in energy efficiency when accelerated with A100 GPUs.
The overall 5x gain in energy efficiency is based on a mix of HPC and AI applications.
Savings With Accelerated Computing
The NERSC results echo earlier calculations of the potential savings with accelerated computing. For example, in a separate analysis NVIDIA conducted, GPUs delivered 42x better energy efficiency on AI inference than CPUs.
That means switching all the CPU-only servers running AI worldwide to GPU-accelerated systems could save a whopping 10 trillion watt-hours of energy a year. That’s like saving the energy 1.4 million homes consume in a year.
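A quick check of that comparison, where the per-home figure is simply what the article’s two numbers imply:

```python
# Sanity-check the homes comparison: 10 trillion Wh spread over 1.4M homes.
total_savings_wh = 10e12
homes = 1.4e6

kwh_per_home_per_year = total_savings_wh / homes / 1000
print(f"Implied annual use per home: {kwh_per_home_per_year:,.0f} kWh")  # ~7,143 kWh
```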
Accelerating the Enterprise
You don’t have to be a scientist to get gains in energy efficiency with accelerated computing.
Pharmaceutical companies are using GPU-accelerated simulation and AI to speed the process of drug discovery. Carmakers like BMW Group are using it to model entire factories.
They’re among the growing ranks of enterprises at the forefront of what NVIDIA founder and CEO Jensen Huang calls an industrial HPC revolution, fueled by accelerated computing and AI.
Scientific researchers need massive computational resources that can support exploration wherever it happens. Whether they’re conducting groundbreaking pharmaceutical research, exploring alternative energy sources or discovering new ways to prevent financial fraud, accessible state-of-the-art AI computing resources are key to driving innovation. This new model of computing can solve the challenges of generative AI and power the next wave of innovation.
Cambridge-1, a supercomputer NVIDIA launched in the U.K. during the pandemic, has powered discoveries from some of the country’s top healthcare researchers. The system is now becoming part of NVIDIA DGX Cloud to accelerate the pace of scientific innovation and discovery — across almost every industry.
As a cloud-based resource, it will broaden access to AI supercomputing for researchers in climate science, autonomous machines, worker safety and other areas, delivered with the simplicity and speed of the cloud and ideally located for U.K. and European access.
DGX Cloud is a multinode AI training service that makes it possible for any enterprise to access leading-edge supercomputing resources from a browser. The original Cambridge-1 infrastructure included 80 NVIDIA DGX systems; now it will join DGX Cloud, giving customers access to world-class infrastructure.
History of Healthcare Insights
Academia, startups and the U.K.’s large pharma ecosystem used the Cambridge-1 supercomputing resource to accelerate research and design new approaches to drug discovery, genomics and medical imaging with generative AI in some of the following ways:
InstaDeep, in collaboration with NVIDIA and the Technical University of Munich Lab, developed a 2.5 billion-parameter LLM for genomics on Cambridge-1. This project aimed to create a more accurate model for predicting the properties of DNA sequences.
King’s College London used Cambridge-1 to create 100,000 synthetic brain images — and made them available for free to healthcare researchers. Using the open-source AI imaging platform MONAI, the researchers at King’s created realistic, high-resolution 3D images of human brains, training in weeks versus months.
Oxford Nanopore used Cambridge-1 to quickly develop highly accurate, efficient models for base calling in DNA sequencing. The company also used the supercomputer to support inference for the ORG.one project, which aims to enable DNA sequencing of critically endangered species.
Peptone, in collaboration with a pharma partner, used Cambridge-1 to run physics-based simulations to evaluate the effect of mutations on protein dynamics with the goal of better understanding why specific antibodies work efficiently. This research could improve antibody development and biologics discovery.
Relation Therapeutics developed a large language model that reads DNA to better understand genes, a key step toward creating new medicines. Their research takes us a step closer to understanding how genes are controlled in certain diseases.
Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.
The GeForce RTX 4060 family will be available starting next week, bringing massive creator benefits to the popular 60-class GPUs.
The latest GPUs in the 40 Series come backed by NVIDIA Studio technologies, including hardware acceleration for 3D, video and AI workflows; optimizations for RTX hardware in over 110 of the most popular creative apps; and exclusive Studio apps like Omniverse, Broadcast and Canvas.
Real-time ray-tracing renderer D5 Render introduced support for NVIDIA DLSS 3 technology, enabling super smooth real-time rendering experiences, so creators can work with larger scenes without sacrificing speed or interactivity.
Plus, the new Into the Omniverse series highlights the latest advancements to NVIDIA Omniverse, a platform furthering the evolution of the metaverse with the OpenUSD framework. The series showcases how artists, developers and enterprises can use the open development platform to transform their 3D workflows. The first installment highlights an update coming soon to the Adobe Substance 3D Painter Connector.
In addition, NVIDIA 3D artist Daniel Barnes returns this week In the NVIDIA Studio to share his mesmerizing, whimsical animation, Wormhole 00527.
Beyond Fast
The GeForce RTX 4060 family is powered by the ultra-efficient NVIDIA Ada Lovelace architecture with fourth-generation Tensor Cores for AI content creation, third-generation RT Cores and compatibility with DLSS 3 for ultra-fast 3D rendering, as well as the eighth-generation NVIDIA encoder (NVENC), now with support for AV1.
The GeForce RTX 4060 Ti GPU.
3D modelers can build and edit realistic 3D models in real time, up to 45% faster than the previous generation, thanks to third-generation RT Cores, DLSS 3 and the NVIDIA Omniverse platform.
Tested on GeForce RTX 4060 and 3060 GPUs. Maya with Arnold 2022 (7.1.1) measures render time of NVIDIA SOL 3D model. DaVinci Resolve measures FPS applying Magic Mask effect “Faster” quality setting to 4K resolution. ON1 Resize AI measures time required to apply effect to batch of 10 photos. Time measurement is normalized for easier comparison across tests.
Video editors specializing in Adobe Premiere Pro, Blackmagic Design’s DaVinci Resolve and more have at their disposal a variety of AI-powered effects, such as auto-reframe, magic mask and depth estimation. Fourth-generation Tensor Cores seamlessly hyper-accelerate these effects, so creators can stay in their flow states.
Broadcasters can jump into next-generation livestreaming with the eighth-generation NVENC with support for AV1. The new encoder is 40% more efficient, making livestreams appear as if there were a 40% increase in bitrate — a big boost in image quality that enables 4K streaming on apps like OBS Studio and platforms such as YouTube and Discord.
10 Mbps with default OBS streaming settings.
NVENC boasts the most efficient hardware encoding available, providing significantly better quality than other GPUs. At the same bitrate, images will look better and sharper, with fewer artifacts, as in the example above.
Encode quality comparison, measured with BD-BR.
Creators are embracing AI en masse. DLSS 3 multiplies frame rates in popular 3D apps. ON1 Resize AI, software that enables high-quality photo enlargement, is sped up 24% compared with last-generation hardware. DaVinci Resolve’s AI Magic Mask feature saves video editors considerable time by automating the highly manual process of rotoscoping, carried out 20% faster than on the previous generation.
The GeForce RTX 4060 Ti (8GB) will be available starting Wednesday, May 24, at $399. The GeForce RTX 4060 Ti (16GB) will be available in July, starting at $499. GeForce RTX 4060 will also be available in July, starting at $299.
Visit the Studio Shop for GeForce RTX 4060-powered NVIDIA Studio systems when available, and explore the range of high-performance Studio products.
D5 Render, DLSS 3 Combine to Beautiful Effect
D5 Render adds support for NVIDIA DLSS 3, bringing a vastly improved real-time experience to architects, designers, interior designers and 3D artists.
Such professionals want to navigate scenes smoothly while editing, and demonstrate their creations to clients in the highest quality. Scenes can be incredibly detailed and complex, making it difficult to maintain high real-time viewport frame rates and present in original quality.
D5 is prized by many artists for its global illumination technology, called D5 GI, which delivers high-quality lighting and shading effects in real time, without sacrificing workflow efficiency.
D5 Render and DLSS 3 work brilliantly to create photorealistic imagery.
By integrating DLSS 3, which combines AI-powered DLSS Frame Generation and Super Resolution technologies, real-time viewport frame rates increase up to 3x, making creator experiences buttery smooth. This allows designers to deal with larger scenes, higher-quality models and textures — all in real time — while maintaining a smooth, interactive viewport.
NVIDIA Omniverse is a key component of the NVIDIA Studio platform and the future of collaborative 3D content creation.
A new monthly blog series, Into the Omniverse, showcases how artists, developers and enterprises can transform their creative workflows using the latest Omniverse advancements.
This month, 3D creators across industries are set to benefit from the pairing of Omniverse and the Adobe Substance 3D suite of creative tools.
“End of Summer,” created by the Adobe Substance 3D art and development team, built in Omniverse.
An upcoming update to the Omniverse Connector for Adobe Substance 3D Painter will dramatically increase flexibility for users, with new capabilities including an export feature using Universal Scene Description (OpenUSD), an open, extensible file framework enabling non-destructive workflows and collaboration in scene creation.
Find details in the blog and check in every month for more Omniverse news.
Your Last Worm-ing
NVIDIA 3D artist Daniel Barnes has a simple initial approach to his work: sketch until something seems cool enough to act on. While his piece Wormhole 00527 was no exception to this usual process, an emotional component made a significant impact on it.
“After the pandemic and various global events, I took even more interest in spaceships and escape pods,” said Barnes. “It was just an abstract form of escapism that really played on the idea of ‘get me out of here,’ which I think we all experienced at one point, being inside so much.”
Barnes imagined each blur one might pass by in Wormhole 00527 as an alternate star system — a place on the other side of the galaxy where things are really similar but more peaceful, he said. “An alternate Earth of sorts,” the artist added.
Sculpting on his tablet one night in the Nomad app, Barnes imported a primitive model into Autodesk Maya for further refinement. He retopologized the scene, converting high-resolution models into much smaller files that can be used for animation.
Modeling in Autodesk Maya.
“I’ve been creating in 3D for over a decade now, and GeForce RTX graphics cards have been able to power multiple displays smoothly and run my 3D software viewports at great speeds. Plus, rendering in real time on some projects is great for fast development.” — Daniel Barnes
Barnes then took a screenshot, further sketched out his modeling edits and made lighting decisions in Adobe Photoshop.
His GeForce RTX 4090 GPU gives him access to over 30 GPU-accelerated features for quickly, smoothly modifying and adjusting images. These features include blur gallery, object selection and perspective warp.
Back in Autodesk Maya, Barnes used the quad-draw tool — a streamlined, one-tool workflow for retopologizing meshes — to create geometry, adding break-in panels that would be advantageous for animating.
So this is what a wormhole looks like.
Barnes used Chaos V-Ray with Autodesk Maya’s Z-depth feature, which provides information about each object’s distance from the camera in its current view. Each pixel representing the object is evaluated for distance individually — meaning different pixels for the same object can have varying grayscale values. This made it far easier for Barnes to tweak depth of field and add motion-blur effects.
Example of Z-depth. Image courtesy of Chaos V-Ray with Autodesk Maya.
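The idea behind a Z-depth pass is easy to sketch outside any renderer: normalize each pixel’s camera distance into a grayscale value. A minimal illustration in NumPy (not V-Ray’s implementation, just the kind of map it produces for compositing):

```python
# Minimal sketch of a Z-depth pass: per-pixel camera distance mapped to
# grayscale. Not V-Ray's implementation -- just the idea behind the output.
import numpy as np

depth = np.array([[2.0, 5.0, 9.0],
                  [3.5, 7.0, 12.0]])    # camera distance per pixel (arbitrary units)

near, far = depth.min(), depth.max()
zdepth = (depth - near) / (far - near)  # 0.0 = nearest, 1.0 = farthest
grayscale = (zdepth * 255).astype(np.uint8)
print(grayscale)
# Downstream, a compositor reads this map to drive depth of field and motion blur.
```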
He also added a combination of lights and applied materials with ease. Deploying RTX-accelerated ray tracing and AI denoising with the default Autodesk Arnold renderer enabled smooth movement in the viewport, resulting in beautifully photorealistic renders.
The Z-depth feature made it easier to apply motion-blur effects.
He finished the project by compositing in Adobe After Effects, using GPU-accelerated features for faster rendering with NVIDIA CUDA technology.
3D artist Daniel Barnes.
When asked what his favorite creative tools are, Barnes didn’t hesitate. “Definitely my RTX cards and nice large displays!” he said.