What’s Up? Watts Down — More Science, Less Energy

People agree: accelerated computing is energy-efficient computing.

The National Energy Research Scientific Computing Center (NERSC), the U.S. Department of Energy’s lead facility for open science, measured results across four of its key high performance computing and AI applications.

They clocked how fast the applications ran and how much energy they consumed on CPU-only and GPU-accelerated nodes on Perlmutter, one of the world’s largest supercomputers using NVIDIA GPUs.

The results were clear. Accelerated with NVIDIA A100 Tensor Core GPUs, energy efficiency rose 5x on average. An application for weather forecasting logged gains of 9.8x.

GPUs Save Megawatts

On a server with four A100 GPUs, NERSC got up to 12x speedups over a dual-socket x86 server.

That means, at the same performance level, the GPU-accelerated system would consume 588 megawatt-hours less energy per month than a CPU-only system. Running the same workload on a four-way NVIDIA A100 cloud instance for a month, researchers could save more than $4 million compared to a CPU-only instance.
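For readers who want to reproduce the arithmetic, here’s a back-of-the-envelope sketch of the equal-throughput comparison. Only the 12x speedup comes from NERSC’s measurements; the per-node wattages below are illustrative assumptions, since the article doesn’t publish them.

    # Energy saved at equal throughput: a CPU fleet vs. one GPU node.
    # Only the 12x speedup is from the article; the wattages are assumptions.
    speedup = 12.0            # 4x A100 node vs. dual-socket x86 node
    cpu_node_kw = 0.7         # assumed average draw of one x86 server
    gpu_node_kw = 2.5         # assumed average draw of one 4x A100 server
    hours_per_month = 730

    # Matching one GPU node's throughput takes roughly `speedup` CPU nodes.
    saved_kwh = (speedup * cpu_node_kw - gpu_node_kw) * hours_per_month
    print(f"~{saved_kwh / 1000:.1f} MWh saved per month per GPU node")

Scaled across the full workload, the same logic yields the 588 megawatt-hour monthly figure NERSC reports.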

Measuring Real-World Applications

The results are significant because they’re based on measurements of real-world applications, not synthetic benchmarks.

The gains mean that the 8,000+ scientists using Perlmutter can tackle bigger challenges, opening the door to more breakthroughs.

Among the many use cases for the more than 7,100 A100 GPUs on Perlmutter, scientists are probing subatomic interactions to find new green energy sources.

Advancing Science at Every Scale

The applications NERSC tested span molecular dynamics, materials science and weather forecasting.

For example, MILC simulates the fundamental forces that hold particles together in an atom. It’s used to advance quantum computing, study dark matter and search for the origins of the universe.

BerkeleyGW helps simulate and predict optical properties of materials and nanostructures, a key step toward developing more efficient batteries and electronic devices.

NERSC apps get efficiency gains with accelerated computing.

EXAALT, which got an 8.5x efficiency gain on A100 GPUs, solves a fundamental challenge in molecular dynamics. It lets researchers simulate the equivalent of short videos of atomic movements rather than the sequences of snapshots other tools provide.

The fourth application in the tests, DeepCAM, is used to detect hurricanes and atmospheric rivers in climate data. It got a 9.8x gain in energy efficiency when accelerated with A100 GPUs.

The overall 5x energy-efficiency gain is based on a mix of HPC and AI applications.

Savings With Accelerated Computing

The NERSC results echo earlier calculations of the potential savings with accelerated computing. For example, in a separate analysis NVIDIA conducted, GPUs delivered 42x better energy efficiency on AI inference than CPUs.

That means switching all the CPU-only servers running AI worldwide to GPU-accelerated systems could save a whopping 10 trillion watt-hours of energy a year. That’s like saving the energy 1.4 million homes consume in a year.
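As a quick sanity check, dividing the total savings by the number of homes gives the implied annual consumption per household, which lands in a plausible range for household electricity use:

    # Implied per-home consumption (both figures from the article).
    total_wh_per_year = 10e12   # 10 trillion watt-hours
    homes = 1.4e6               # 1.4 million homes
    print(f"{total_wh_per_year / homes / 1000:,.0f} kWh per home per year")  # ~7,143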

Accelerating the Enterprise

You don’t have to be a scientist to get gains in energy efficiency with accelerated computing.

Pharmaceutical companies are using GPU-accelerated simulation and AI to speed the process of drug discovery. Carmakers like BMW Group are using it to model entire factories.

They’re among the growing ranks of enterprises at the forefront of what NVIDIA founder and CEO Jensen Huang calls an industrial HPC revolution, fueled by accelerated computing and AI.

Read More

NVIDIA Cambridge-1 AI Supercomputer Expands Reach to Researchers via the Cloud

Scientific researchers need massive computational resources that can support exploration wherever it happens. Whether they’re conducting groundbreaking pharmaceutical research, exploring alternative energy sources or discovering new ways to prevent financial fraud, accessible state-of-the-art AI computing resources are key to driving innovation. Cloud-based AI supercomputing, a new model of computing, can meet the challenges of generative AI and power the next wave of innovation.

Cambridge-1, a supercomputer NVIDIA launched in the U.K. during the pandemic, has powered discoveries from some of the country’s top healthcare researchers. The system is now becoming part of NVIDIA DGX Cloud to accelerate the pace of scientific innovation and discovery — across almost every industry.

As a cloud-based resource, it will broaden access to AI supercomputing for researchers in climate science, autonomous machines, worker safety and other areas, delivered with the simplicity and speed of the cloud and ideally located for U.K. and European access.

DGX Cloud is a multinode AI training service that makes it possible for any enterprise to access leading-edge supercomputing resources from a browser. The original Cambridge-1 infrastructure included 80 NVIDIA DGX systems; now it will join DGX Cloud, giving customers access to world-class infrastructure.

History of Healthcare Insights

Academia, startups and the U.K.’s large pharma ecosystem used the Cambridge-1 supercomputing resource to accelerate research and design new approaches to drug discovery, genomics and medical imaging with generative AI in some of the following ways:

  • InstaDeep, in collaboration with NVIDIA and the Technical University of Munich Lab, developed a 2.5 billion-parameter LLM for genomics on Cambridge-1. This project aimed to create a more accurate model for predicting the properties of DNA sequences.
  • King’s College London used Cambridge-1 to create 100,000 synthetic brain images — and made them available for free to healthcare researchers. Using the open-source AI imaging platform MONAI, the researchers at King’s created realistic, high-resolution 3D images of human brains, training in weeks versus months.
  • Oxford Nanopore used Cambridge-1 to quickly develop highly accurate, efficient models for base calling in DNA sequencing. The company also used the supercomputer to support inference for the ORG.one project, which aims to enable DNA sequencing of critically endangered species.
  • Peptone, in collaboration with a pharma partner, used Cambridge-1 to run physics-based simulations to evaluate the effect of mutations on protein dynamics with the goal of better understanding why specific antibodies work efficiently. This research could improve antibody development and biologics discovery.
  • Relation Therapeutics developed a large language model that reads DNA to better understand genes, a key step toward creating new medicines. The research takes us a step closer to understanding how genes are controlled in certain diseases.

Read More

Beyond Fast: GeForce RTX 4060 GPU Family Gives Creators More Options to Accelerate Workflows, Starting at $299

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

The GeForce RTX 4060 family will be available starting next week, bringing massive creator benefits to the popular 60-class GPUs.

The latest GPUs in the 40 Series come backed by NVIDIA Studio technologies, including hardware acceleration for 3D, video and AI workflows; optimizations for RTX hardware in over 110 of the most popular creative apps; and exclusive Studio apps like Omniverse, Broadcast and Canvas.

Real-time ray-tracing renderer D5 Render introduced support for NVIDIA DLSS 3 technology, enabling super smooth real-time rendering experiences, so creators can work with larger scenes without sacrificing speed or interactivity.

Plus, the new Into the Omniverse series highlights the latest advancements to NVIDIA Omniverse, a platform furthering the evolution of the metaverse with the OpenUSD framework. The series showcases how artists, developers and enterprises can use the open development platform to transform their 3D workflows. The first installment highlights an update coming soon to the Adobe Substance 3D Painter Connector.

In addition, NVIDIA 3D artist Daniel Barnes returns this week In the NVIDIA Studio to share his mesmerizing, whimsical animation, Wormhole 00527.

Beyond Fast

The GeForce RTX 4060 family is powered by the ultra-efficient NVIDIA Ada Lovelace architecture with fourth-generation Tensor Cores for AI content creation, third-generation RT Cores and compatibility with DLSS 3 for ultra-fast 3D rendering, as well as the eighth-generation NVIDIA encoder (NVENC), now with support for AV1.

The GeForce RTX 4060 Ti GPU.

3D modelers can build and edit realistic 3D models in real time, up to 45% faster than the previous generation, thanks to third-generation RT Cores, DLSS 3 and the NVIDIA Omniverse platform.

Tested on GeForce RTX 4060 and 3060 GPUs. Maya with Arnold 2022 (7.1.1) measures render time of NVIDIA SOL 3D model. DaVinci Resolve measures FPS applying Magic Mask effect “Faster” quality setting to 4K resolution. ON1 Resize AI measures time required to apply effect to batch of 10 photos. Time measurement is normalized for easier comparison across tests.

Video editors specializing in Adobe Premiere Pro, Blackmagic Design’s DaVinci Resolve and more have at their disposal a variety of AI-powered effects, such as auto-reframe, magic mask and depth estimation. Fourth-generation Tensor Cores seamlessly hyper-accelerate these effects, so creators can stay in their flow states.

Broadcasters can jump into next-generation livestreaming with the eighth-generation NVENC with support for AV1. The new encoder is 40% more efficient, making livestreams appear as if there were a 40% increase in bitrate — a big boost in image quality that enables 4K streaming on apps like OBS Studio and platforms such as YouTube and Discord.

10 Mbps with default OBS streaming settings.

NVENC boasts the most efficient hardware encoding available, providing significantly better quality than other GPUs. At the same bitrate, images look better and sharper, with fewer artifacts, as in the example above.

Encode quality comparison, measured with BD-BR.
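Creators who want to try AV1 can already do so with a recent ffmpeg build, which exposes the Ada-generation encoder as av1_nvenc. Here’s a minimal sketch; the file names are placeholders, and flag support varies by build, so check the output of ffmpeg -encoders first.

    # Encode a 4K capture to AV1 at 10 Mbps on a GeForce RTX 40 Series GPU.
    # Requires an ffmpeg build compiled with NVENC support.
    ffmpeg -i gameplay_4k.mp4 -c:v av1_nvenc -preset p5 -b:v 10M -c:a copy gameplay_av1.mp4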

Creators are embracing AI en masse. DLSS 3 multiplies frame rates in popular 3D apps. ON1 Resize AI, software that enables high-quality photo enlargement, is sped up 24% compared with last-generation hardware. DaVinci Resolve’s AI Magic Mask feature saves video editors considerable time automating the highly manual process of rotoscoping, carried out 20% faster than the previous generation.

The GeForce RTX 4060 Ti (8GB) will be available starting Wednesday, May 24, at $399. The GeForce RTX 4060 Ti (16GB) will be available in July, starting at $499. GeForce RTX 4060 will also be available in July, starting at $299.

Visit the Studio Shop for GeForce RTX 4060-powered NVIDIA Studio systems when available, and explore the range of high-performance Studio products.

D5 Render, DLSS 3 Combine to Beautiful Effect

D5 Render adds support for NVIDIA DLSS 3, bringing a vastly improved real-time experience to architects, designers, interior designers and 3D artists.

Such professionals want to navigate scenes smoothly while editing, and demonstrate their creations to clients in the highest quality. Scenes can be incredibly detailed and complex, making it difficult to maintain high real-time viewport frame rates and present in original quality.

D5 is prized by many artists for its global illumination technology, called D5 GI, which delivers high-quality lighting and shading effects in real time, without sacrificing workflow efficiency.

D5 Render and DLSS 3 work brilliantly to create photorealistic imagery.

By integrating DLSS 3, which combines AI-powered DLSS Frame Generation and Super Resolution technologies, real-time viewport frame rates increase up to 3x, making creator experiences buttery smooth. This allows designers to deal with larger scenes, higher-quality models and textures — all in real time — while maintaining a smooth, interactive viewport.

Learn more about the update.

Venture ‘Into the Omniverse’

NVIDIA Omniverse is a key component of the NVIDIA Studio platform and the future of collaborative 3D content creation.

A new monthly blog series, Into the Omniverse, showcases how artists, developers and enterprises can transform their creative workflows using the latest Omniverse advancements.

This month, 3D creators across industries are set to benefit from the pairing of Omniverse and the Adobe Substance 3D suite of creative tools.

“End of Summer,” created by the Adobe Substance 3D art and development team, built in Omniverse.

An upcoming update to the Omniverse Connector for Adobe Substance 3D Painter will dramatically increase flexibility for users, with new capabilities including an export feature using Universal Scene Description (OpenUSD), an open, extensible file framework enabling non-destructive workflows and collaboration in scene creation.

Find details in the blog and check in every month for more Omniverse news.

Your Last Worm-ing

NVIDIA 3D artist Daniel Barnes has a simple initial approach to his work: sketch until something seems cool enough to act on. While his piece Wormhole 00527 was no exception to this usual process, an emotional component made a significant impact on it.

“After the pandemic and various global events, I took even more interest in spaceships and escape pods,” said Barnes. “It was just an abstract form of escapism that really played on the idea of ‘get me out of here,’ which I think we all experienced at one point, being inside so much.”

Barnes imagined each blur one might pass by in Wormhole 00527 as an alternate star system — a place on the other side of the galaxy where things are really similar but more peaceful, he said. “An alternate Earth of sorts,” the artist added.

Sculpting on his tablet one night in the Nomad app, Barnes imported a primitive model into Autodesk Maya for further refinement. He retopologized the scene, converting high-resolution models into much smaller files that can be used for animation.

Modeling in Autodesk Maya.

“I’ve been creating in 3D for over a decade now, and GeForce RTX graphics cards have been able to power multiple displays smoothly and run my 3D software viewports at great speeds. Plus, rendering in real time on some projects is great for fast development.” — Daniel Barnes

Barnes then took a screenshot, further sketched out his modeling edits and made lighting decisions in Adobe Photoshop.

His GeForce RTX 4090 GPU gives him access to over 30 GPU-accelerated features for quickly, smoothly modifying and adjusting images. These features include blur gallery, object selection and perspective warp.

Back in Autodesk Maya, Barnes used the quad-draw tool — a streamlined, one-tool workflow for retopologizing meshes — to create geometry, adding break-in panels that would be advantageous for animating.

So this is what a wormhole looks like.

Barnes used Chaos V-Ray with Autodesk Maya’s Z-depth feature, which provides information about each object’s distance from the camera in its current view. Each pixel representing the object is evaluated for distance individually — meaning different pixels for the same object can have varying grayscale values. This made it far easier for Barnes to tweak depth of field and add motion-blur effects.

Example of Z-depth. Image courtesy of Chaos V-Ray with Autodesk Maya.
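Conceptually, a Z-depth pass is just a per-pixel distance map remapped to grayscale. The short NumPy sketch below illustrates the idea only; renderers like V-Ray compute this internally, and the convention shown (closer is brighter) is one common choice.

    import numpy as np

    def zdepth_to_grayscale(depth, near, far):
        # Clamp distances to the near/far range, then normalize to [0, 1].
        d = np.clip(depth, near, far)
        t = (d - near) / (far - near)
        # Invert so geometry at the near plane reads white (255), far reads black (0).
        return ((1.0 - t) * 255).astype(np.uint8)

    # A tiny 2x2 "image" of camera distances in scene units.
    depth = np.array([[1.0, 5.0], [10.0, 20.0]])
    print(zdepth_to_grayscale(depth, near=1.0, far=20.0))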

He also added a combination of lights and applied materials with ease. Deploying RTX-accelerated ray tracing and AI denoising with the default Autodesk Arnold renderer enabled smooth movement in the viewport, resulting in beautifully photorealistic renders.

The Z-depth feature made it easier to apply motion-blur effects.

He finished the project by compositing in Adobe After Effects, using GPU-accelerated features for faster rendering with NVIDIA CUDA technology.

3D artist Daniel Barnes.

When asked what his favorite creative tools are, Barnes didn’t hesitate. “Definitely my RTX cards and nice large displays!” he said.

Check out Barnes’ portfolio on Instagram.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter. 

Get started with NVIDIA Omniverse by downloading the standard license free, or learn how Omniverse Enterprise can connect your team. Developers can get started with Omniverse resources. Stay up to date on the platform by subscribing to the newsletter, and follow NVIDIA Omniverse on Instagram, Medium and Twitter.

For more, join the Omniverse community and check out the Omniverse forums, Discord server, Twitch and YouTube channels.

Read More

First Xbox Title Joins GeForce NOW

Get ready for action — the first Xbox game title is now streaming from GeForce GPUs in the cloud directly to GeForce NOW members, with more to come later this month.

Gears 5 comes to the service this GFN Thursday. Keep reading to find out what other entries from the Xbox library will be streaming on GeForce NOW soon.

Also, time’s almost up on an exclusive discount for six-month GeForce NOW Priority memberships. Sign up today to save 40% before the offer ends on Sunday, May 21.

All Geared Up

Gears 5 on GeForce NOW
The gang’s all here.

NVIDIA and Microsoft have been working together to bring the first Xbox PC titles to the GeForce NOW library. With their gaming fueled by GeForce GPU servers in the cloud, members can access the best of Xbox Game Studios and Bethesda titles across nearly any device, including underpowered PCs, Macs, iOS and Android mobile devices, NVIDIA SHIELD TV, supported smart TVs and more.

Gears 5 from The Coalition is the first PC title from Xbox Game Studios to hit GeForce NOW. The latest entry in the Gears saga includes an acclaimed campaign playable solo or cooperatively, plus a variety of PvE and PvP modes to team up and battle in.

More Microsoft titles will follow shortly, starting with Deathloop, Grounded and Pentiment on Thursday, May 25.

Members will be able to stream these Xbox PC hits purchased through Steam on PCs, macOS devices, Chromebooks, smartphones and other devices. Support for Microsoft Store will become available in the coming months. Learn more about Xbox PC game support on GeForce NOW.

GeForce NOW Priority members can skip the wait and play Gears 5 or one of the other 1,600+ supported titles at 1080p 60 frames per second. Or go Ultimate for an upgraded experience, playing at up to 4K 120 fps for gorgeous graphics, or up to 240 fps for ultra-low latency that gives the competitive edge.

Microsoft on GeForce NOW
Like peanut butter and jelly.

GeForce NOW members will see more PC games from Xbox added regularly and can keep up with the latest news and release dates through GFN Thursday updates.

Green Light Special

The latest GeForce NOW app updates are rolling out now. Version 2.0.52 brings a few fit-and-finish updates for members, including a new way to easily catch game discounts, content and more.

Wall of Games GeForce NOW
Look for the latest deals, downloadable content and more in the latest GeForce NOW app update.

Promotional tags can be found on featured games throughout the app on PC and macOS. The tags are curated to highlight the most compelling offers available on the 1,600+ GeForce NOW-supported games. Keep an eye out for these promotional tags, which showcase new downloadable content, discounts, free games and more.

The update also includes in-app search improvements, surround-sound support in the browser experience on Windows and macOS, updated in-game button prompts for members using DualShock 4 and DualSense controllers, and more. Check out the in-app release highlights for more info.

Play for Today

Outlast Trials on GeForce NOW
They say things aren’t so scary when you’re with friends. ‘The Outlast Trials’ aims to prove them wrong.

Don’t get spooked in The Outlast Trials, newly supported this week on GeForce NOW. Go it alone or team up in this multiplayer edition of the survival horror franchise. Avoid the monstrosities waiting in the Murkoff experiments while using new tools to aid stealth, create opportunities to flee, slow enemies and more.

With support for more games every week, there’s always a new adventure around the corner. Here are this week’s additions:

  • Tin Hearts (New release on Steam, May 16)
  • The Outlast Trials (New release on Steam, May 18)
  • Gears 5 (Steam)

With the weekend kicking off, what are you gearing up to play? Let us know on Twitter or in the comments below.

Read More

Into the Omniverse: Adobe Substance 3D, NVIDIA Omniverse Enhance Creative Freedom Within 3D Workflows

Editor’s note: This is the first installment of our monthly Into the Omniverse series, which highlights the latest advancements to NVIDIA Omniverse furthering the evolution of the metaverse with the OpenUSD framework, and showcases how artists, developers and enterprises can transform their workflows with the platform.

An update to the Omniverse Connector for Adobe Substance 3D Painter will save 3D creators across industries significant time and effort. New capabilities include an export feature using Universal Scene Description (OpenUSD), an open, extensible file framework enabling non-destructive workflows and collaboration in scene creation.

Benjamin Samar, technical director of video production company Elara Systems, is using the Adobe Substance 3D Painter Connector to provide a “uniquely human approach to an otherwise clinical discussion,” he said.

Samar and his team tapped the Connector to create an animated public-awareness video for sickle cell disease. The video aims to help adolescents experiencing sickle cell disease understand the importance of quickly telling an adult or a medical professional if they’re experiencing symptoms.

According to Samar, the Adobe Substance 3D Painter Connector for Omniverse was especially useful for setting up all of the video’s environments and characters — before bringing them into the USD Composer app for scene composition and real-time RTX rendering of the high-quality visuals.

“By using this Connector, materials were automatically imported, converted to Material Definition Language and ready to go inside USD Composer with a single click,” he said.

The Adobe Substance 3D art and development team itself uses Omniverse in its workflows. Its End of Summer project fostered collaboration and creativity among the Adobe artists in Omniverse, and resulted in stunningly rich and realistic visuals.

Learn more about how they used Adobe Substance 3D tools with Unreal Engine 5 and Omniverse in this on-demand NVIDIA GTC session, and get an exclusive behind-the-scenes look at Adobe’s NVIDIA Studio-accelerated workflows in the making of this project.

Plus, technical artists are using Adobe Substance 3D and Omniverse to create scratches and other defects on 3D objects to train vision AI models.

 

Adobe and Omniverse workflows offer creators improved efficiency and flexibility — whether they’re training AI models, animating an educational video to improve medical knowledge or bringing a warm summer scene to life.

And soon, the next release of the Adobe Substance 3D Painter Connector for Omniverse will further streamline their creative processes.

Connecting the Dots for a More Seamless Workflow

Version 203.0 of the Adobe Substance 3D Painter Connector for Omniverse, coming mid-June, will offer new capabilities that enable more seamless workflows.

Substance 3D Painter’s new OpenUSD export feature, compatible with version 8.3.0 of the app and above, allows users to export textures using any built-in or user-defined preset to dynamically build OmniPBR shaders — programs that calculate the appropriate levels of light, darkness and color during 3D rendering — in USD Composer.
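For the curious, here’s roughly what authoring an OmniPBR material looks like through USD’s Python API. This is a hand-written sketch, not the Connector’s actual export code; the prim paths, texture file and input name are illustrative.

    from pxr import Usd, UsdShade, Sdf

    stage = Usd.Stage.CreateNew("material_example.usda")
    material = UsdShade.Material.Define(stage, "/World/Looks/PaintedMetal")
    shader = UsdShade.Shader.Define(stage, "/World/Looks/PaintedMetal/Shader")

    # OmniPBR ships as an MDL shader, so it's referenced as an MDL source asset.
    shader.SetSourceAsset("OmniPBR.mdl", "mdl")
    shader.SetSourceAssetSubIdentifier("OmniPBR", "mdl")

    # Wire an exported texture into the shader (input name is illustrative).
    shader.CreateInput("diffuse_texture", Sdf.ValueTypeNames.Asset).Set("textures/base_color.png")

    # Expose the shader through the material's MDL surface output.
    material.CreateSurfaceOutput("mdl").ConnectToSource(shader.ConnectableAPI(), "out")
    stage.Save()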

To further speed and ease workflows, the Connector update will remove “rotating texture folders,” uniquely generated temporary directories that textures were exported to with each brush stroke.

With each change the artist makes, textures will now save over the same path, greatly speeding the process for locally saved projects.

Get Plugged Into the Omniverse

Discover the latest in AI, graphics and more by watching NVIDIA founder and CEO Jensen Huang’s COMPUTEX keynote on Sunday, May 28, at 8 p.m. PT.

#SetTheScene for your Adobe and Omniverse workflow by joining the latest community challenge. Share your best 3D environments on social media with the #SetTheScene hashtag for a chance to be featured on channels for NVIDIA Omniverse (Twitter, LinkedIn, Instagram) and NVIDIA Studio (Twitter, Facebook, Instagram).

Get started with NVIDIA Omniverse by downloading the standard license free, or learn how Omniverse Enterprise can connect your team. Developers can get started with Omniverse resources.

Stay up to date on the platform by subscribing to the newsletter, and follow NVIDIA Omniverse on Instagram, Medium and Twitter. For more, join the Omniverse community and check out the Omniverse forums, Discord server, Twitch and YouTube channels.

Featured image courtesy of Adobe Substance 3D art and development team.

Read More

Mammoth Mission: How Colossal Biosciences Aims to ‘De-Extinct’ the Woolly Mammoth

Ten thousand years after the last woolly mammoths vanished with the last Ice Age, a team of computational biologists is on a mission to bring them back within five years.

Led by synthetic biology pioneer George Church, Colossal Biosciences is also seeking to return the dodo bird and Tasmanian tiger, as well as help save current-day endangered species.

“The woolly mammoth is a very iconic species to bring back,” said Eriona Hysolli, head of biological sciences at Colossal Biosciences, which is based in Austin, Texas. “In addition, we see that pipeline as a proxy for conservation, given that elephants are endangered and much of this work directly benefits them.”

There’s plenty of work to be done on endangered species, as well.

Critically endangered, the African forest elephant has declined by nearly 90% in the past three decades, according to Colossal. Poaching took more than 100,000 African elephants between 2010 and 2012 alone, according to the company.

“We might lose these elephant species in our lifetime if their numbers continue to dwindle,” said Hysolli.

Humans caused the extinction of many species, but computational biologists are now trying to bring them back with CRISPR and other gene-editing technologies, leaps in AI, and bioinformatics tools and technology, such as the NVIDIA Parabricks software suite for genomic analysis.

To bring back a woolly mammoth, scientists at Colossal start with mammoth and elephant genome sequencing and identify what makes them similar and different. Then they use Asian elephant cells to engineer the mammoth changes responsible for cold-adaptation traits, transferring the nuclei of edited cells into enucleated elephant eggs before implanting them into a healthy Asian elephant surrogate.

Tech Advances Drive Genomics Leaps 

It took enormous effort over two decades, not to mention $3 billion in funding, to first sequence the human genome. But that’s now been reduced to mere hours and under $200 per whole genome, thanks to the transformative impact of AI and accelerated computing.

It’s a story well known to Colossal co-founder Church. The Harvard Medical School professor and co-founder of roughly 50 biotech startups has been at the forefront of genetics research for decades.

“There’s been about a 20 millionfold reduction in price, and a similar improvement in quality in a little over a decade, or a decade and a half,” Church said in a recent interview on the TWiT podcast.

Research to Complete Reference Genome Puzzle

Colossal’s work to build a reference genome of the woolly mammoth is similar to trying to complete a puzzle.

DNA sequences from bone samples are assembled in silico. But degradation of the DNA over time means that not all the pieces are there. The gaps to be filled can be guided by the genome of the Asian elephant, the mammoth’s closest living relative.

Once a rough representative genome sequence is configured, secondary analysis takes place, which is where GPU acceleration with Parabricks comes in.

The suite of bioinformatic tools in Parabricks can provide more than 100x acceleration of industry-standard tools used for alignment and variant calling. In the alignment step, the short fragments, or reads, from the sequenced sample are aligned in the correct order, using the reference genome, which in this case is the genome of the Asian elephant. Then, in the variant-calling step, Parabricks tools identify the variants, or differences, between the sequenced whole genome mammoth samples and the Asian elephant reference.
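In Parabricks terms, those two steps map onto a pair of pbrun commands. A sketch of the idea follows; the file names are placeholders and exact flags vary by Parabricks release, so this shouldn’t be read as Colossal’s actual pipeline.

    # Alignment: short reads -> sorted BAM against the elephant reference.
    pbrun fq2bam --ref asian_elephant.fa \
        --in-fq mammoth_R1.fastq.gz mammoth_R2.fastq.gz \
        --out-bam mammoth_aligned.bam

    # Variant calling: differences between the mammoth sample and the reference.
    pbrun haplotypecaller --ref asian_elephant.fa \
        --in-bam mammoth_aligned.bam \
        --out-variants mammoth_variants.vcf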

In September, Colossal Biosciences spun out Form Bio, which offers a breakthrough computational life sciences platform, to aid its efforts and commercialize scientific innovations. Form Bio is a member of NVIDIA Inception, a program that provides companies with technology support and AI platforms guidance.

Parabricks includes some of the same tools as the open-source ones that Form Bio was using, making it easy to replace them with NVIDIA GPU-accelerated versions of those tools, said Brandi Cantarel, vice president of bioinformatics at Form Bio.

Compared with the open-source software on CPUs, Parabricks running on GPUs enables Colossal to complete its end-to-end sequence analysis 12x faster and at one-quarter the cost, accelerating the research.

“We’re getting very comparable or exactly the same outputs, and it was faster and cheaper,” said Cantarel.

Analysis Targeting Cold Tolerance for Woolly Mammoth 

A lot is at stake in the sequencing and analysis.

The Form Bio platform hosts tools that can assess whether researchers have made the right CRISPR edits and help analyze whether cells were edited successfully.

“Can we identify what are the targets that we need to actually go after and edit and engineer? The answer is absolutely yes, and we’ve gotten very good at selecting impactful genetic differences,” said Hysolli.

Another factor to consider is human contamination of samples. So for each sample researchers examine, they must run an analysis against human cell references to discard those contaminants.

Scientists have gathered multiple specimens of woolly mammoths over the years, and the best are tooth or bone samples found in permafrost. “We benefit from the fact that woolly mammoths were well-preserved because they lived in an Arctic environment,” said Hysolli.

An Asian elephant is 99.6% genetically identical to a mammoth, according to Ben Lamm, Colossal CEO and co-founder.

“We’re just targeting about 65 genes that represent the cold tolerance, the core phenotypes that we’re looking for,” he recently said on stage at South by Southwest in Austin.

Benefits to Biodiversity, Conservation and Humanity

Colossal aims to create reference genomes for species, like the mammoth, that represent broad population samples. They’re looking at mammoths from different regions of the globe and periods in time. And it’s necessary to parse the biodiversity and do more sequencing, according to researchers at the company.

“As we lose biodiversity, it’s important to bring back or restore species and their ecosystems, which in turn positively impacts ecology and supports conservation,” said Hysolli.

Population genetics is important. Researchers need to understand how different and similar these animals are to each other so that in the future they can create thriving populations, she said.

That ensures better chances of survival. “We need to make sure — that’s what makes a thriving population when you rewild,” said Hysolli, referring to when the team introduces the species back into an Arctic habitat.

It’s also been discovered that elephants are more resistant to cancer — so researchers are looking at the genetic factors and how that might translate for humans.

“This work does not only benefit Colossal’s de-extinction efforts and conservation, but these technologies we build can be applied to bettering human health and treating diseases,” said Hysolli.

Learn more about NVIDIA Parabricks for accelerated genomic sequencing analysis.

Read More

Chip Manufacturing ‘Ideal Application’ for AI, NVIDIA CEO Says

Chip manufacturing is an “ideal application” for NVIDIA accelerated and AI computing, NVIDIA founder and CEO Jensen Huang said Tuesday.

Detailing how the latest advancements in computing are accelerating “the world’s most important industry,” Huang spoke at the ITF World 2023 semiconductor conference in Antwerp, Belgium.

Huang delivered his remarks via video to a gathering of leaders from across the semiconductor, technology and communications industries.

“I am thrilled to see NVIDIA accelerated computing and AI in service of the world’s chipmaking industry,” Huang said as he detailed how advancements in accelerated computing, AI and semiconductor manufacturing intersect.

AI, Accelerated Computing Step Up

The exponential performance increase of the CPU has been the governing dynamic of the technology industry for nearly four decades, Huang said.

But over the past few years CPU design has matured, he said. The rate at which semiconductors become more powerful and efficient is slowing, even as demand for computing capability soars.

“As a result, global demand for cloud computing is causing data center power consumption to skyrocket,” Huang said.

Huang said that striving for net zero while supporting the “invaluable benefits” of more computing power requires a new approach.

The challenge is a natural fit for NVIDIA, which pioneered accelerated computing, coupling the parallel processing capabilities of GPUs with CPUs.

This acceleration, in turn, sparked the AI revolution. A decade ago, deep learning researchers such as Alex Krizhevsky, Ilya Sutskever and Geoffrey Hinton discovered that GPUs could be cost-effective supercomputers.

Since then, NVIDIA has reinvented its computing stack for deep learning, opening up “multitrillion-dollar opportunities in robotics, autonomous vehicles and manufacturing,” Huang said.

By offloading and accelerating compute-intensive algorithms, NVIDIA routinely speeds up applications by 10-100x while reducing power and cost by an order of magnitude, Huang explained.

Together, AI and accelerated computing are transforming the technology industry. “We are experiencing two simultaneous platform transitions — accelerated computing and generative AI,” Huang said.

AI, Accelerated Computing Come to Chip Manufacturing

Huang explained that advanced chip manufacturing requires over 1,000 steps, producing features the size of a biomolecule. Each step must be nearly perfect to yield functional output.

“Sophisticated computational sciences are performed at every stage to compute the features to be patterned and to do defect detection for in-line process control,” Huang said. “Chip manufacturing is an ideal application for NVIDIA accelerated and AI computing.”

Huang outlined several examples of how NVIDIA GPUs are becoming increasingly integral to chip manufacturing.

Companies like D2S, IMS Nanofabrication, and NuFlare build mask writers — machines that create photomasks, stencils that transfer patterns onto wafers — using electron beams. NVIDIA GPUs accelerate the computationally demanding tasks of pattern rendering and mask process correction for these mask writers.

Semiconductor manufacturer TSMC and equipment providers KLA and Lasertech use extreme ultraviolet light, known as EUV, and deep ultraviolet light, or DUV, for mask inspection. NVIDIA GPUs play a crucial role here, too, in processing classical physics modeling and deep learning to generate synthetic reference images and detect defects.

KLA, Applied Materials, and Hitachi High-Tech use NVIDIA GPUs in their e-beam and optical wafer inspection and review systems.

And in March, NVIDIA announced that it is working with TSMC, ASML and Synopsys to accelerate computational lithography.

Computational lithography simulates Maxwell’s equations of light behavior passing through optics and interacting with photoresists, Huang explained.

Computational lithography is the largest computational workload in chip design and manufacturing, consuming tens of billions of CPU hours annually. Massive data centers run 24/7 to create reticles for new chips.

Introduced in March, NVIDIA cuLitho is a software library with optimized tools and algorithms for GPU-accelerated computational lithography.

“We have already accelerated the processing by 50 times,” Huang said. “Tens of thousands of CPU servers can be replaced by a few hundred NVIDIA DGX systems, reducing power and cost by an order of magnitude.”

The savings will reduce carbon emissions or enable new algorithms to push beyond 2 nanometers, Huang said.

What’s Next?

What’s the next wave of AI? Huang described a new kind of AI — “embodied AI,” or intelligent systems that can understand, reason about and interact with the physical world.

He said examples include robotics, autonomous vehicles and even chatbots that are smarter because they understand the physical world.

Huang offered his audience a look at NVIDIA VIMA, a multimodal embodied AI. VIMA, Huang said, can perform tasks from visual text prompts, such as “rearranging objects to match this scene.”

It can learn concepts and act accordingly, such as “This is a widget,” “That’s a thing” and then “Put this widget in that thing.” It can also learn from demonstrations and stay within specified boundaries, Huang said.

VIMA runs on NVIDIA AI, and its digital twin runs in NVIDIA Omniverse, a 3D development and simulation platform. Huang said that physics-informed AI could learn to emulate physics and make predictions that obey physical laws.

Researchers are building systems that mesh information from real and virtual worlds on a vast scale.

NVIDIA is building a digital twin of our planet, called Earth-2, which will first predict the weather, then long-range weather, and eventually climate. NVIDIA’s Earth-2 team has created FourCastNet, a physics-AI model that emulates global weather patterns 50-100,000x faster.

FourCastNet runs on NVIDIA AI, and the Earth-2 digital twin is built in NVIDIA Omniverse.

Such systems promise to address the greatest challenges of our time, such as the need for cheap, clean energy.

For example, researchers at the U.K.’s Atomic Energy Authority and the University of Manchester are creating a digital twin of their fusion reactor, using physics-AI to emulate plasma physics and robotics to control the reactions and sustain the burning plasma.

Huang said scientists could explore hypotheses by testing them in the digital twin before activating the physical reactor, improving energy yield and predictive maintenance and reducing downtime. “The reactor plasma physics-AI runs on NVIDIA AI, and its digital twin runs in NVIDIA Omniverse,” Huang said.

Such systems hold promise for further advancements in the semiconductor industry. “I look forward to physics-AI, robotics and Omniverse-based digital twins helping to advance the future of chip manufacturing,” Huang said.

Read More

Startup’s AI Slashes Paperwork for Doctors Across Africa

As a medical doctor in Nigeria, Tobi Olatunji knows the stress of practicing in Africa’s busy hospitals. As a machine-learning scientist, he has a prescription for it.

“I worked at one of West Africa’s largest hospitals, where I would routinely see more than 30 patients a day — it’s a very hard job,” said Olatunji.

The need to write detailed patient notes and fill out forms makes it even harder. Paper records slowed the pace of medical research, too.

In his first years of practice, Olatunji imagined a program to plow through the mounds of paperwork, freeing doctors to help more patients.

It’s been a journey, but that software is available today from his company, Intron Health, a member of the NVIDIA Inception program, which nurtures cutting-edge startups.

A Side Trip in Tech

With encouragement from med school mentors, Olatunji got a master’s degree in medical informatics from the University of San Francisco and another in computer science at Georgia Tech. He started working as a machine-learning scientist in the U.S. by day and writing code on nights and weekends to help digitize Africa’s hospitals.

A pilot test during the pandemic hit a snag.

The first few doctors to use the code took 45 minutes to finish their patient notes. Feeling awkward in front of a keyboard, some health workers said they preferred pen and paper.

“We made a hard decision to invest in natural language processing and speech recognition,” he said. It’s technology he was already familiar with in his day job.

Building AI Models

“The combination of medical terminology and thick African accents produced horrible results with most existing speech-to-text software, so we knew there would be no shortcut to training our own models,” he said.

Tobi Olatunji, CEO of Intron Health

The Intron team evaluated several commercial and open-source speech recognition frameworks and large language models before choosing to build with NVIDIA NeMo, a software framework for building speech and generative AI models. The resulting models were trained on NVIDIA GPUs in the cloud.

“We initially tried to train with CPUs as the cheapest option, but it took forever, so we started with a single GPU and eventually grew to using several of them in the cloud,” he said.

The resulting Transcribe app captures doctors’ dictated messages with more than 92% accuracy across more than 200 African accents. It slashes the time they spend on paperwork by 6x on average, according to an ongoing study Intron is conducting across hospitals in four African countries.

“Even the doctor with the fastest typing skills in the study got a 40% speedup,” he said of the software now in use at several hospitals across Africa.
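Intron’s accent-tuned checkpoints aren’t public, but inference with a NeMo ASR model generally looks like the sketch below; the stock English Conformer checkpoint and the audio file name are stand-ins, purely for illustration.

    import nemo.collections.asr as nemo_asr

    # Load a pretrained checkpoint (a stock English model as a stand-in
    # for Intron's private, accent-tuned models).
    model = nemo_asr.models.ASRModel.from_pretrained("stt_en_conformer_ctc_large")

    # Transcribe a dictated clinical note (file path is a placeholder).
    transcripts = model.transcribe(["dictated_note.wav"])
    print(transcripts[0])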

Listening to Africa’s Voices

Olatunji knew his models needed high-quality audio data. So, the company created an app to capture sound bites of medical terms spoken in different accents.

To date, the app has gathered more than a million clips from more than 7,000 people across 24 countries, including 13 African nations. It’s one of the largest datasets of its type, parts of which have been released as open source to support African speech research.

Today, Intron refreshes its models every other month as more data comes in.

Nurturing Diversity in Medtech

Very little research exists on speech recognition for African accents in a clinical setting. So, working with Africa’s tech communities like DSN, Masakhane and Zindi, Intron launched AfriSpeech-200, a developer challenge to kickstart research using its data.

Similarly, for all its sophistication, medtech lags in diversity and inclusion, so Olatunji recently launched an effort that addresses that issue, too.

Bio-RAMP Lab is a global community of minority researchers working on problems they care about at the intersection of AI and healthcare. The group already has a half dozen papers under review at major conferences.

Olatunji presents his ideas at NVIDIA’s Santa Clara campus in a meeting kicking off an initiative to make AI accessible for all.

“For seven years, I was the only Black person on every team I worked on,” he said. “There were no Black scientists or managers, even in my job interviews.”

Meanwhile, Intron is even helping hospitals in Africa find creative ways to acquire the hardware they need. It’s another challenge on the way to opening up huge opportunities.

“Once healthcare data gets digitized, you unlock a whole new world for research into areas like predictive models that can be early warning systems for epidemics — we can’t do it without data,” Olatunji said.

Watch a masterclass (starting at 20:30) with Olatunji, HuggingFace and NVIDIA on AI for speech recognition.

Read More

Time to Prioritize: Upgrade to Priority at 40% Off This GFN Thursday

Make gaming a priority this GFN Thursday — time’s running out to upgrade to a GeForce NOW Priority six-month membership at 40% off the normal price. Find out how new Priority members are using the cloud to get their game on.

Plus, the week brings updates for some of the hottest games in the GeForce NOW library, and four more titles join the list.

GeForce NOW RTX 4080 SuperPODs are now live for Ultimate members in Atlanta, where the gamers game. Follow along with the server rollout, and upgrade today for the Ultimate cloud gaming experience.

Priority Check

Through Sunday, May 21, save 40% on a six-month Priority membership for $29.99, normally $49.99.

Priority memberships are perfect for those looking to try GeForce NOW or lock in a lower price for a half-year. Priority members get priority access to GeForce gaming servers, meaning shorter wait times than free members.

Members who claimed this offer in its first week alone played over 1,000 different titles in the GeForce NOW library, for 30,000+ streamed hours. That means these Priority members skipped the line by more than 500 hours.

They also played the best of PC gaming across multiple devices — PCs, Macs, mobile devices and smart TVs, plus new categories of devices made possible by the cloud, like gaming Chromebooks and cloud gaming handheld devices. And they experienced the cinematic quality of RTX ON in supported titles.

With more than 1,600 titles in the GeForce NOW library, there’s something for everyone to play. Jump into squad-based action in Fortnite or Destiny 2, bring home the victory in League of Legends or Counter-Strike: Global Offensive, and explore open-world role-playing games like Genshin Impact and Cyberpunk 2077. With GeForce NOW Priority, members can get straight into the action.

But don’t wait: This offer ends on Sunday, May 21, so make it a priority to upgrade today.

Game On

GFN Thursday means more games for more gamers. This week brings new additions to the GeForce NOW library, and new updates for the hottest games.

Apex Legends Season 17 on GeForce NOW
It’s not the years, it’s the mileage. 

Apex Legends: Arsenal, the latest season in EA and Respawn Entertainment’s battle royale FPS, is available this week for GeForce NOW members. Meet the newest playable Legend, Ballistic, who’s come out of retirement to teach the young pups some respect. Battle through an updated World’s Edge map, hone your skills in the newly updated Firing Range and progress through the new Weapon Mastery system.

Occupy Mars on GeForce NOW
They say once you grow crops somewhere, you’ve officially “colonized” it.

In addition, Occupy Mars, the latest open-world sandbox game from Pyramid Games, joins the GeForce NOW library this week. Explore and colonize Mars, building a home base and discovering new regions. Grow crops, conduct mining operations and survive on an unforgiving planet. As all sci-fi films that take place on Mars have shown, things don’t always go as planned. Players must learn to cope and survive on the red planet.

For more action, take a look at what’s joining the GeForce NOW library this week:

  • Voidtrain (New release on Steam, May 9)
  • Occupy Mars: The Game (New release on Steam, May 10)
  • Far Cry 6 (New release on Steam, May 11)
  • TT Isle of Man: Ride on the Edge 3 (New release on Steam, May 11)

Ultimate members can now enable real-time ray tracing in Fortnite. The island’s never looked so good.

What are you playing this weekend? We’ve got a little challenge for you this week. Let us know your response on Twitter or in the comments below.

Read More

Living on the Edge: Singtel, Microsoft and NVIDIA Dial Up AI Over 5G

For telcos around the world, one of the biggest challenges to upgrading networks has always been the question, “If you build it, will they come?”

Asia’s leading telco, Singtel, believes the key to helping customers innovate with AI across industries — for everything from traffic and video analytics to conversational AI avatars powered by large language models (LLMs) — is to offer multi-access edge compute services on its high-speed, ultra-low-latency 5G network.

Multi-access edge computing, or MEC, moves the computing of traffic and services from a centralized cloud to the edge of the network, where it’s closer to the customer. Doing so reduces network latency and lowers costs through sharing of network resources.

Singtel is collaborating with Microsoft and NVIDIA to combine AI and 5G, so enterprises can boost their innovation and productivity. Using NVIDIA’s full-stack accelerated computing platform optimized for Microsoft Azure Public MEC, the telco is creating solutions that enable customers to leverage AI video analytics for multiple use cases and to deploy 5G conversational avatars powered by LLMs.

From Sea to Shore

Singtel has been rolling out enterprise 5G and MEC across ports, airports, manufacturing facilities and other locations. In addition to running low-latency applications at the edge using Singtel’s 5G network, the solution has the potential to transform operations in sectors such as public safety, urban planning, healthcare, banking, civil service, transportation and logistics. It also offers high security for public sector customers and better performance for end users, enabling new intelligent edge scenarios.

Customers can use these capabilities through Microsoft Azure, paying only for the compute and storage they use, for as long as they use it. This replicates the cloud consumption model at the network edge and lets users save on additional operational overhead.

Edge Technologies

Singtel is working with video analytics software-makers participating in NVIDIA Inception, a free program that offers startups go-to-market support, expertise and technology. These ISVs will be able to use the NVIDIA Jetson Orin module for edge AI and robotics in conjunction with Microsoft MEC to identify traffic flows at airports and other high-traffic areas, power retail video analytics and support other use cases.

Singtel and NVIDIA are also showcasing their technology and solutions, including a real-time LLM-powered avatar developed by system integrator Quantiphi and based on NVIDIA Omniverse digital twin technology, at a May 11 launch event in Singapore. The avatar, built with NVIDIA Riva speech AI and the NeMo Megatron transformer model, enables people to interact in natural language on any topic of interest. Businesses can deploy these avatars anywhere over 5G.

Using Singtel’s high-speed, low-latency 5G — combined with NVIDIA AI accelerated infrastructure and capabilities — enterprises can explore use cases on everything from computer vision and mixed reality to autonomous guided vehicles.

Singtel plans to expand these new capabilities beyond Singapore to other countries and affiliated telcos, as well. This collaboration will help redefine what’s possible through the powerful combination of compute and next-generation networks, unlocking new operational efficiencies, revenue streams and customer experiences.

Read More