A Revolution in the Making: How AI and Science Can Mitigate Climate Change

A partial differential equation is “the most powerful tool humanity has ever created,” Cornell University mathematician Steven Strogatz wrote in a 2009 New York Times opinion piece.

This quote opened last week’s GTC talk AI4Science: The Convergence of AI and Scientific Computing, presented by Anima Anandkumar, director of machine learning research at NVIDIA and professor of computing at the California Institute of Technology.

Anandkumar explained that partial differential equations are the foundation for most scientific simulations. And she showcased how this historic tool is now being made all the more powerful with AI.

“The convergence of AI and scientific computing is a revolution in the making,” she said.

Using new neural operator-based frameworks to learn and solve partial differential equations, AI can help us produce weather forecasts 100,000x faster — and model carbon dioxide sequestration 60,000x faster — than traditional methods.

Speeding Up the Calculations

Anandkumar and her team developed the Fourier Neural Operator (FNO), a framework that allows AI to learn and solve an entire family of partial differential equations, rather than a single instance.

It’s the first machine learning method to successfully model turbulent flows with zero-shot super-resolution — meaning the trained model can make high-resolution inferences without the high-resolution training data that standard neural networks would require.

FNO-based machine learning greatly reduces the cost of obtaining training data for AI models, improves their accuracy and speeds up inference by three orders of magnitude compared with traditional methods.
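
Conceptually, each FNO layer transforms its input to the frequency domain, applies learned weights to a fixed set of low-frequency modes and transforms back — which is why the same trained operator can be queried on a finer grid than it was trained on. Below is a minimal PyTorch sketch of one such spectral layer, written for illustration only; it is not the authors’ released implementation.

```python
import torch

class SpectralConv1d(torch.nn.Module):
    """One 1D Fourier layer: FFT -> keep the lowest `modes` frequencies,
    multiply by learned complex weights -> inverse FFT."""
    def __init__(self, in_ch, out_ch, modes):
        super().__init__()
        self.modes = modes
        scale = 1.0 / (in_ch * out_ch)
        self.weight = torch.nn.Parameter(
            scale * torch.randn(in_ch, out_ch, modes, dtype=torch.cfloat))

    def forward(self, x):                        # x: (batch, in_ch, n_points)
        x_ft = torch.fft.rfft(x)                 # (batch, in_ch, n_points//2 + 1)
        out_ft = torch.zeros(x.shape[0], self.weight.shape[1],
                             x_ft.shape[-1], dtype=torch.cfloat)
        out_ft[:, :, :self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, :self.modes], self.weight)
        return torch.fft.irfft(out_ft, n=x.shape[-1])

layer = SpectralConv1d(1, 1, modes=16)
coarse = layer(torch.randn(4, 1, 64))            # queried at one resolution ...
fine = layer(torch.randn(4, 1, 256))             # ... and at a finer grid, zero-shot
```

Because the learned weights act only on the retained Fourier modes, the layer is independent of the sampling resolution of its input — the property behind zero-shot super-resolution.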

Mitigating Climate Change

FNOs can be applied to make real-world impact in countless ways.

For one, they offer a 100,000x speedup over numerical methods and unprecedented fine-scale resolution for weather prediction models. By accurately simulating and predicting extreme weather events, the AI models can allow planning to mitigate the effects of such disasters.

The FNO model, for example, was able to accurately predict the trajectory and magnitude of Hurricane Matthew from 2016.

In the video below, the red line represents the observed track of the hurricane. The white cones show the National Oceanic and Atmospheric Administration’s hurricane forecasts based on traditional models. The purple contours mark the FNO-based AI forecasts.

As shown, the FNO model follows the trajectory of the hurricane with improved accuracy compared with the traditional method — and the high-resolution simulation of this weather event took just a quarter of a second to process on NVIDIA GPUs.

In addition, Anandkumar’s talk covered how FNO-based AI can be used to model carbon dioxide sequestration — capturing carbon dioxide from the atmosphere and storing it underground, which scientists have said can help mitigate climate change.

Researchers can model and study how carbon dioxide would interact with materials underground using FNOs 60,000x faster than with traditional methods.

Anandkumar said the FNO model is also a significant step toward building a digital twin of Earth.

The new NVIDIA Modulus framework for training physics-informed machine learning models and NVIDIA Quantum-2 InfiniBand networking platform equip researchers and developers with the tools to combine the powers of AI, physics and supercomputing — to help solve the world’s toughest problems.

“I strongly believe this is the future of science,” Anandkumar said.

She’ll delve into these topics further at a SC21 plenary talk, taking place on Nov. 18 at 10:30 a.m. Central time.

Watch her full GTC session on demand here.

Watch NVIDIA founder and CEO Jensen Huang’s GTC keynote below.


World’s Fastest Supercomputers Changing Fast

Modern computing workloads — including scientific simulations, visualization, data analytics, and machine learning — are pushing supercomputing centers, cloud providers and enterprises to rethink their computing architecture.

Processors, networks or software optimizations alone can’t address the latest needs of researchers, engineers and data scientists. Instead, the data center is the new unit of computing, and organizations have to look at the full technology stack.

The latest rankings of the world’s most powerful systems show continued momentum for this full-stack approach in the latest generation of supercomputers.

NVIDIA technologies accelerate over 70 percent, or 355, of the systems on the TOP500 list released at the SC21 high performance computing conference this week, including over 90 percent of all new systems. That’s up from 342 systems, or 68 percent, of the machines on the TOP500 list released in June.

NVIDIA also continues to have a strong presence on the Green500 list of the most energy-efficient systems, powering 23 of the top 25 systems on the list, unchanged from June. On average, NVIDIA GPU-powered systems deliver 3.5x higher power efficiency than non-GPU systems on the list.

Highlighting the emergence of a new generation of cloud-native systems, Microsoft’s GPU-accelerated Azure supercomputer ranked 10th on the list, the first top 10 showing for a cloud-based system.

AI is revolutionizing scientific computing. The number of research papers leveraging both HPC and machine learning has skyrocketed in recent years, growing from roughly 600 ML + HPC papers submitted in 2018 to nearly 5,000 in 2020.

The ongoing convergence of HPC and AI workloads is also underscored by new benchmarks such as HPL-AI and MLPerf HPC.

HPL-AI is an emerging benchmark of converged HPC and AI workloads. It uses mixed-precision math — the basis of deep learning and many scientific and commercial jobs — while still delivering the full accuracy of double-precision math, the standard measuring stick for traditional HPC benchmarks.
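
HPL-AI implementations run on GPU tensor cores, but the underlying recipe — factor the system in low precision, then recover double-precision accuracy with iterative refinement — can be sketched in a few lines of NumPy (float32 standing in here for FP16/TF32):

```python
import numpy as np

def solve_mixed_precision(A, b, iters=5):
    # Factor and solve in low precision (float32 as a stand-in for FP16/TF32).
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    # Iterative refinement against the double-precision residual recovers FP64 accuracy.
    for _ in range(iters):
        r = b - A @ x
        d = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
        x += d
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((512, 512)) + 512 * np.eye(512)   # well-conditioned test matrix
b = rng.standard_normal(512)
x = solve_mixed_precision(A, b)
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))       # residual near FP64 roundoff
```

Each refinement step reuses the cheap low-precision factorization while the solution converges toward full double-precision accuracy.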

And MLPerf HPC addresses a style of computing that speeds and augments simulations on supercomputers with AI, measuring performance on three key workloads for HPC centers: astrophysics (CosmoFlow), weather (DeepCAM) and molecular dynamics (OpenCatalyst).

NVIDIA addresses the full stack with GPU-accelerated processing, smart networking, GPU-optimized applications, and libraries that support the convergence of AI and HPC. This approach has supercharged workloads and enabled scientific breakthroughs.

Let’s look more closely at how NVIDIA is supercharging supercomputers.

Accelerated Computing

The combined power of the GPU’s parallel processing capabilities and over 2,500 GPU-optimized applications allows users to speed up their HPC jobs, in many cases from weeks to hours.

We’re constantly optimizing the CUDA-X libraries and GPU-accelerated applications, so it’s not unusual for users to see a multiple-x performance gain on the same GPU architecture.

As a result, the performance of the most widely used scientific applications — which we call the “golden suite” — has improved 16x over the past six years, with more advances on the way.

16x performance on top HPC, AI and ML apps from full-stack innovation.**

And to help users quickly take advantage of higher performance, we offer the latest versions of the AI and HPC software through containers from the NGC catalog. Users simply pull and run the application on their supercomputer, in the data center or the cloud.

Convergence of HPC and AI 

The infusion of AI in HPC helps researchers speed up their simulations while achieving the accuracy they’d get with the traditional simulation approach.

That’s why an increasing number of researchers are taking advantage of AI to speed up their discoveries.

That includes four of the finalists for this year’s Gordon Bell prize, the most prestigious award in supercomputing. Organizations are racing to build exascale AI computers to support this new model, which combines HPC and AI.

That strength is underscored by relatively new benchmarks, such as HPL-AI and MLPerf HPC, highlighting the ongoing convergence of HPC and AI workloads.

To fuel this trend, last week NVIDIA announced a broad range of advanced new libraries and software development kits for HPC.

Graphs — a key data structure in modern data science — can now be projected into deep-neural network frameworks with Deep Graph Library, or DGL, a new Python package.
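
As a rough illustration of what projecting a graph into a deep-learning framework looks like in DGL (a minimal sketch; the toy graph and layer sizes are arbitrary):

```python
import torch
import dgl
from dgl.nn import GraphConv

# A toy directed graph with 4 nodes and edges 0->1, 1->2, 2->3, 3->0.
g = dgl.graph((torch.tensor([0, 1, 2, 3]), torch.tensor([1, 2, 3, 0])))
g = dgl.add_self_loop(g)            # GraphConv expects self-loops

feat = torch.randn(4, 8)            # 8-dimensional node features
conv = GraphConv(8, 16)             # one graph-convolution layer: 8 -> 16 features
h = conv(g, feat)                   # message passing; runs on GPU if g and feat are moved there
print(h.shape)                      # torch.Size([4, 16])
```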

NVIDIA Modulus builds and trains physics-informed machine learning models that can learn and obey the laws of physics.
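
Modulus packages this pattern into a full framework; the core idea of physics-informed training — penalizing a network for violating a governing equation at sampled points — can be sketched in plain PyTorch (an illustrative toy ODE, not the Modulus API):

```python
import torch

# Learn u(x) satisfying du/dx = -u on [0, 1] with u(0) = 1 (exact solution: exp(-x)).
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(128, 1, requires_grad=True)              # collocation points
    u = net(x)
    du_dx = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    residual = du_dx + u                                     # physics loss: enforce du/dx + u = 0
    bc = net(torch.zeros(1, 1)) - 1.0                        # boundary condition u(0) = 1
    loss = (residual ** 2).mean() + (bc ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```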

And NVIDIA introduced three new libraries:

  • ReOpt – to increase operational efficiency for the $10 trillion logistics industry.
  • cuQuantum – to accelerate quantum computing research.
  • cuNumeric – to accelerate NumPy for scientists, data scientists, and machine learning and AI researchers in the Python community (see the sketch below).
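
cuNumeric is designed as a drop-in replacement for NumPy, so accelerating existing array code is largely a matter of changing the import (a minimal sketch; check the cuNumeric documentation for the exact install and launch steps on your system):

```python
import cunumeric as np   # drop-in for: import numpy as np

# The array API mirrors NumPy; operations are dispatched to GPUs,
# and Legate can scale the same code across multiple nodes.
a = np.random.rand(2048, 2048)
b = np.random.rand(2048, 2048)
c = a @ b
print(float(c.sum()))
```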

Weaving it all together is NVIDIA Omniverse — the company’s virtual world simulation and collaboration platform for 3D workflows.

Omniverse is used to simulate digital twins of warehouses, plants and factories, of physical and biological systems, of the 5G edge, robots, self-driving cars and even avatars.

Using Omniverse, NVIDIA announced last week that it will build a supercomputer, called Earth-2, devoted to predicting climate change by creating a digital twin of the planet.

Cloud-Native Supercomputing

As supercomputers take on more workloads across data analytics, AI, simulation and visualization, CPUs are stretched to support a growing number of communication tasks needed to operate large and complex systems.

Data processing units alleviate this stress by offloading some of these processes.

As a fully integrated data-center-on-a-chip platform, NVIDIA BlueField DPUs can offload and manage data center infrastructure tasks instead of making the host processor do the work, enabling stronger security and more efficient orchestration of the supercomputer.

Combined with the NVIDIA Quantum InfiniBand platform, this architecture delivers optimal bare-metal performance while natively supporting multinode tenant isolation.

NVIDIA’s Quantum InfiniBand platform provides predictable, bare-metal performance isolation.

Thanks to a zero-trust approach, these new systems are also more secure.

BlueField DPUs isolate applications from infrastructure. NVIDIA DOCA 1.2 — the latest BlueField software platform — enables next-generation distributed firewalls and wider use of line-rate data encryption. And NVIDIA Morpheus, assuming an interloper is already inside the data center, uses deep learning-powered data science to detect intruder activities in real time.

And all of the trends outlined above will be accelerated by new networking technology.

NVIDIA Quantum-2, also announced last week, is a 400Gbps InfiniBand platform consisting of the Quantum-2 switch, the ConnectX-7 NIC, the BlueField-3 DPU and new software for the new networking architecture.

NVIDIA Quantum-2 offers the benefits of bare-metal high performance and secure multi-tenancy, allowing the next generation of supercomputers to be secure, cloud-native and better utilized.

 

** Benchmark applications: Amber, Chroma, GROMACS, MILC, NAMD, PyTorch, Quantum Espresso, Random Forest FP32, TensorFlow, VASP | GPU node: dual-socket CPUs with 4x P100, V100, or A100 GPUs.


Siemens Energy Taps NVIDIA to Develop Industrial Digital Twin of Power Plant in Omniverse

Siemens Energy, a leading supplier of power plant technology in the trillion-dollar worldwide energy market, is relying on the NVIDIA Omniverse platform to create digital twins to support predictive maintenance of power plants.

In doing so, Siemens Energy joins a wave of companies across various industries that are using digital twins to enhance their operations. Among them, BMW Group, which has 31 factories around the world, is building multiple industrial digital twins of its operations; and Ericsson is adopting Omniverse to build digital twins of urban areas to help determine how to construct 5G networks.

Indeed, the worldwide market for digital twin platforms is forecast to reach $86 billion by 2028, according to Grand View Research.

“NVIDIA’s open platforms along with physics-infused neural networks bring great value to Siemens Energy,” said Stefan Lichtenberger, technical portfolio manager at Siemens Energy.

Siemens Energy builds and services combined cycle power plants, which include large gas turbines and steam turbines. Heat recovery steam generators (HRSGs) use the exhaust heat from the gas turbine to create steam that drives the steam turbine. This raises the thermodynamic efficiency of the power plant to more than 60 percent, according to Siemens Energy.

At some sections of an HRSG, a steam and water mixture can cause corrosion that might impact the lifetime of the HRSG’s parts. Downtime for maintenance and repairs leads to lost revenue opportunities for utility companies.

Siemens Energy estimates that a 10 percent reduction in the industry’s average planned downtime of 5.5 days for HRSGs — required, among other tasks, to check pipe walls for thickness loss due to corrosion — would save $1.7 billion a year.

Simulations for Industrial Applications

Siemens Energy is enlisting NVIDIA technology to develop a new workflow to reduce the frequency of planned shutdowns while maintaining safety. Real-time data — water inlet temperature, pressure, pH, gas turbine power and temperature — is preprocessed to compute pressure, temperature and velocity of both water and steam. The pressure, temperature and velocity are fed into a physics-ML model created with the NVIDIA Modulus framework to simulate precisely how steam and water flow through the pipes in real time.

The flow conditions in the pipes are then visualized with NVIDIA Omniverse, a virtual world simulation and collaboration platform for 3D workflows. Omniverse scales across multi-GPUs to help Siemens Energy understand and predict the aggregated effects of corrosion in real time.

Accelerating Digital Twin Development

Using NVIDIA software frameworks, running on NVIDIA A100 Tensor Core GPUs, Siemens Energy is simulating the corrosive effects of heat, water and other conditions on metal over time to fine-tune maintenance needs. Predicting maintenance more accurately with machine learning models can help reduce the frequency of maintenance checks without running the risk of failure. The scaled Modulus PINN model was run on AWS Elastic Kubernetes Service (EKS) backed by P4d EC2 instances with A100 GPUs.

Building a computational fluid dynamics model to estimate corrosion within the pipes of a single HRSG takes as long as eight weeks, and the process is required for a portfolio of more than 600 units. A faster workflow using NVIDIA technologies can enable Siemens Energy to accelerate corrosion estimation from weeks to hours.

NVIDIA Omniverse provides a highly scalable platform that lets Siemens Energy replicate and deploy digital twins worldwide, accessing potentially thousands of NVIDIA GPUs as needed.

“NVIDIA’s work as the pioneer in accelerated computing, AI software platforms and simulation offer the scale and flexibility needed for industrial digital twins at Siemens Energy,” said Lichtenberger.

Learn more about Omniverse for virtual simulations and digital twins.


Gordon Bell Finalists Fight COVID, Advance Science With NVIDIA Technologies

Two simulations of a billion atoms, two fresh insights into how the SARS-CoV-2 virus works, and a new AI model to speed drug discovery.

Those are results from finalists for Gordon Bell awards, considered a Nobel prize in high performance computing. They used AI, accelerated computing or both to advance science with NVIDIA’s technologies.

A finalist for the special prize for COVID-19 research used AI to link multiple simulations, showing at a new level of clarity how the virus replicates inside a host.

The research — led by Arvind Ramanathan, a computational biologist at the Argonne National Laboratory — provides a way to improve the resolution of traditional tools used to explore protein structures. That could provide fresh insights into ways to arrest the spread of a virus.

The team, drawn from a dozen organizations in the U.S. and the U.K., designed a workflow that ran across systems including Perlmutter, an NVIDIA A100-powered system, built by Hewlett Packard Enterprise, and Argonne’s NVIDIA DGX A100 systems.

“The capability to perform multisite data analysis and simulations for integrative biology will be invaluable for making use of large experimental data that are difficult to transfer,” the paper said.

As part of its work, the team developed a technique to speed molecular dynamics research using the popular NAMD program on GPUs. They also leveraged NVIDIA NVLink to speed data movement “far beyond what is currently possible with a conventional HPC network interconnect, or … PCIe transfers.”

A Billion Atoms in High Fidelity

Ivan Oleynik, a professor of physics at the University of South Florida, led a team named a finalist for the standard Gordon Bell award for producing the first highly accurate simulation of a billion atoms. The simulation broke a record set by a Gordon Bell winner last year by a factor of 23.

“It’s a joy to uncover phenomena never seen before, it’s a really big achievement we’re proud of,” said Oleynik.

The simulation of carbon atoms under extreme temperature and pressure could open doors to new energy sources and help describe the makeup of distant planets. It’s especially stunning because the simulation has quantum-level accuracy, faithfully reflecting the forces among the atoms.

“It’s accuracy we could only achieve by applying machine learning techniques on a powerful GPU supercomputer — AI is creating a revolution in how science is done,” said Oleynik.

The team used 4,608 IBM Power AC922 servers and 27,900 NVIDIA GPUs on Summit, the U.S. Department of Energy’s IBM-built system and one of the world’s most powerful supercomputers, and demonstrated that their code could scale with almost 100 percent efficiency to simulations of 20 billion atoms or more.

That code is available to any researcher who wants to push the boundaries of materials science.

Inside a Deadly Droplet

In another billion-atom simulation, a second finalist for the COVID-19 prize showed the Delta variant in an airborne droplet (below). It reveals biological forces that spread COVID and other diseases, providing a first atomic-level look at aerosols.

The work has “far reaching … implications for viral binding in the deep lung, and for the study of other airborne pathogens,” according to the paper from a team led by last year’s winner of the special prize, researcher Rommie Amaro from the University of California San Diego.

The team led by Amaro simulated the Delta SARS-CoV-2 virus in a respiratory droplet with more than a billion atoms.

“We demonstrate how AI coupled to HPC at multiple levels can result in significantly improved effective performance, enabling new ways to understand and interrogate complex biological systems,” Amaro said.

Researchers used NVIDIA GPUs on Summit, the Longhorn supercomputer built by Dell Technologies for the Texas Advanced Computing Center and commercial systems in Oracle Cloud Infrastructure (OCI).

“HPC and cloud resources can be used to significantly drive down time-to-solution for major scientific efforts as well as connect researchers and greatly enable complex collaborative interactions,” the team concluded.

The Language of Drug Discovery

Finalists for the COVID prize at Oak Ridge National Laboratory (ORNL) applied natural language processing (NLP) to the problem of screening chemical compounds for new drugs.

They used a dataset containing 9.6 billion molecules — the largest dataset applied to this task to date — to train, in just two hours, a BERT NLP model that can speed the discovery of new drugs. The previous best effort took four days to train a model on a dataset of 1.1 billion molecules.

The work exercised more than 24,000 NVIDIA GPUs on the Summit supercomputer to deliver a whopping 603 petaflops. Now that the training is done, the model can run on a single GPU to help researchers find chemical compounds that could inhibit COVID and other diseases.
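
The ORNL team’s tokenizer, model size and training recipe aren’t detailed in this post, but the general shape of masked-language-model pretraining on SMILES strings can be sketched with the Hugging Face transformers library (the character-level vocabulary and tiny model below are purely illustrative):

```python
import torch
from transformers import BertConfig, BertForMaskedLM

# Hypothetical character-level vocabulary for SMILES strings.
vocab = {ch: i for i, ch in enumerate("#()+-=123456789CFHNOPSclno[]")}
pad_id, mask_id = len(vocab), len(vocab) + 1

def encode(smiles, max_len=16):
    ids = [vocab[c] for c in smiles if c in vocab][:max_len]
    return ids + [pad_id] * (max_len - len(ids))

config = BertConfig(vocab_size=len(vocab) + 2, hidden_size=128,
                    num_hidden_layers=2, num_attention_heads=2)
model = BertForMaskedLM(config)

batch = torch.tensor([encode("CCO"), encode("c1ccccc1O")])   # ethanol, phenol
attention_mask = (batch != pad_id).long()

labels = torch.full_like(batch, -100)   # -100 = ignored by the loss
labels[:, 1] = batch[:, 1]              # score the model only at the masked position
masked = batch.clone()
masked[:, 1] = mask_id

loss = model(input_ids=masked, attention_mask=attention_mask, labels=labels).loss
loss.backward()                          # gradients for one masked-LM training step
```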

“We have collaborators here who want to apply the model to cancer signaling pathways,” said Jens Glaser, a computational scientist at ORNL.

“We’re just scratching the surface of training data sizes — we hope to use a trillion molecules soon,” said Andrew Blanchard, a research scientist who led the team.

Relying on a Full-Stack Solution

NVIDIA software libraries for AI and accelerated computing helped the team complete its work in what one observer called a surprisingly short time.

“We didn’t need to fully optimize our work for the GPU’s tensor cores because you don’t need specialized code, you can just use the standard stack,” said Glaser.

He summed up what many finalists felt: “Having a chance to be part of meaningful research with potential impact on people’s lives is something that’s very satisfying for a scientist.”

Tune in to our special address at SC21 either live on Monday, Nov. 15 at 3 pm PST or later on demand. NVIDIA’s Marc Hamilton will provide an overview of our latest news, innovations and technologies, followed by a live Q&A panel with NVIDIA experts.


Universities Expand Research Horizons with NVIDIA Systems, Networks

Just as the Dallas/Fort Worth airport became a hub for travelers crisscrossing America, the north Texas region will be a gateway to AI if folks at Southern Methodist University have their way.

SMU is installing an NVIDIA DGX SuperPOD, an accelerated supercomputer it expects will power projects in machine learning for its sprawling metro community with more than 12,000 students and 2,400 faculty and staff.

It’s one of three universities in the south-central U.S. announcing plans to use NVIDIA technologies to shift research into high gear.

Texas A&M and Mississippi State University are adopting NVIDIA Quantum-2, our 400 Gbit/second InfiniBand networking platform, as the backbone for their latest high-performance computers. In addition, a supercomputer in the U.K. has upgraded its InfiniBand network.

Texas Lassos a SuperPOD

“We’re the second university in America to get a DGX SuperPOD and that will put this community ahead in AI capabilities to fuel our degree programs and corporate partnerships,” said Michael Hites, chief information officer of SMU, referring to a system installed earlier this year at the University of Florida.

A September report called the Dallas area “hobbled” by a lack of major AI research. Ironically, the story hit the local newspaper just as SMU was buttoning up its plans for its DGX SuperPOD.

Previewing its initiative, an SMU report in March said AI is “at the heart of digital transformation … and no sector of society will remain untouched” by the technology. “The potential for dramatic improvements in K-12 education and workforce development is enormous and will contribute to the sustained economic growth of the region,” it added.

SMU Ignite, a $1.5 billion fundraiser kicked off in September, will fuel the AI initiative, helping propel Southern Methodist into the top ranks of university research nationally. The university is hiring a chief innovation officer to help guide the effort.

Crafting a Computational Crucible

It’s all about the people, says Jason Warner, who manages the IT teams that support SMU’s researchers. So he hired a founding group of data science specialists to staff a new center at SMU’s Ford Hall for Research and Innovation, a hub Warner calls SMU’s “computational crucible.”

Eric Godat leads that team. He earned his Ph.D. in particle physics at SMU modeling nuclear structure using data from the Large Hadron Collider.

Now he’s helping fire up SMU’s students about opportunities on the DGX SuperPOD. As a first step, he asked two SMU students to build a miniature model of a DGX SuperPOD using NVIDIA Jetson modules.

“We wanted to give people — especially those in nontechnical fields who haven’t done AI — a sense of what’s coming,” Godat said.

SMU undergrad Connor Ozenne helped build a miniature DGX SuperPOD that was featured in SMU’s annual report. It uses 16 Jetson modules in a cluster students will benchmark as if it were a TOP500 system.

The full-sized supercomputer, made up of 20 NVIDIA DGX A100 systems on an NVIDIA Quantum InfiniBand network, could be up and running as early as January thanks to its Lego-like, modular architecture. It will deliver a whopping 100 petaflops of computing power, enough to give it a respectable slot on the TOP500 list of the world’s fastest supercomputers.

Aggies Tap NVIDIA Quantum-2 InfiniBand for ACES

About 200 miles south, the high performance computing center at Texas A&M will be among the first to plug into the NVIDIA Quantum-2 InfiniBand platform. Its ACES supercomputer, built by Dell Technologies, will use the 400G InfiniBand network to connect researchers to a mix of five accelerators from four vendors.

NVIDIA Quantum-2 ensures “that a single job on ACES can scale up using all the computing cores and accelerators. Besides the obvious 2x jump in throughput from NVIDIA Quantum-1 InfiniBand at 200G, it will provide improved total cost of ownership, beefed up in-network computing features and increased scaling,” said Honggao Liu, ACES’s principal investigator and project director.

Texas A&M already gives researchers access to accelerated computing in four systems that include more than 600 NVIDIA A100 Tensor Core and prior-generation GPUs. Two of the four systems use an earlier version of NVIDIA’s InfiniBand technology.

MSU Rides a 400G Train

Mississippi State University will also tap the NVIDIA Quantum-2 InfiniBand platform. It’s the network of choice for a new system that supplements Orion, the largest of four clusters MSU manages, all using earlier versions of InfiniBand.

Both Orion and the new system are funded by the U.S. National Oceanic and Atmospheric Administration (NOAA) and built by Dell. They conduct work for NOAA’s missions as well as research for MSU.

Orion was listed as the fourth largest academic supercomputer in America when it debuted on the TOP500 list in June 2019.

“We’re using InfiniBand in four generations of supercomputers here at MSU so we know it’s both powerful and mature to run our big jobs reliably,” said Trey Breckenridge, director of high performance computing at MSU.

“We’re adding a new system with NVIDIA Quantum-2 to stay at the leading edge in HPC,” he added.

Quantum Nets Cover the UK

Across the pond in the U.K., the Data Intensive supercomputer at the University of Leicester, known as the DIaL system, has upgraded to NVIDIA Quantum, the 200G version of InfiniBand.

“DIaL is specifically designed to tackle the complex, data-intensive questions which must be answered to evolve our understanding of the universe around us,” said Mark Wilkinson, professor of theoretical astrophysics at the University of Leicester and director of its HPC center.

“The intense requirements of these specialist workloads rely on the unparalleled bandwidth and latency that only InfiniBand can provide to make the research possible,” he said.

DIaL is one of four supercomputers in the U.K.’s DiRAC facility using InfiniBand, including the Tursa system at the University of Edinburgh.

InfiniBand Shines in Evaluation

In a technical evaluation, researchers found Tursa with NVIDIA GPU accelerators on a Quantum network delivered 5x the performance of their CPU-only Tesseract system using an alternative interconnect.

Application benchmarks show 16 nodes of Tursa have twice the performance of 512 nodes of Tesseract. Tursa delivers 10 teraflops/node using 90 percent of the network’s bandwidth at a significant improvement in performance per kilowatt over Tesseract.

It’s another example of why most of the world’s TOP500 systems are using NVIDIA technologies.

For more, watch our special address at SC21 either live on Monday, Nov. 15 at 3 pm PST or later on demand. NVIDIA’s Marc Hamilton will provide an overview of our latest news, innovations and technologies, followed by a live Q&A panel with NVIDIA experts.


NVIDIA GTC Sees Spike in Developers From Africa

The appetite for AI and data science is increasing, and nowhere is that more prevalent than in emerging markets.

Registrations for this week’s GTC from African nations tripled compared with the spring edition of the event.

Indeed, Nigeria had the third most registered attendees among countries in the EMEA region, ahead of France, Italy and Spain. Five other African nations were among the region’s top 15 for registrants: Egypt (No. 6), Tunisia (No. 7), Ghana (No. 9), South Africa (No. 11) and Kenya (No. 12).

The numbers demonstrate the growing interest among Africa-based developers to access content, information and expertise centered around AI, data science, robotics and high performance computing. Developers are using these technologies as a platform to create innovative applications that address local challenges, such as healthcare and climate change.

Global Conference, Localized Content

Among the speakers at GTC were several members of NVIDIA Emerging Chapters, a new program that enables local communities in emerging economies to build and scale their AI, data science and graphics projects. Such highly localized content empowers developers from these areas and raises awareness of their unique challenges and needs.

For example, tinyML Kenya, a community of machine learning researchers and practitioners, spoke on machine learning as a force for good in emerging markets, with impacts on healthcare, education, conservation and climate change. Zindi, Africa’s first data science competition platform, participated in a session about bridging the AI education gap among developers, IT professionals and students on the continent.

Multiple African organizations and universities also spoke at GTC about how developers in the region and emerging markets are using AI to build innovations that address local challenges. Among them were Kenya Adanian Labs, Cadi Ayyad University of Morocco, Data Science Africa, Python Ghana, and Nairobi Women in Machine Learning & Data Science.

Several Africa-based members of NVIDIA Inception, a free program designed to empower cutting-edge startups, spoke about the AI revolution underway in the continent and other emerging areas. Cyst.ai, minoHealth, Fastagger and ARMA were among the 70+ Inception startups who presented at the conference.

AI was not the only innovation topic for local developers. Top African gaming and animation companies Usiku Games, Leti Arts, NETINFO 3D and HeroSmashers TV also joined to discuss how the continent’s burgeoning gaming industry continues to thrive, and the tools game developers need to succeed in a part of the world where access to compute resources is often limited.

Engaging Developers Everywhere

While AI developers and startup founders come from all over the world, developers in emerging areas face unique circumstances and opportunities. This means global representation and localized access become even more important to bolster developer ecosystems in emerging markets.

Through NVIDIA Emerging Chapters, grassroots organizations and communities can provide developers access to the NVIDIA Developer Program and course credits for the NVIDIA Deep Learning Institute, helping bridge new paths to AI development in the region.

Learn more about AI in emerging markets today.

Watch NVIDIA CEO Jensen Huang’s GTC keynote address:


NVIDIA to Build Earth-2 Supercomputer to See Our Future

The earth is warming. The past seven years are on track to be the seven warmest on record. The emissions of greenhouse gases from human activities are responsible for approximately 1.1°C of average warming since the period 1850-1900.

What we’re experiencing is very different from the global average. We experience extreme weather — historic droughts, unprecedented heatwaves, intense hurricanes, violent storms and catastrophic floods. Climate disasters are the new norm.

We need to confront climate change now. Yet, we won’t feel the impact of our efforts for decades. It’s hard to mobilize action for something so far in the future. But we must know our future today — see it and feel it — so we can act with urgency.

To make our future a reality today, simulation is the answer.

To develop the best strategies for mitigation and adaptation, we need climate models that can predict the climate in different regions of the globe over decades.

Unlike predicting the weather, which primarily models atmospheric physics, climate models are multidecade simulations that model the physics, chemistry and biology of the atmosphere, waters, ice, land and human activities.

Climate simulations are configured today at 10- to 100-kilometer resolutions.

But greater resolution is needed to model changes in the global water cycle — water movement from the ocean, sea ice, land surface and groundwater through the atmosphere and clouds. Changes in this system lead to intensifying storms and droughts.

Meter-scale resolution is needed to simulate clouds that reflect sunlight back to space. Scientists estimate that these resolutions will demand millions to billions of times more computing power than what’s currently available. It would take decades to achieve that through the ordinary course of computing advances, which accelerate 10x every five years.
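
Those timescales follow directly from the stated pace of roughly 10x more compute every five years — a quick back-of-the-envelope check:

```python
import math

def years_for_speedup(factor, gain_per_period=10.0, period_years=5.0):
    # How long compounding 10x-per-5-years gains take to reach the target factor.
    return period_years * math.log(factor, gain_per_period)

print(years_for_speedup(1e6))   # ~30 years for a million-x gain
print(years_for_speedup(1e9))   # ~45 years for a billion-x gain
```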

For the first time, we have the technology to do ultra-high-resolution climate modeling, to jump to lightspeed and predict changes in regional extreme weather decades out.

We can achieve million-x speedups by combining three technologies: GPU-accelerated computing; deep learning and breakthroughs in physics-informed neural networks; and AI supercomputers, along with vast quantities of observed and model data to learn from.

And with super-resolution techniques, we may have within our grasp the billion-x leap needed to do ultra-high-resolution climate modeling. Countries, cities and towns can get early warnings to adapt and make infrastructures more resilient. And with more accurate predictions, people and nations will act with more urgency.

So, we will dedicate ourselves and our significant resources — directing NVIDIA’s scale and expertise in computational sciences — to join with the world’s climate science community.

NVIDIA this week revealed plans to build the world’s most powerful AI supercomputer dedicated to predicting climate change. Named Earth-2, or E-2, the system would create a digital twin of Earth in Omniverse.

The system would be the climate change counterpart to Cambridge-1, the world’s most powerful AI supercomputer for healthcare research. We unveiled Cambridge-1 earlier this year in the U.K. and it’s being used by a number of leading healthcare companies.

All the technologies we’ve invented up to this moment are needed to make Earth-2 possible. I can’t imagine a greater or more important use.


NVIDIA Omniverse Enterprise Delivers the Future of 3D Design and Real-Time Collaboration

For millions of professionals around the world, 3D workflows are essential.

Everything they build, from cars to products to buildings, must first be designed or simulated in a virtual world. At the same time, more organizations are tackling complex designs while adjusting to a hybrid work environment.

As a result, design teams need a solution that helps them improve remote collaboration while managing 3D production pipelines. And NVIDIA Omniverse is the answer.

NVIDIA Omniverse Enterprise, now available, helps professionals across industries transform complex 3D design workflows. The groundbreaking platform lets global teams working across multiple software suites collaborate in real time in a shared virtual space.

Designed for the Present, Built for the Future

With Omniverse Enterprise, professionals gain new capabilities to boost traditional visualization workflows. It’s a newly launched subscription that brings fully supported software to 3D organizations of any scale.

The foundation of Omniverse is Pixar’s Universal Scene Description, an open-source file format that enables users to enhance their design process with real-time interoperability across applications. Additionally, the platform is built on NVIDIA RTX technology, so creators can render faster, do multiple iterations at no opportunity cost, and quickly achieve their final designs with stunning, photorealistic detail.
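
Because Omniverse interchange is built on USD, any tool that can read and write USD layers can participate in a shared scene. A minimal sketch with Pixar’s open-source pxr Python bindings (assuming the usd-core package is installed; the file and prim names are arbitrary):

```python
from pxr import Usd, UsdGeom

# Author a tiny USD stage: a transform with a sphere under it.
stage = Usd.Stage.CreateNew("hello_omniverse.usda")
UsdGeom.Xform.Define(stage, "/hello")
UsdGeom.Sphere.Define(stage, "/hello/world")
stage.GetRootLayer().Save()

# Any USD-aware application (or Omniverse connector) can now open the same layer.
reopened = Usd.Stage.Open("hello_omniverse.usda")
print(reopened.GetRootLayer().ExportToString())
```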

Ericsson, a leading telecommunications company, is using Omniverse Enterprise to create a digital twin of a 5G radio network to simulate and visualize signal propagation and performance. Within Omniverse, Ericsson has built a true-to-reality city-scale simulation environment, bringing in scenes, models and datasets from Esri CityEngine.

A New Experience for 3D Design

Omniverse Enterprise is available worldwide through global computer makers BOXX Technologies, Dell Technologies, HP, Lenovo and Supermicro. Many companies have already experienced the advanced capabilities of the platform.

Epigraph, a leading provider for companies such as Black & Decker, Yamaha and Wayfair, creates physically accurate 3D assets and product experiences for e-commerce. BOXX Technologies helped Epigraph achieve faster rendering with Omniverse Enterprise and NVIDIA RTX A6000 graphics. The advanced RTX Renderer in Omniverse enabled Epigraph to render images at final-frame quality faster, while significantly reducing the amount of computational resources needed.

Media.Monks is exploring ways to enhance and extend their workflows in a virtual world with Omniverse Enterprise, together with HP. The combination of remote computing and collocated workstations enables the Media.Monks design, creative and solutions teams to accelerate their clients’ digital transformation toward a more decentralized future. In collaboration with NVIDIA and HP, Media.Monks is exploring new approaches and the convergence of collaboration, real-time graphics, and live broadcast for a new era of brand virtualization.

Dell Technologies is presenting at GTC to show how Omniverse is advancing the hybrid workforce with Dell Precision workstations, Dell EMC PowerEdge servers and Dell Technologies Validated Designs. The interactive panel discussion will dive into why users need Omniverse today, and how Dell is helping more professionals adopt this solution, from the desktop to the data center.

And Lenovo is showcasing how advanced technologies like Omniverse are making remote collaboration seamless. Whether it’s connecting to a powerful mobile workstation on the go, a physical workstation back in the office, or a virtual workstation in the data center, Lenovo, TGX and NVIDIA are providing remote workers with the same experience they get at the office.

These systems manufacturers have also enabled other Omniverse Enterprise customers such as Kohn Pedersen Fox, Woods Bagot and WPP to improve their efficiency and productivity with real-time collaboration.

Experience Virtual Worlds With NVIDIA Omniverse

NVIDIA Omniverse Enterprise is now generally available by subscription from BOXX Technologies, Dell Technologies, HP, Lenovo and Supermicro.

The platform is optimized and certified to run on NVIDIA RTX professional mobile workstations and NVIDIA-Certified Systems, including desktops and servers on the NVIDIA EGX platform.

With Omniverse Enterprise, creative and design teams can connect their Autodesk 3ds Max, Maya and Revit, Epic Games’ Unreal Engine, McNeel & Associates Rhino, Grasshopper and Trimble SketchUp workflows through live-edit collaboration. Learn more about NVIDIA Omniverse Enterprise and our 30-day evaluation program. For individual artists, there’s also a free beta version of the platform available for download.

Watch NVIDIA founder and CEO Jensen Huang’s GTC keynote address below:


Catch Some Rays This GFN Thursday With ‘Jurassic World Evolution 2’ and ‘Bright Memory: Infinite’ Game Launches

This week’s GFN Thursday packs a prehistoric punch with the release of Jurassic World Evolution 2. It also gets infinitely brighter with the release of Bright Memory: Infinite.

Both games feature NVIDIA RTX technologies and are part of the six titles joining the GeForce NOW library this week.

GeForce NOW RTX 3080 members will get the peak cloud gaming experience in these titles and more. In addition to RTX ON, they’ll stream both games at up to 1440p and 120 frames per second on PC and Mac; and up to 4K on SHIELD.

Preorders for six-month GeForce NOW RTX 3080 memberships are currently available in North America and Europe for $99.99. Sign up today to be among the first to experience next-generation gaming.

The Latest Tech, Streaming From the Cloud

GeForce RTX GPUs give PC gamers the best visual quality and highest frame rates. They also power NVIDIA RTX technologies. And with GeForce RTX 3080-class GPUs making their way to the cloud in the GeForce NOW SuperPOD, the most advanced platform for ray tracing and AI is now available across nearly any low-powered device.

The next generation of cloud gaming is powered by the GeForce NOW SuperPOD, built on the second-gen RTX, NVIDIA Ampere architecture.

Real-time ray tracing creates the most realistic and immersive graphics in supported games, rendering environments in cinematic quality. NVIDIA DLSS gives games a speed boost with uncompromised image quality, thanks to advanced AI.

With GeForce NOW’s Priority and RTX 3080 memberships, gamers can take advantage of these features in numerous top games, including new releases like Jurassic World Evolution 2 and Bright Memory: Infinite.

The added performance from the latest generation of NVIDIA GPUs also means GeForce NOW RTX 3080 members have exclusive access to stream at up to 1440p at 120 FPS on PC, 1600p at 120 FPS on most MacBooks, 1440p at 120 FPS on most iMacs, 4K HDR at 60 FPS on NVIDIA SHIELD TV and up to 120 FPS on select Android devices.

Welcome to …

Immerse yourself in a world evolved in a compelling, original story, experience the chaos of “what-if” scenarios from the iconic Jurassic World and Jurassic Park films and discover over 75 awe-inspiring dinosaurs, including brand-new flying and marine reptiles. Play with support for NVIDIA DLSS this week on GeForce NOW.

GeForce NOW gives your low-end rig the power to play Jurassic World Evolution 2 with even higher graphics settings thanks to NVIDIA DLSS, streaming from the cloud.

Blinded by the (Ray-Traced) Light

FYQD-studio, a one-man development team that released Bright Memory in 2020, is back with a full-length sequel, Bright Memory: Infinite, streaming from the cloud with RTX ON.

Bright Memory: Infinite combines the FPS and action genres with dazzling visuals, amazing set pieces and exciting action. Mix and match available skills and abilities to unleash magnificent combos on enemies. Cut through the opposing forces with your sword, or lock and load with ranged weaponry, customized with a variety of ammunition. The choice is yours.

Priority and GeForce NOW RTX 3080 members can experience every moment of the action the way FYQD-studio intended, gorgeously rendered with ray-traced reflections, ray-traced shadows, ray-traced caustics and dazzling RTX Global Illumination. And GeForce NOW RTX 3080 members can play at up to 1440p and 120 FPS on PC and Mac.

Never Run Out of Gaming

GFN Thursday always means more games.

Members can find these six new games streaming on the cloud this week:

  • Bright Memory: Infinite (new game launch on Steam)
  • Epic Chef (new game launch on Steam)
  • Jurassic World Evolution 2 (new game launch on Steam and Epic Games Store)
  • MapleStory (Steam)
  • Severed Steel (Steam)
  • Tale of Immortal (Steam)

We make every effort to launch games on GeForce NOW as close to their release as possible, but, in some instances, games may not be available immediately.

What are you planning to play this weekend? Let us know on Twitter or in the comments below.


How Researchers Use NVIDIA AI to Help Mitigate Misinformation

Researchers tackling the challenge of visual misinformation — think the TikTok video of Tom Cruise supposedly golfing in Italy during the pandemic — must continuously advance their tools to identify AI-generated images.

NVIDIA is furthering this effort by collaborating with researchers to support the development and testing of detector algorithms on our state-of-the-art image-generation models.

By crafting a dataset of highly realistic images with StyleGAN3 — our latest, state-of-the-art media generation algorithm — NVIDIA provided crucial information to researchers testing how well their detector algorithms work when tested on AI-generated images created by previously unseen techniques. These detectors help experts identify and analyze synthetic images to combat visual misinformation.

At this week’s NVIDIA GTC, this work was shared in a session titled “Alias-Free Generative Adversarial Networks,” which provided an overview of StyleGAN3. To watch it on demand, register free for GTC.

“This has been a unique situation in that people doing image generation detection have worked closely with the people at NVIDIA doing image generation,” said Edward Delp, a professor at Purdue University and principal investigator of one of the research teams. “This collaboration with NVIDIA has allowed us to build even better and more robust detectors. The ‘early access’ approach used by NVIDIA is an excellent way to further forensics research.”

Advancing Media Forensics With StyleGAN3 Images

When researchers know the underlying code or neural network of an image-generation technique, developing a detector that can identify images created by that AI model is a comparatively straightforward task.

It’s more challenging — and useful — to build a detector that can spot images generated by brand-new AI models.

StyleGAN3, a model developed by NVIDIA Research that will be presented at the NeurIPS 2021 AI conference in December, advances the state of the art in generative adversarial networks used to synthesize images. The breakthrough brings graphics principles in signal processing and image processing to GANs to avoid aliasing: a kind of image corruption often visible when images are rotated, scaled or translated.

NVIDIA researchers developed StyleGAN3 using a publicly released dataset of 70,000 images. Another 27,000 unreleased images from that collection, alongside AI-generated images from StyleGAN3, were shared with forensic research collaborators as a test dataset.

The collaboration with researchers enabled the community to assess how a diversity of different detector approaches performs in identifying images synthesized by StyleGAN3 — before the generator’s code was publicly released.

These detectors work in many different ways: Some may look for telltale correlations among groups of pixels produced by the neural network, while others might look for inconsistencies or asymmetries that give away synthetic images. Yet others attempt to reverse engineer the synthesis approach to estimate if a particular neural network could have created the image.
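
As a concrete, deliberately simplified example of the first family — learning pixel-level cues directly from labeled data — a detector can be as plain as a small binary classifier. The sketch below is generic PyTorch and is not one of the detectors evaluated in this work:

```python
import torch
import torch.nn as nn

# Toy binary classifier over RGB images: real (0) vs. GAN-generated (1).
detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),                            # one logit: likelihood the image is synthetic
)

images = torch.rand(8, 3, 256, 256)              # stand-in batch of images
labels = torch.randint(0, 2, (8, 1)).float()     # 0 = real, 1 = synthetic
loss = nn.BCEWithLogitsLoss()(detector(images), labels)
loss.backward()                                  # gradients for one training step
```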

One of these detectors, GAN-Scanner, reaches up to 95 percent accuracy in identifying synthetic images generated with StyleGAN3, despite never having seen an image created by that model during training. Another detector, created by Politecnico di Milano, achieves an area under the curve of .999 (where a perfect classifier would achieve an AUC of 1.0).
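
The reported numbers are standard binary-classification metrics computed over detector scores; for instance, the area under the ROC curve can be calculated from a set of scores with scikit-learn (the scores below are toy, made-up values):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

y_true  = np.array([0, 0, 0, 0, 1, 1, 1, 1])                      # 0 = real, 1 = synthetic
y_score = np.array([0.02, 0.10, 0.35, 0.08, 0.91, 0.88, 0.20, 0.97])

print(roc_auc_score(y_true, y_score))   # 0.9375 for this toy set; 1.0 is a perfect ranking
```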

Our work with researchers on StyleGAN3 showcases and supports the important, cutting-edge research done by media forensics groups. We hope it inspires others in the image-synthesis research community to participate in forensics research as well.

Source code for NVIDIA StyleGAN3 is available on GitHub, as well as results and links for the detector collaboration discussed here. The paper behind the research can be found on arXiv.

The GAN detector collaboration is part of Semantic Forensics (SemaFor), a program focused on forensic analysis of media organized by DARPA, the U.S. federal agency for technology research and development.

To learn more about the latest in AI research, watch NVIDIA CEO Jensen Huang’s keynote presentation at GTC below.
