Keeping a Watchful AI: NASA Project Aims to Predict Space Weather Events

While a thunderstorm could knock out your neighborhood’s power for a few hours, a solar storm could knock out electricity grids across the entire planet, causing damage that could take weeks to repair.

To try to predict solar storms — which are disturbances on the sun — and their potential effects on Earth, NASA’s Frontier Development Lab (FDL) is running what it calls a geoeffectiveness challenge.

It uses datasets of tracked changes in the magnetosphere — where the Earth’s magnetic field interacts with solar wind — to train AI-powered models that can detect patterns of space weather events and predict their Earth-related impacts.

Model training is optimized on NVIDIA GPUs available through Google Cloud, and data exploration is done with RAPIDS, NVIDIA’s open-source suite of software libraries built to execute data science and analytics pipelines entirely on GPUs.
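
To get a concrete feel for that workflow, here’s a minimal sketch of GPU-accelerated exploration with RAPIDS cuDF. The file name, column names and anomaly threshold are illustrative assumptions, not the FDL team’s actual data or pipeline.

```python
# Minimal sketch: explore per-second magnetometer readings on the GPU with
# RAPIDS cuDF. The file, columns (bx, by) and 50 nT threshold are hypothetical.
import cudf

# Load one-second-cadence readings straight into GPU memory.
df = cudf.read_csv("magnetometer_station_a.csv")

# Magnitude of the horizontal field perturbation.
df["b_h"] = (df["bx"] ** 2 + df["by"] ** 2) ** 0.5

# A one-minute rolling mean smooths sensor noise; large deviations from it
# flag candidate space weather disturbances.
df["b_h_smooth"] = df["b_h"].rolling(window=60).mean()
df["anomaly"] = (df["b_h"] - df["b_h_smooth"]).abs() > 50.0  # nanotesla

print(df[df["anomaly"]].head())
```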

Siddha Ganju, a solutions architect at NVIDIA who was named to Forbes’ 30 under 30 list in 2018, is advising NASA on the AI-related aspects of the challenge.

A deep learning expert, Ganju grew up going to hackathons. She says she’s always been fascinated by how an algorithm can read between the lines of code.

Now, she’s applying her knowledge to NVIDIA’s automotive and healthcare businesses, as well as to NASA’s AI technical steering committee. She’s also written a book on practical uses of deep learning, published last October.

Modeling Space Weather Impacts with AI

Ganju’s work with the FDL began in 2017, when its founder, James Parr, asked her to start advising the organization. Her current task, advising the geoeffectiveness challenge, seeks to use machine learning to characterize magnetic field perturbations and model the impact of space weather events.

In addition to solar storms, space weather events include solar flares, which are sudden flashes of increased brightness on the sun, and the solar wind, a stream of charged particles released from it.

Not all space weather events impact the Earth, said Ganju, but when one does, we need to be prepared. For example, a single powerful solar storm could knock out our planet’s telephone networks.

“Even if we’re able to predict the impact of an event just 15 minutes in advance, that gives us enough time to sound the alarm and prepare for potential connectivity loss,” said Ganju. “This data can also be useful for satellites to communicate in a better way.”

Exploring Spatial and Temporal Patterns

Solar events can impact parts of the Earth differently due to a variety of factors, Ganju said. With the help of machine learning, the FDL is trying to find spatial and temporal patterns of the effects.

“The datasets we’re working with are huge, since magnetometers collect data on the changes of a magnetic field at a particular location every second,” said Ganju. “Parallel processing using RAPIDS really accelerates our exploration.”

In addition to Ganju, researchers Asti Bhatt, Mark Cheung and Ryan McGranaghan, as well as NASA’s Lika Guhathakurta, are advising the geoeffectiveness challenge team. Its members include Téo Bloch, Banafsheh Ferdousi, Panos Tigas and Vishal Upendran.

The researchers use RAPIDS to explore the data quickly. Then, using the PyTorch and TensorFlow software libraries, they train models for experiments that identify how the latitude of a location, the atmosphere above it, or the way the sun’s rays hit it affects the consequences of a space weather event.
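
As a toy illustration of that kind of experiment, the PyTorch sketch below fits a small network mapping spatial and solar-wind features to a disturbance value. The feature set, network size and random stand-in data are assumptions for illustration, not the team’s actual models.

```python
# Illustrative PyTorch sketch: regress a local magnetic disturbance value
# from simple spatial/solar features. All data here is random stand-in data.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical features: [latitude, longitude, solar wind speed, density].
X = torch.rand(1024, 4)
y = torch.rand(1024, 1)  # stand-in for a measured perturbation

model = nn.Sequential(
    nn.Linear(4, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

device = "cuda" if torch.cuda.is_available() else "cpu"
model, X, y = model.to(device), X.to(device), y.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print(f"final training loss: {loss.item():.4f}")
```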

They’re also studying whether an impact on Earth occurs immediately as the space weather event happens or is delayed, since the effect could depend on time-related factors such as the Earth’s revolution around the sun or its rotation about its own axis.

To detect such patterns, the team will continue to train the model and analyze data throughout FDL’s eight-week research sprint, which concludes later this month.

Other FDL projects participating in the sprint, according to Ganju, include the Moon for Good challenge, which aims to discover the best landing sites on the moon. Another is the astronaut health challenge, which is investigating how high-radiation environments can affect an astronaut’s well-being.

The FDL is holding a virtual U.S. Space Science & AI showcase on August 14, where the 2020 challenges will be presented. Register for the event here.

Feature image courtesy of NASA.

Read More

Teen’s Gambit: 15-Year-Old Chess Master Puts Blundering Laptop in Check with Jetson Platform

Only 846 people in the world hold the title of Woman International Master of chess. Evelyn Zhu, age 15, is one of them.

A rising high school junior on Long Island, outside New York City, Zhu began playing chess competitively at age seven and has worked her way up to become one of the top players of her age.

Before COVID-19 limited in-person gatherings, Zhu typically spent two to three hours a day practicing online for an upcoming tournament — if only her laptop could keep up.

Chess engines like Leela Chess Zero — Zhu’s go-to practice partner, which recently won the 17th season of the Top Chess Engine Championship — use artificial neural network algorithms to mimic the human brain and make moves.

Taking full advantage of such algorithms requires serious processing power, so Zhu’s two-year-old laptop would often crash from overheating.

Zhu turned to the NVIDIA Jetson Xavier NX module to solve the issue. She connected the module to her laptop with a MicroUSB-to-USB cable and launched the engine on it. The engine ran smoothly. She also noted that doing the same with the NVIDIA Jetson AGX Xavier module doubled the speed at which the engine analyzed chess positions.

This solution is game-changing, said Zhu, as running Leela Chess Zero on her laptop allows her to improve her skills even while on the go.

AI-based chess engines allow players like Zhu to perform opening preparation, the process of figuring out new lines of moves to be made during the beginning stage of the game. Engines also help with game analysis, as they point out subtle mistakes that a player makes during gameplay.
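
For a sense of how that analysis works programmatically, here’s a minimal sketch that queries a UCI engine through the open-source python-chess library, assuming an lc0 (Leela Chess Zero) binary is installed and on the system path; the one-second time limit is an arbitrary choice.

```python
# Minimal sketch: ask a UCI chess engine for an evaluation via python-chess.
# Assumes an "lc0" binary (Leela Chess Zero) is available on PATH.
import chess
import chess.engine

board = chess.Board()
board.push_san("e4")  # 1. e4

with chess.engine.SimpleEngine.popen_uci("lc0") as engine:
    # Let the engine think for one second on this position.
    info = engine.analyse(board, chess.engine.Limit(time=1.0))
    print("score:", info["score"].white())  # evaluation from White's view
    print("line:", info.get("pv", [])[:3])  # first moves of the best line
```

Swapping in a different UCI engine is a one-line change to the binary name, which is part of what makes engines such flexible practice partners.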

Opening New Moves Between Chess and Computer Science

“My favorite thing about chess is the peace that comes from being deep in your thoughts when playing or studying a game,” said Zhu. “And getting to meet friends at various tournaments.”

One of her favorite memories is from the 2020 U.S. Team East Tournament, the last she competed at before the COVID-19 outbreak. Instead of the usual competition where one wins or loses as an individual, this was a tournament where players scored points for their teams by winning individual matches.

Zhu’s squad, comprising three other girls around her age, placed second out of 318 teams of all ages.

“Nobody expected that, especially because we were a young all-girls team,” she said. “It was so memorable.”

Besides chess, Zhu has a passion for computer science and hopes to study it in college.

“What excites me most about CS is that it’s so futuristic,” she said. “It seems like we’re making progress in AI on a daily basis, and I really think that it’s the route to advancing society.”

Working with the Jetson platform has opened up a pathway for Zhu to combine her passions for chess and AI. After she posted online instructions on how she supercharged her crashing laptop with NVIDIA technology, Zhu heard from people all around the world.

Her post even sparked discussion of chess in the context of AI, she said, showing her that there’s a global community interested in the topic.

Find out more about Zhu’s chess and tech endeavors.

Learn more about the Jetson platform.

Read More

Non-Stop Shopping: Startup’s AI Lets Supermarkets Skip the Line

Eli Gorovici loves to take friends sailing on the Mediterranean. As the new pilot of Trigo, a Tel Aviv-based startup, he’s inviting the whole retail industry on a cruise to a future with AI.

“We aim to bring the e-commerce experience into the brick-and-mortar supermarket,” said Gorovici, who joined the company as its chief business officer in May.

The journey starts with the sort of shopping anyone who’s waited in a long checkout line has longed for.

You fill up your bags at the market and just walk out. Magically, the store knows what you bought, bills your account and sends you a digital receipt, all while preserving your privacy.

Trigo is building that experience and more. Its magic is an AI engine linked to cameras and a few weighted shelves for small items a shopper’s hand might completely cover.

With these sensors, Trigo builds a 3D model of the store. Neural networks recognize products customers put in their bags.

When shoppers leave, the system sends the grocer the tally and a number it randomly associated with them when they chose to swipe their smartphone as they entered the store. The grocer matches the number with a shopper’s account, charges it and sends off a digital bill.
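
A minimal sketch of that privacy-preserving handoff might look like the following, in which the in-store system knows shoppers only by a one-time random token and the grocer’s backend resolves it to an account. The function names and data structures are hypothetical, not Trigo’s actual implementation.

```python
# Hypothetical sketch of the anonymized checkout handoff: the in-store AI
# sees only a random token; the grocer's backend maps tokens to accounts.
import secrets

accounts = {"alice@example.com": 0.0}  # grocer-side account balances
token_to_account = {}                  # grocer-side mapping, set at entry

def enter_store(account_id: str) -> str:
    """Shopper swipes a phone at entry; a one-time random token is issued."""
    token = secrets.token_hex(16)
    token_to_account[token] = account_id
    return token

def checkout(token: str, tally: float) -> None:
    """The store sends only (token, tally); the backend charges the account."""
    account_id = token_to_account.pop(token)
    accounts[account_id] += tally
    print(f"billed {account_id}: ${tally:.2f}")

token = enter_store("alice@example.com")
checkout(token, 42.17)  # the in-store system never learned who was shopping
```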

And that’s just the start.

An Online Experience in the Aisles

Shoppers get the same personalized recommendation systems they’re used to seeing online.

“If I’m standing in front of pasta, I may see on my handset a related coupon or a nice Italian recipe tailored for me,” said Gorovici. “There’s so much you can do with data, it’s mind blowing.”

The system lets stores fine-tune their inventory management systems in real time. Typical shrinkage rates from shoplifting or human error could sink to nearly zero.

AI Turns Images into Insights

Making magic is hard work. Trigo’s system gathers a petabyte of video data a day for an average-size supermarket.

It uses as many as four neural networks to process that data at mind-melting rates of up to a few hundred frames per second. (By contrast, your TV displays high-definition movies at 60 fps.)

Trigo used a dataset of up to 500,000 2D product images to train its neural networks. In daily operation, the system uses those models to run millions of inference tasks with help from NVIDIA TensorRT software.
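
As a rough sketch of what a TensorRT setup step can look like, the snippet below builds an inference engine from an exported ONNX classifier using the TensorRT 7-era Python API. The model file name and FP16 choice are illustrative assumptions, not Trigo’s actual pipeline.

```python
# Hedged sketch: build a TensorRT engine from an ONNX model (TensorRT 7-era
# Python API). "product_classifier.onnx" is a placeholder file name.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

builder = trt.Builder(TRT_LOGGER)
flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
network = builder.create_network(flags)
parser = trt.OnnxParser(network, TRT_LOGGER)

with open("product_classifier.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("failed to parse ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # half precision for higher throughput

engine = builder.build_engine(network, config)
with open("product_classifier.plan", "wb") as f:
    f.write(engine.serialize())  # reusable, optimized inference engine
```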

The AI work requires plenty of processing muscle. A supermarket outside London testing the Trigo system uses servers in its back room with 40-50 NVIDIA RTX GPUs. To boost efficiency, Trigo plans to deliver edge servers using NVIDIA T4 Tensor Core GPUs and join the NVIDIA Metropolis ecosystem starting next year.

Trigo got early access to the T4 GPUs thanks to its participation in NVIDIA Inception, a program that gives AI startups traction with tools, expertise and go-to-market support. The program also aims to introduce Trigo to NVIDIA’s retail partners in Europe.

In 2021, Trigo aims to move some of the GPU processing to Google, Microsoft and other cloud services, keeping some latency- or privacy-sensitive uses inside the store. It’s the kind of distributed architecture businesses are just starting to adopt, thanks in part to edge computing systems such as NVIDIA’s EGX platform.

Big Supermarkets Plug into AI

Tesco, the largest grocer in the U.K., has plans to open its first market using Trigo’s system. “We’ve vetted the main players in the industry and Trigo is the best by a mile,” said Tesco CEO Dave Lewis.

Israel’s largest grocer, Shufersal, also is piloting Trigo’s system, as are other retailers around the world.

Trigo was founded in 2018 by brothers Michael and Daniel Gabay, leveraging tech and operational experience from their time in elite units of the Israeli military.

Seeking his next big opportunity in his field of video technology, Gorovici asked friends who were venture capitalists for advice. “They said Trigo was the future of retail,” Gorovici said.

Like sailing in the aqua-blue Mediterranean, AI in retail is a compelling opportunity.

“It’s a trillion-dollar market — grocery stores are among the biggest employers in the world. They are all being digitized, and selling more online now given the pandemic, so maybe this next stage of digital innovation for retail will now move even faster,” he said.

Read More

Taking the Heat Off: AI Temperature Screening Aids Businesses Amid Pandemic

As businesses and schools consider reopening around the world, they’re taking safety precautions to mitigate the lingering threat of COVID-19 — often taking the temperature of each individual entering their facilities.

Fever is a common warning sign for the virus (and the seasonal flu), but manual temperature-taking with infrared thermometers takes time and requires workers stationed at a building’s entrances to collect temperature readings. AI solutions can speed the process and make it contactless, sending real-time alerts to facilities management teams when visitors with elevated temperatures are detected.

Central California-based IntelliSite Corp. and its recently acquired startup, Deep Vision AI, have developed a temperature screening application that can scan over 100 people a minute. Temperature readings are accurate within a tenth of a degree Celsius. And customers can get up and running with the app within a few hours, with an AI platform running on NVIDIA GPUs on premises or in the cloud for inference.

“Our software platform has multiple AI modules, including foot traffic counting and occupancy monitoring, as well as vehicle recognition,” said Agustin Caverzasi, co-founder of Deep Vision AI, and now president of IntelliSite’s AI business unit. “Adding temperature detection was a natural, easy step for us.”

The temperature screening tool has been deployed in several healthcare facilities and is being tested at U.S. airports, amusement parks and education facilities. Deep Vision is part of NVIDIA Inception, a program that helps startups working in AI and data science get to market faster.

“Deep Vision AI joined Inception at the very beginning, and our engineering and research teams received support with resources like GPUs for training,” Caverzasi said. “It was really helpful for our company’s initial development.”

COVID Risk or Coffee Cup? Building AI for Temperature Tracking

As the pandemic took hold, and social distancing became essential, Caverzasi’s team saw that the technology they’d spent years developing was more relevant than ever.

“The need to protect people from harmful viruses has never been greater,” he said. “With our preexisting AI modules, we can monitor in real time the occupancy levels in a store or a hospital’s waiting room, and trigger alerts before the maximum occupancy is reached in a given area.”

With governments and health organizations advising temperature checking, the startup applied its existing AI capabilities to thermal cameras for the first time. In doing so, they had to fine-tune the model so it wouldn’t be fooled by false positives — for example, when a person shows up red on a thermal camera because of their cup of hot coffee.

This AI model is paired with one of IntelliSite’s IoT solutions called human-based monitoring, or hBM. The hBM platform includes a hardware component: a mobile cart mounted with a thermal camera, monitor and Dell Precision tower workstation for inference. The temperature detection algorithms can now scan five people at the same time.

Double Quick: Faster, Easier Screening

The workstation uses the NVIDIA Quadro RTX 4000 GPU for real-time inference on thermal data from the live camera view. This reduces manual scanning time for healthcare customers by 80 percent, and drops the total cost of conducting temperature scans by 70 percent.

Facilities using hBM can also choose to access data remotely and monitor multiple sites, using either an on-premises Dell PowerEdge R740 server with NVIDIA T4 Tensor Core GPUs, or GPU resources through the IntelliSite Cloud Engine.

If businesses and hospitals are also taking a second temperature measurement with a thermometer, these readings can be logged in the hBM system, which can maintain records for over a million screenings. Facilities managers can configure alerts via text message or email when high temperatures are detected.
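
Conceptually, the alert path is a threshold check over a stream of logged screenings. The sketch below is an illustrative stand-in only: the threshold value, record format and notification hook are assumptions, not IntelliSite’s implementation.

```python
# Illustrative stand-in for elevated-temperature alerting over a stream of
# screenings. Threshold and notification channel are assumptions.
from dataclasses import dataclass
from datetime import datetime
from typing import List

FEVER_THRESHOLD_C = 38.0  # illustrative cutoff, not a clinical standard

@dataclass
class Screening:
    person_id: int
    temperature_c: float
    timestamp: datetime

log: List[Screening] = []  # hBM-style record of every screening

def notify(message: str) -> None:
    print("ALERT:", message)  # stand-in for the real text/email hook

def record(reading: Screening) -> None:
    log.append(reading)
    if reading.temperature_c >= FEVER_THRESHOLD_C:
        notify(f"person {reading.person_id}: "
               f"{reading.temperature_c:.1f} C at {reading.timestamp:%H:%M}")

record(Screening(1, 36.6, datetime.now()))
record(Screening(2, 38.4, datetime.now()))  # triggers an alert
```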

The Deep Vision developer team, based in Córdoba, Argentina, also had to adapt their AI models that use regular camera data to detect people wearing face masks. They use the NVIDIA Metropolis application framework for smart cities, including the NVIDIA DeepStream SDK for intelligent video analytics and NVIDIA TensorRT to accelerate inference.

Deep Vision and IntelliSite next plan to integrate the temperature screening AI with facial recognition models, so customers can use the application for employee registration once their temperature has been checked.

IntelliSite is a member of the NVIDIA Clara Guardian ecosystem, bringing edge AI to healthcare facilities. Visit our COVID page to explore how other startups are using AI and accelerated computing to fight the pandemic.

FDA disclaimer: Thermal measurements are designed as a triage tool and should not be the sole means of diagnosing high-risk individuals for any viral threat. Elevated thermal readings should be confirmed with a secondary, clinical-grade evaluation tool. FDA recommends screening individuals one at a time, not in groups.

Read More

HPE’s Jared Dame on How AI, Data Science Driving Demand for Powerful New Workstations

Smart phones, smart devices, the cloud — if it seems like AI is everywhere, that’s because it is.

That makes the powerful workstations that crunch the ever-growing quantities of data on which modern AI is built more essential than ever.

Jared Dame, Hewlett Packard Enterprise’s director of business development and strategy for AI, data science and edge technologies, spoke to AI Podcast host Noah Kravitz about the role HPE’s workstations play in cutting-edge AI and data science.

In the AI pipeline, Dame explained, workstations can do just about everything — from training to inference. The biggest demand for workstations is now coming from biopharmaceutical companies, the oil and gas industry and the federal government.

Key Points From This Episode:

  • Z by HP workstations feature hundreds of thousands of sensors that predict problems within a machine up to a month in advance, so customers don’t experience a loss of data or time.
  • The newest ZBook Studio, equipped with NVIDIA Quadro graphics, will launch this fall.

Tweetables:

“Z by HP is selling literally everywhere. Every vertical market does data science, every vertical market is adopting various types of AI.” — Jared Dame [5:47]

“We’re drinking our own Kool-Aid — we use our own machines. And we’re using the latest and greatest technologies, from CUDA and TensorFlow to traditional programming languages.” — Jared Dame [18:36]

You Might Also Like

Lenovo’s Mike Leach on the Role of the Workstation in Modern AI

Whether it’s the latest generation of AI-enabled mobile apps or robust business systems powered on banks of powerful servers, chances are the technology was built first on a workstation. Lenovo’s Mike Leach describes how these workhorses are adapting to support a plethora of new kinds of AI applications.

Serkan Piantino’s Company Makes AI for Everyone

Spell, founded by Serkan Piantino, is making machine learning as easy as ABC. Piantino, CEO of the New York-based startup, explained how he’s bringing compute power to those who don’t have easy access to GPU clusters.

SAS Chief Operating Officer Oliver Schabenberger

SAS Chief Operating Officer Oliver Schabenberger spoke about how organizations can use AI and related technologies.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.

Read More

It’s Not Pocket Science: Undergrads at Hackathon Create App to Evaluate At-Home Physical Therapy Exercises

The four undergrads met for the first time at the Stanford TreeHacks hackathon, became close friends, and developed an AI-powered app to help physical therapy patients ensure correct posture for their at-home exercises — all within 36 hours.

Back in February, just before the lockdown, Shachi Champaneri, Lilliana de Souza, Riley Howk and Deepa Marti happened to sit across from each other at the event’s introductory session and almost immediately decided to form a team for the competition.

Together, they created PocketPT, an app that lets users know whether they’re completing a physical therapy exercise with the correct posture and form. It captured two prizes against a crowded field, and inspired them to continue using AI to help others.

The app’s AI model uses the NVIDIA Jetson Nano developer kit to detect a user doing the tree pose, a position known to increase shoulder muscle strength and improve balance. The Jetson Nano performs image classification so the model can tell whether the pose is being done correctly, based on the more than 100 images it was trained on, which the team took of themselves. Then it provides feedback, letting users know if they should adjust their form.
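
A bare-bones version of that classification step might look like the PyTorch sketch below, which scores a single camera frame with a fine-tuned two-class network. The weights file, class labels and frame source are placeholders rather than the actual PocketPT code.

```python
# Bare-bones sketch of on-device pose classification, e.g. on a Jetson Nano.
# Weights file, labels and input frame are placeholders, not PocketPT's code.
import torch
import torchvision.transforms as T
from torchvision.models import resnet18
from PIL import Image

LABELS = ["correct_pose", "incorrect_pose"]  # hypothetical classes

model = resnet18(num_classes=2)
model.load_state_dict(torch.load("pocketpt_tree_pose.pth"))
model.eval().cuda()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

frame = Image.open("camera_frame.jpg")  # stand-in for a live camera capture
batch = preprocess(frame).unsqueeze(0).cuda()

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

verdict = LABELS[int(probs.argmax())]
print(f"{verdict} ({probs.max().item():.0%} confidence)")
```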

“It can be taxing for patients to go to the physical therapist often, both financially and physically,” said Howk.

Continuing exercises at home is a crucial part of recovery for physical therapy patients, but doing them incorrectly can actually hinder progress, she explained.

Bringing the Idea to Life

In the months leading up to the hackathon, Howk, a rising senior at the University of Alabama, was interning in Los Angeles, where a yoga studio is virtually on every corner. She’d arrived at the competition with the idea to create some kind of yoga app, but it wasn’t until the team came across the NVIDIA table at the hackathon’s sponsor fair that they realized the idea’s potential to expand and help those in need.

“A demo of the Jetson Nano displayed how the system can track bodily movement down to the joint,” said Marti, a rising sophomore at UC Davis. “That’s what sparked the possibility of making a physical therapy app, rather than limiting it to yoga.”

None of the team members had prior experience working with deep learning and computer vision, so they faced the challenge of learning how to implement the model in such a short period of time.

“The NVIDIA mentors were really helpful,” said Champaneri, a rising senior at UC Davis. “They put together a tutorial guide on how to use the Nano that gave us the right footing and outline to follow and implement the idea.”

Over the first night of the hackathon, the team took NVIDIA’s Deep Learning Institute course on getting started with AI on the Jetson Nano. Having grasped the basics of deep learning, they began hacking the next morning, training the model with images of themselves displaying correct versus incorrect exercise poses.

Just 36 hours after the idea first emerged, PocketPT was born.

Winning More Than Just Awards

The most exciting part of the weekend was finding out the team had made it to final pitches, according to Howk. They presented their project in front of a crowd of 500 and later found out that it had won the two prizes.

The hackathon attracted 197 projects. Competing against 65 other projects in the Medical Access category — many of which used cloud or other platforms — their project took home the category’s grand prize. It was also chosen as the “Best Use of Jetson Hack,” beating 11 other groups that borrowed a Jetson for their projects.

But the quartet is looking to do more with their app than win awards.

Because of the fast-paced nature of the hackathon, PocketPT was only able to fully implement one pose, with others still in the works. However, the team is committed to expanding the product and promoting their overall mission of making physical therapy easily accessible to all.

While the hackathon took place just before the COVID outbreak in the U.S., the team highlighted how their project seems to be all the more relevant now.

“We didn’t even realize we were developing something that would become the future, which is telemedicine,” said de Souza, a rising senior at Northwestern University. “We were creating an at-home version of PT, which is very much needed right now. It’s definitely worth our time to continue working on this project.”

Read about other Jetson projects on the Jetson community projects page and get acquainted with other developers on the Jetson forum page.

Learn how to get started on a Jetson project of your own on the Jetson developers page.

Read More

NVIDIA Breaks 16 AI Performance Records in Latest MLPerf Benchmarks

NVIDIA delivers the world’s fastest AI training performance among commercially available products, according to MLPerf benchmarks released today.

The A100 Tensor Core GPU demonstrated the fastest performance per accelerator on all eight MLPerf benchmarks. For overall fastest time to solution at scale, the DGX SuperPOD system, a massive cluster of DGX A100 systems connected with HDR InfiniBand, also set eight new performance milestones. The real winners are customers applying this performance today to transform their businesses faster and more cost effectively with AI.

This is the third consecutive and strongest showing for NVIDIA in training tests from MLPerf, an industry benchmarking group formed in May 2018. NVIDIA set six records in the first MLPerf training benchmarks in December 2018 and eight in July 2019.

NVIDIA set records in the category customers care about most: commercially available products. We ran tests using our latest NVIDIA Ampere architecture as well as our Volta architecture.

The NVIDIA DGX SuperPOD system set new milestones for AI training at scale.

NVIDIA was the only company to field commercially available products for all the tests. Most other submissions used the preview category for products that may not be available for several months or the research category for products not expected to be available for some time.

NVIDIA Ampere Ramps Up in Record Time

In addition to breaking performance records, the A100, the first processor based on the NVIDIA Ampere architecture, hit the market faster than any previous NVIDIA GPU. At launch, it powered NVIDIA’s third-generation DGX systems, and it became publicly available in a Google Cloud service just six weeks later.

Also helping meet the strong demand for A100 are the world’s leading cloud providers, such as Amazon Web Services, Baidu Cloud, Microsoft Azure and Tencent Cloud, as well as dozens of major server makers, including Dell Technologies, Hewlett Packard Enterprise, Inspur and Supermicro.

Users across the globe are applying the A100 to tackle the most complex challenges in AI, data science and scientific computing.

Some are enabling a new wave of recommendation systems or conversational AI applications while others power the quest for treatments for COVID-19. All are enjoying the greatest performance leap in eight generations of NVIDIA GPUs.

The NVIDIA Ampere architecture swept all eight tests of commercially available accelerators.

A 4x Performance Gain in 1.5 Years

The latest results demonstrate NVIDIA’s focus on continuously evolving an AI platform that spans processors, networking, software and systems.

For example, the tests show that, at equivalent throughput rates, today’s DGX A100 system delivers up to 4x the performance of the system that used V100 GPUs in the first round of MLPerf training tests. Meanwhile, the original DGX-1 system based on NVIDIA V100 can now deliver up to 2x higher performance thanks to the latest software optimizations.

These gains came in less than two years from innovations across the AI platform. Today’s NVIDIA A100 GPUs — coupled with software updates for CUDA-X libraries — power expanding clusters built with Mellanox HDR 200Gb/s InfiniBand networking.

HDR InfiniBand enables extremely low latencies and high data throughput, while offering smart deep learning computing acceleration engines via the scalable hierarchical aggregation and reduction protocol (SHARP) technology.

NVIDIA evolves its AI performance with new GPUs, software upgrades and expanding system designs.

NVIDIA Shines in Recommendation Systems, Conversational AI, Reinforcement Learning

The MLPerf benchmarks — backed by organizations including Amazon, Baidu, Facebook, Google, Harvard, Intel, Microsoft and Stanford — constantly evolve to remain relevant as AI itself evolves.

The latest benchmarks featured two new tests and one substantially revised test, all of which NVIDIA excelled in. One ranked performance in recommendation systems, an increasingly popular AI task; another tested conversational AI using BERT, one of the most complex neural network models in use today. Finally, the reinforcement learning test used MiniGo with the full-size 19×19 Go board; it was the most complex test in this round, involving diverse operations from game play to training.

Customers using NVIDIA AI for conversational AI and recommendation systems.

Companies are already reaping the benefits of this performance on these strategic applications of AI.

Alibaba hit a $38 billion sales record on Singles Day in November, using NVIDIA GPUs to deliver more than 100x more queries/second on its recommendation systems than CPUs. For its part, conversational AI is becoming the talk of the town, driving business results in industries from finance to healthcare.

NVIDIA is delivering both the performance needed to run these powerful jobs and the ease of use to embrace them.

Software Paves Strategic Paths to AI

In May, NVIDIA announced two application frameworks, Jarvis for conversational AI and Merlin for recommendation systems. Merlin includes the HugeCTR framework for training that powered the latest MLPerf results.

These are part of a growing family of application frameworks for markets including automotive (NVIDIA DRIVE), healthcare (Clara), robotics (Isaac) and retail/smart cities (Metropolis).

NVIDIA application frameworks simplify enterprise AI from development to deployment.

DGX SuperPOD Architecture Delivers Speed at Scale

NVIDIA ran MLPerf tests for systems on Selene, an internal cluster based on the DGX SuperPOD, its public reference architecture for large-scale GPU clusters that can be deployed in weeks. That architecture extends the design principles and best practices used in the DGX POD to serve the most challenging problems in AI today.

Selene recently debuted on the TOP500 list as the fastest industrial system in the U.S. with more than an exaflops of AI performance. It’s also the world’s second most power-efficient system on the Green500 list.

Customers are already using these reference architectures to build DGX PODs and DGX SuperPODs of their own. They include HiPerGator, the fastest academic AI supercomputer in the U.S., which the University of Florida will feature as the cornerstone of its cross-curriculum AI initiative.

Meanwhile, a top supercomputing center, Argonne National Laboratory, is using DGX A100 to find ways to fight COVID-19. Argonne was the first of a half-dozen high performance computing centers to adopt A100 GPUs.

Many users have adopted NVIDIA DGX PODs.

DGX SuperPODs are already driving business results for companies like Continental in automotive, Lockheed Martin in aerospace and Microsoft in cloud-computing services.

These systems are all up and running thanks in part to a broad ecosystem supporting NVIDIA GPUs and DGX systems.

Strong MLPerf Showing by NVIDIA Ecosystem

Of the nine companies submitting results, seven submitted with NVIDIA GPUs, including cloud service providers (Alibaba Cloud, Google Cloud, Tencent Cloud) and server makers (Dell, Fujitsu and Inspur), highlighting the strength of NVIDIA’s ecosystem.

Many partners leveraged the NVIDIA AI platform for MLPerf submissions.

Many of these partners used containers on NGC, NVIDIA’s software hub, along with publicly available frameworks for their submissions.

The MLPerf partners represent part of an ecosystem of nearly two dozen cloud-service providers and OEMs with products or plans for online instances, servers and PCIe cards using NVIDIA A100 GPUs.

Test-Proven Software Available on NGC Today

Much of the same software NVIDIA and its partners used for the latest MLPerf benchmarks is available to customers today on NGC.

NGC is host to several GPU-optimized containers, software scripts, pre-trained models and SDKs. They empower data scientists and developers to accelerate their AI workflows across popular frameworks such as TensorFlow and PyTorch.

Organizations are embracing containers to save time getting to business results that matter. In the end, that’s the most important benchmark of all.

Artist’s rendering at top: NVIDIA’s new DGX SuperPOD, built in less than a month and featuring more than 2,000 NVIDIA A100 GPUs, swept every MLPerf benchmark category for at-scale performance among commercially available products. 

Read More

Taiwanese Supercomputing Center Advances Real-Time Rendering from the Cloud with NVIDIA RTX Server and Quadro vDWS

As the stunning visual effects in movies and television advance, so do audience expectations for ever more spectacular and realistic imagery.

The National Center for High-performance Computing, home to Taiwan’s most powerful AI supercomputer, is helping video artists keep up with increasing industry demands.

NCHC delivers computing and networking platforms for filmmakers, content creators and artists. To provide them with high-quality, accelerated rendering and simulation services, the center needed some serious GPU power.

So it chose the NVIDIA RTX Server, including Quadro RTX 8000 and RTX 6000 GPUs and NVIDIA Quadro Virtual Data Center Workstation (Quadro vDWS) software, to bring accelerated rendering performance and real-time ray tracing to its customers.

NVIDIA GPUs and VDI: Driving Force Behind the Scenes

One of NCHC’s products, Render Farm, is built on NVIDIA Quadro RTX GPUs with Quadro vDWS software. It provides users with real-time rendering for high-resolution image processing.

A cloud computing platform, Render Farm enables users to rapidly render large 3D models. Its efficiency is stunning: it can reduce the time needed for opening files from nearly three hours to only three minutes.

“Last year, a team from Hollywood that reached out to us for visual effects production anticipated spending three days working on scenes,” said Chia-Chen Kuo, director of the Arts Technology Computing Division at NCHC. “But with the Render Farm computing platform, it only took one night to finish the work. That was far beyond their expectations.”

NCHC also aims to create a powerful cloud computing environment that can be accessed by anyone around the world. Quadro vDWS technology plays an important role in allowing teams to collaborate in this environment and makes its HPC resources widely available to the public.

With the rapid growth of data, physical hardware systems can’t keep up with data size and complexity. But Quadro vDWS technology makes it easy and convenient for anyone to securely access data and applications from anywhere, on any device.

Using virtual desktop infrastructure, NCHC’s Render Farm can provide up to 100 virtual workstations so users can do image processing at the same time. They only need a Wi-Fi or 4G connection to access the platform.

VMware vSphere and Horizon technology is integrated into Render Farm to provide on-demand virtual remote computing platform services. This virtualizes the HPC environment through NVIDIA virtual GPU technology and cuts the time required to redeploy the rendering environment by 10x. It also allows flexible switching between Windows and Linux operating systems.

High-Caliber Performance for High-Caliber Performers 

Over 200 video works have already been produced with NCHC’s technology services.

NCHC recently collaborated with acclaimed Taiwanese theater artist Huang Yi for one of his most popular productions, Huang Yi and KUKA. The project, which combined modern dance with visual arts and technology, was performed in over 70 locations worldwide, including the Cloud Gate Theater in northwest Taipei, the Ars Electronica Festival in Austria and the TED Conference in Vancouver.

During the program, Huang coordinated a dance with his robot companion KUKA, whose arm carried a camera that captured the dance movements. Those images were sent to the NCHC Render Farm in Taichung, 170 km away, processed in real time and projected back to the robot on stage — with less than one second of end-to-end latency.

“I wanted to thoroughly immerse audiences in the performance so they can sense the flow of emotions. This requires strong and stable computing power,” said Huang. “NCHC’s Render Farm, powered by NVIDIA GPUs and NVIDIA virtualization technology, provides everything we need to animate the robot: exceptional computing power, extremely low latency and the remote access that you can use whenever and wherever you are.”

LeaderTek, a 3D scanning and measurement company, also uses NCHC services for image processing. With 3D and cloud rendering technology, LeaderTek is helping the Taiwan government archive historic monuments through creating advanced digital spatial models.

“Adopting Render Farm’s cloud computing platform helps us take a huge leap forward in improving our workflows,” said Hank Huang, general manager at LeaderTek. “The robust computing capabilities with NVIDIA vGPU for Quadro Virtual Workstations are also crucial for us to deliver high-quality images in a timely manner and get things done efficiently.”

Watch Huang Yi’s performance with KUKA below. And learn more about NVIDIA Quadro RTX and NVIDIA vGPU.

Read More

Banking on AI: RBC Builds a DGX-Powered Private Cloud

Royal Bank of Canada built an NVIDIA DGX-powered cloud and tied it to a strategic investment in AI. Despite headwinds from the global pandemic, the platform will further enable RBC to transform client experiences.

The voyage started in the fall of 2017. That’s when RBC, Canada’s largest bank with 17 million clients in 36 countries, created its dedicated research institute, Borealis AI. The institute is headquartered next to Toronto’s MaRS Discovery District, a global hub for machine-learning experts.

Borealis AI quickly attracted dozens of top researchers. That’s no surprise given the institute is led by the bank’s chief science officer, Foteini Agrafioti, a patent-holding serial entrepreneur and Ph.D. in electrical and computer engineering who co-chairs Canada’s AI advisory council.

The bank initially ran Borealis AI on a mix of systems. But as the group and the AI models it developed grew, it needed a larger, dedicated AI engine.

Brokering a Private AI Cloud for Banking

“I had the good fortune to help commission our first infrastructure for Borealis AI, but it wasn’t adequate to meet our evolving AI needs,” said Mike Tardif, a senior vice president of tech infrastructure at RBC.

The team wanted a distributed AI system that would serve four locations, from Vancouver to Montreal, securely behind the bank’s firewall. It needed to scale as workloads grew and leverage the regular flow of AI innovations in open source software without requiring hardware upgrades to do so.

In short, the bank aimed to build a state-of-the-art private AI cloud. For its key planks, RBC chose six NVIDIA DGX systems and Red Hat’s OpenShift to orchestrate containers running on those systems.

“We see NVIDIA as a leader in AI infrastructure. We were already using its DGX systems and wanted to expand our AI capabilities, so it was an obvious choice,” said Tardif.

AI Steers Bank Toward Smart Apps

RBC is already reporting solid results with the system despite commissioning it early this year in the face of the oncoming COVID-19 storm.

The private AI cloud can run thousands of simulations and analyze millions of data points in a fraction of the time that it could before, the bank says. As a result, it expects to transform the customer banking experience with a new generation of smart applications. And that’s just the beginning.

“For instance, in our capital markets business we are now able to train thousands of statistical models in parallel to cover this vast space of possibilities,” said Agrafioti, head of Borealis AI.

“This would be impossible without a distributed and fully automated environment. We can populate the entire cluster with a single click using the automated pipeline that this new solution has delivered,” she added.

The platform has already helped reduce client calls and resulted in faster delivery of new applications for RBC clients, thanks to the performance of GPUs combined with the automation of orchestrated containers.

RBC deployed Red Hat OpenShift in combination with NVIDIA DGX infrastructure to spin up AI compute instances in a fraction of the time it used to take.

OpenShift helps by creating an environment where users can run thousands of containers simultaneously, extracting datasets to train AI models and run them in production on DGX systems, said Yan Fisher, a global evangelist for emerging technologies at Red Hat.

OpenShift and NGC, NVIDIA’s software hub, let the companies support the bank remotely through the pandemic, he added.

“Building our AI infrastructure with NVIDIA DGX has given us in-house capabilities similar to what the Amazons and Googles of the world offer and we’ve achieved some significant savings in total cost of ownership,” said Tardif.

He singled out as key hardware assets the NVLink interconnect and NVIDIA’s support for enterprise networking standards with maximum bandwidth and reduced latency. They let users quickly access multiple GPUs within and between systems across data centers that host the bank’s AI cloud.

How a Bank with a Long History Stays Innovative

Though it’s 150 years old, RBC keeps in tune with the times by investing early in emerging technologies, as it did with Borealis AI.

“Innovation is in our DNA — we’re always looking at what’s coming around the corner and how we can operationalize it, and AI is a top strategic priority,” said Tardif.

Although its main expertise is in banking, RBC has tech chops, too. During the COVID lockdown, it managed to “pressure test” the latest systems, pushing them well beyond what they thought were their limits.

“We’re co-creating this vision of AI infrastructure with NVIDIA, and through this journey we’re raising the bar for AI innovation which everyone in the financial services industry can benefit from,” Tardif said.

Visit NVIDIA’s financial services industry page to learn more.

Read More

Top Content Creation Applications Turn ‘RTX On’ for Faster Performance

Whether tackling complex visualization challenges or creating Hollywood-caliber visual effects, artists and designers require powerful hardware to create their best work.

The latest application releases from Foundry, Chaos Group and Redshift by Maxon provide advanced features powered by NVIDIA RTX so creators can experience faster ray tracing and accelerated performance to elevate any design workflow.

Foundry Delivers New Features in Modo and Nuke

Foundry recently hosted Foundry LIVE, a series of virtual events where they announced the latest enhancements to their leading content creation applications, including NVIDIA OptiX 7.1 support in Modo.

Modo is Foundry’s powerful and flexible 3D modeling, texturing and rendering toolset. By upgrading to OptiX 7.1 in the mPath renderer, Version 14.1 delivers faster rendering, denoising and real-time feedback with up to 2x the memory savings on the GPU for greater flexibility when working with complex scenes.

Earlier this week, the team announced Nuke 12.2, the latest version of Foundry’s compositing, editorial and review tools. Building on the recent release of Nuke 12.1, the NukeX Cara VR toolset for working with 360-degree video, as well as Nuke’s SphericalTransform and Bilateral nodes, takes advantage of new GPU-caching functionality to deliver significant improvements in viewer processing and rendering. The GPU-caching architecture is also available to developers creating custom GPU-accelerated tools using BlinkScript.

“Moving mPath to OptiX 7.1 dramatically reduces render times and memory usage, but the feature I’m particularly excited by is the addition of linear curves support, which now allows mPath to accelerate hair and fur rendering on the GPU,” said Allen Hastings, head of rendering at Foundry.

Image courtesy of Foundry; model supplied by Aaron Sims Creative.

NVIDIA Quadro RTX GPUs combined with Dell Precision workstations provide the performance, scalability and reliability to help artists and designers boost productivity and create amazing content faster than before. Learn more about how Foundry members in the U.S. can receive exclusive discounts and save on all Dell desktops, notebooks, servers, electronics and accessories.

Chaos Group Releases V-Ray 5 for Autodesk Maya

Chaos Group will soon release V-Ray 5 for Autodesk Maya, with a host of new GPU-accelerated features for lighting and materials.

Using LightMix in the new V-Ray Frame Buffer allows artists to freely experiment with lighting changes after they render, save out permutations and push back improvements in scenes. The new Layer Compositor allows users to fine-tune and finish images directly in the V-Ray frame buffer — without the need for a separate post-processing app.

“V-Ray 5 for Maya brings tremendous advancements for Maya artists wanting to improve their efficiency,” said Phillip Miller, vice president of product management at Chaos Group. “In addition, every new feature is supported equally by V-Ray GPU which can utilize RTX acceleration.”

A Nissan GTR rendered in V-Ray 5 for Maya. Image courtesy of Millergo CG.

V-Ray 5 also adds support for out-of-core geometry when rendering with NVIDIA CUDA, improving performance for artists and designers working with large scenes that can’t fit into GPU memory.

V-Ray 5 for Autodesk Maya will be generally available in early August.

Redshift Brings Faster Ray Tracing, Bigger Memory

Maxon hosted The 3D and Motion Design Show this week, where they demonstrated Redshift 3.0 with OptiX 7 ray-tracing acceleration and NVLink for both geometry and textures.

Additional features of Redshift 3.0 include:

  • General performance improvements of 30 percent or more
  • Automatic sampling so users no longer need to manually tweak sampling settings
  • Maxon shader noises for all supported 3D apps
  • Hydra/Solaris support
  • Deeper traces and nested shader blending for even more visually compelling shaders

“Redshift 3.0 incorporates NVIDIA technologies such as OptiX 7 and NVLink. OptiX 7 enables hardware ray tracing so our users can now render their scenes faster than ever. And NVLink allows the rendering of larger scenes with less or no out-of-core memory access — which also means faster render times,” said Panos Zompolas, CTO at Redshift Rendering Technologies. “The introduction of Hydra and Blender support means more artists can join the ever growing Redshift family and render their projects at an incredible speed and quality.”

Redshift 3.0 will soon introduce OSL and Blender support. It’s currently available to licensed customers, with general availability coming soon.

All registered participants of the 3D and Motion Design Show will be automatically entered for a chance to win an NVIDIA Quadro RTX GPU. See all prizes here.

Check out other RTX-accelerated applications that help professionals transform design workflows. And learn more about how RTX GPUs are powering high-performance NVIDIA Studio systems built to handle the most demanding creative workflows.

For developers looking to get the most out of RTX GPUs, learn more about integrating OptiX 7 into applications.


Featured blog image courtesy of Foundry.

Read More