With today’s beta launch on ChromeOS, Chromebooks now wield the power to play PC games using GeForce NOW.
Chromebook users join the millions on PC, Mac, SHIELD and Android mobile devices already playing their favorite games on our cloud gaming service with GeForce performance.
Getting started is simple. Head to play.geforcenow.com and log in with your GeForce NOW account. Signing up is easy: just choose either a paid Founders membership or a free account.
Right now is a great time to join. We just launched a six-month Founders membership that includes a Hyper Scape Season One Battle Pass token and exclusive Hyper Scape in-game content for $24.95. That’s a $64.94 value.
Once logged in, you’re only a couple clicks away from streaming a massive catalog of games. For the best experience, you’ll want to make those clicks with a USB mouse.
Distance Learning by Day, Distance Gaming by Night
Some students are heading back to school. Others are distance learning from home. However they’re learning, more students than ever rely on Chromebooks.
That’s because Chromebooks are great computers for studying. They’re fast, simple and secure devices that help you stay productive and connected.
Now, those same Chromebooks transform, instantly, into GeForce-powered distance gaming rigs, thanks to GeForce NOW.
Your Games on All Your Devices
Millions of GeForce NOW members play with and against their friends — no matter which platform they’re streaming on, whether that’s PC, Mac, Android or, now, Chromebooks.
That’s because when you stream games using GeForce NOW, you’re playing the PC version from digital stores like Steam, Epic Games Store and Ubisoft Uplay.
This is great for developers, who can bring their games to the cloud at launch, without adding development cycles.
And it’s great for the millions of GeForce NOW members. They’re tapping into an existing ecosystem anytime they stream one of more than 650 games instantly. That includes over 70 of the most-played free-to-play games.
When games like CD Projekt Red’s Cyberpunk 2077 come out later this year, members will be able to play them on their Chromebooks the same day, streamed from GeForce NOW servers.
Anywhere You Go
Chromebooks, of course, are lightweight devices that go where you do. From home to work to school. Or from your bedroom to the living room.
GeForce NOW is the perfect Chromebook companion. Simply plug in a mouse and go. Our beta release gives Chromebook owners the power to play their favorite PC games.
Take game progress or character level-ups from a desktop to a phone and then onto Chromebook. You’re playing the games you own from your digital game store accounts. So your progress goes with you.
More PC Gaming Features Heading to the Cloud
The heart of GeForce NOW is PC gaming. We continue to tap into the PC ecosystem to bring more PC features to the cloud.
PC gamers are intimately familiar with Steam. Many have massive libraries from the popular PC game store. To support them, we just launched Steam Game Sync so they can sync games from their Steam library with their library in GeForce NOW. It’s quickly become one of our most popular features for members playing on PC and Mac.
Soon, Chromebook owners will be able to take advantage of the feature, too.
Over the past few months, we’ve added two GeForce Experience features. Highlights delivers automatic video capture so you can share your best moments, and Freestyle provides gamers the ability to customize a game’s look. In the weeks ahead, we’ll add support for Ansel — a powerful in-game camera that lets gamers capture professional-grade screenshots. These features are currently only available on PC and Mac. Look for them to come to Chromebooks in future updates.
More games. More platforms. Legendary GeForce performance. And now on Chromebooks. That’s the power to play that only GeForce NOW can deliver.
Alex Schepelmann went from being a teacher’s assistant for an Intro to Programming class to educating 40,000 YouTube subscribers by championing the mantra: anyone can make something super using AI and machine learning.
His YouTube channel, Super Make Something, posts two types of videos. “Basics” videos provide in-depth explanations of technologies and their methods, using fun, understandable lingo. “Project” videos let viewers follow along with instructions for creating a product.
About the Maker
Schepelmann got a B.S. and M.S. in mechanical engineering from Case Western Reserve University and a Ph.D. in robotics from Carnegie Mellon University. His master’s thesis focused on employing computer vision to identify grass and obstacles in a camera stream, and he was part of a team that created an award-winning autonomous lawnmower.
Now, he’s a technical fellow for an engineering consulting firm and an aerospace contractor supporting various robotics projects in partnership with NASA. In his free time, he creates content for his channel, based out of his home in Cleveland.
As an undergrad, Schepelmann watched classmates struggle with the introductory programming class because the assignments didn’t relate to their everyday lives. So, when he taught the class as a grad student, he introduced fun projects, like coding a Tamagotchi digital pet.
His aim was to help students realize that choosing topics they’re interested in can make learning easy and enjoyable. Schepelmann later heard from one of his students, an art history major, that his class had inspired her to add a computer science minor to her degree.
“Since then, I’ve thought it was great to introduce these topics to people who might never have considered them or felt that they were too hard,” he said. “I want to show people that AI can be really fun and easy to learn. With YouTube, it’s now possible to reach an audience of any background or age range on a large scale.”
Schepelmann’s YouTube channel started as a hobby during his years at Carnegie Mellon. It’s grown to reach 2.1 million total views on videos explaining 3D printing, robotics and machine learning, including how to use the NVIDIA Jetson platform to train AI models.
His Favorite Jetson Projects
“It’s super, super easy to use the NVIDIA Jetson products,” said Schepelmann. “It’s a great machine learning platform and an inexpensive way for people to learn AI and experiment with computationally intensive applications.”
To show viewers exactly how, he’s created two Jetson-based tutorials:
Machine Learning 101: Intro to Neural Networks – Schepelmann dives into what neural networks are and walks through how to set up the NVIDIA Jetson Nano developer kit to train a neural network model from scratch.
Machine Learning 101: Naive Bayes Classifier – Schepelmann explains how the probabilistic classifier can be used for image processing and speech recognition applications, using the NVIDIA Jetson Xavier NX developer kit to demonstrate.
The creator has released the full code used in both tutorials on his GitHub site for anyone to explore.
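For readers who want a feel for what the second tutorial covers, here is a minimal Gaussian Naive Bayes classifier written from scratch in NumPy. It is only an illustrative sketch on synthetic 2D data, not code from Schepelmann’s tutorials or GitHub repository.

```python
import numpy as np

class GaussianNaiveBayes:
    """Minimal Gaussian Naive Bayes: fit per-class feature means and variances,
    then pick the class with the highest posterior log-probability."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.vars_ = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes_])
        self.priors_ = np.array([np.mean(y == c) for c in self.classes_])
        return self

    def predict(self, X):
        # log P(x|c), treating each feature as an independent Gaussian
        log_likelihood = -0.5 * (
            np.log(2 * np.pi * self.vars_)[None, :, :]
            + (X[:, None, :] - self.means_[None, :, :]) ** 2 / self.vars_[None, :, :]
        ).sum(axis=2)
        log_posterior = log_likelihood + np.log(self.priors_)[None, :]
        return self.classes_[np.argmax(log_posterior, axis=1)]

# Toy example: two synthetic 2D feature clusters standing in for real image features.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
model = GaussianNaiveBayes().fit(X, y)
print((model.predict(X) == y).mean())  # training accuracy on the toy data
```

The same fit-then-score pattern extends to real feature vectors extracted from images or audio, which is how a probabilistic classifier like this gets applied to image processing and speech recognition tasks.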
Where to Learn More
To make something super with Super Make Something, visit Schepelmann’s YouTube channel.
NVIDIA’s enterprise partner program has grown to more than 1,500 members worldwide and added new resources to boost opportunities for training, collaboration and sales.
The expanded NVIDIA Partner Network boasts an array of companies that span the globe and help customers across a variety of needs, from high performance computing to systems integration.
The NPN has seen exponential growth over the past two years, and these new program enhancements enable future expansion as Mellanox and Cumulus partner programs are set to be integrated into NPN throughout 2021.
Mellanox and Cumulus bring strong partners into the NVIDIA fold. Focused on enterprise data center markets, they provide accelerated, disaggregated and software-defined networking to meet the rapid growth in AI, cloud and HPC.
In anticipation of this growth, the NPN has introduced educational opportunities, tools and resources for training and collaboration, as well as added sales incentives. These benefits include:
Industry-Specific Training Curriculums: New courses and enablement tools in healthcare, higher education and research, financial services and insurance, and retail. Additional courses in energy and telco are coming next year.
NPN Learning Maps: These dramatically reduce the time partners need to get up and running. Partners can discover and build their NVIDIA learning matrix based on industry and cross-referenced by role, including sales, solution architect or data scientist.
New tools and resources:
AI Consulting Network: New AI consulting services for data scientists and solution architects who are part of our Service Delivery Partner-Professional Services program to help build and deploy HPC and AI solutions.
Enhanced NPN Partner Portal: Expanded to allow access to the vast storehouse of NVIDIA-built sales tools and data, including partner rebate history and registered opportunities. The simplified portal gives partners increased visibility and easy access to the information required to quickly track sales and build accurate forecasts.
Industry-Specific Marketing Campaigns: Provides partners with the opportunity to build campaigns that more accurately target customers with content built from data-driven insights.
A fixed backend rebate for Elite-level Solution Provider and Solutions Integration partners for compute, compute DGX, visualization and virtualization.
An enhanced quarterly performance bonus program, incorporating an annualized goal to better align with fluctuations in partner selling seasons.
Dedicated market development funds for Elite-level providers and integration partners for most competencies.
NPN expanded categories:
Solution advisors focused on storage solutions and mutual reference architectures
Federal government system integrators
The NVIDIA Partner Network is dedicated to supporting partners that deliver world-class products and services to customers. The NPN collaborates with hundreds of companies globally, across a range of businesses and competencies, to serve customers in HPC, AI and emerging high-growth areas such as visualization, edge computing and virtualization.
DGX SuperPODs are driving business results for companies like Continental in automotive, Lockheed Martin in aerospace and Microsoft in cloud-computing services.
Birth of an AI System
The story of how and why NVIDIA built Selene starts in 2015.
NVIDIA engineers started their first system-level design with two motivations. They wanted to build something both powerful enough to train the AI models their colleagues were building for autonomous vehicles and general purpose enough to serve the needs of any deep-learning researcher.
The result was the SATURNV cluster, born in 2016 and based on the NVIDIA Pascal GPU. When the more powerful NVIDIA Volta GPU debuted a year later, the budding systems group’s motivation and its designs expanded rapidly.
AI Jobs Grow Beyond the Accelerator
“We’re trying to anticipate what’s coming based on what we hear from researchers, building machines that serve multiple uses and have long lifetimes, packing as much processing, memory and storage as possible,” said Michael Houston, a chief architect who leads the systems team.
As early as 2017, “we were starting to see new apps drive the need for multi-node training, demanding very high-speed communications between systems and access to high-speed storage,” he said.
AI models were growing rapidly, requiring multiple GPUs to handle them. Workloads were demanding new computing styles, like model parallelism, to keep pace.
So, in fast succession, the team crafted ever larger clusters of V100-based NVIDIA DGX-2 systems, called DGX PODs. They used 32, then 64 DGX-2 nodes, culminating in a 96-node architecture dubbed the DGX SuperPOD.
They christened it Circe for the irresistible Greek goddess. It debuted in June 2019 at No. 22 on the TOP500 list of the world’s fastest supercomputers and currently holds No. 23.
Cutting Cables in a Computing Jungle
Along the way, the team learned lessons about networking, storage, power and thermals. Those learnings got baked into the latest NVIDIA DGX systems, reference architectures and today’s 280-node Selene.
In the race through ever larger clusters to get to Circe, some lessons were hard won.
“We tore everything out twice, we literally cut the cables out. It was the fastest way forward, but it still had a lot of downtime and cost. So we vowed to never do that again and set ease of expansion and incremental deployment as a fundamental design principle,” said Houston.
The team redesigned the overall network to simplify assembling the system.
They defined modules of 20 nodes connected by relatively simple “thin switches.” Each of these so-called scalable units could be laid down, cookie-cutter style, turned on and tested before the next one was added.
The design let engineers specify set lengths of cables that could be bundled together with Velcro at the factory. Racks could be labeled and mapped, radically simplifying the process of filling them with dozens of systems.
Doubling Up on InfiniBand
Early on, the team learned to split up compute, storage and management fabrics into independent planes, spreading them across more, faster network-interface cards.
The number of NICs per GPU doubled to two, and so did their speeds, going from 100 Gbit per second InfiniBand in Circe to 200G HDR InfiniBand in Selene. Doubling both the card count and the link speed delivered a 4x increase in effective node bandwidth.
Likewise, memory and storage links grew in capacity and throughput to handle jobs with hot, warm and cold storage needs. Four storage tiers spanned 100 terabyte/second memory links to 100 Gbyte/s storage pools.
Power and thermals stayed within air-cooled limits. The default designs used 35kW racks typical in leased data centers, but they can stretch beyond 50kW for the most aggressive supercomputer centers and down to 7kW racks some telcos use.
Seeking the Big, Balanced System
The net result is a more balanced design that can handle today’s many different workloads. That flexibility also gives researchers the freedom to explore new directions in AI and high performance computing.
“To some extent HPC and AI both require max performance, but you have to look carefully at how you deliver that performance in terms of power, storage and networking as well as raw processing,” said Julie Bernauer, who leads an advanced development team that’s worked on all of NVIDIA’s large-scale systems.
In the best of times, it can take dozens of engineers a few months to assemble, test and commission a supercomputer-class system. NVIDIA had to get Selene running in a few weeks to participate in industry benchmarks and fulfill obligations to customers like Argonne.
And engineers had to stay well within public-health guidelines of the pandemic.
“We had skeleton crews with strict protocols to keep staff healthy,” said Bernauer.
“To unbox and rack systems, we used two-person teams that didn’t mix with the others — they even took vacation at the same time. And we did cabling with six-foot distances between people. That really changes how you build systems,” she said.
Even with the COVID restrictions, engineers racked up to 60 systems in a day, the maximum their loading dock could handle. Virtual log-ins let administrators validate cabling remotely, testing the 20-node modules as they were deployed.
Bernauer’s team put several layers of automation in place. That cut the need for people at the co-location facility where Selene was built, a block from NVIDIA’s Silicon Valley headquarters.
Slacking with a Supercomputer
Selene talks to staff over a Slack channel as if it were a co-worker, reporting loose cables and isolating malfunctioning hardware so the system can keep running.
“We don’t want to wake up in the night because the cluster has a problem,” Bernauer said.
It’s part of the automation customers can access if they follow the guidance in the DGX POD and SuperPOD architectures.
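The post doesn’t describe Selene’s chat integration in detail, but the general pattern is widely used: automated health checks post alerts to a Slack incoming webhook. The sketch below is a hypothetical example; the webhook URL, node telemetry fields and thresholds are placeholders, not NVIDIA’s actual tooling.

```python
import json
import urllib.request

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder URL

def post_to_slack(message: str) -> None:
    """Send a plain-text alert to a Slack channel via an incoming webhook."""
    payload = json.dumps({"text": message}).encode("utf-8")
    req = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

def check_node(node: dict) -> list:
    """Hypothetical health check: flag missing InfiniBand links or hot GPUs."""
    alerts = []
    if node["ib_links_up"] < node["ib_links_expected"]:
        alerts.append(f"{node['name']}: possible loose cable "
                      f"({node['ib_links_up']}/{node['ib_links_expected']} links up)")
    if node["max_gpu_temp_c"] > 85:
        alerts.append(f"{node['name']}: GPU over temperature ({node['max_gpu_temp_c']} C)")
    return alerts

# Example telemetry record; a real system would pull this from cluster monitoring.
node = {"name": "node-042", "ib_links_up": 7, "ib_links_expected": 8, "max_gpu_temp_c": 72}
for alert in check_node(node):
    post_to_slack(alert)
```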
Thanks to this approach, the University of Florida, for example, is expected to rack and power up a 140-node extension to its HiPerGator system, switching on the most powerful AI supercomputer in academia within as little as 10 days of receiving it.
As an added touch, the NVIDIA team bought a telepresence robot from Double Robotics so non-essential designers sheltering at home could maintain daily contact with Selene. Tongue-in-cheek, they dubbed it Trip, given early concerns that essential technicians on site might bump into it.
The fact that Trip is powered by an NVIDIA Jetson TX2 module was an added attraction for team members who imagined some day they might tinker with its programming.
Since late July, Trip’s been used regularly to let them virtually drive through Selene’s aisles, observing the system through the robot’s camera and microphone.
“Trip doesn’t replace a human operator, but if you are worried about something at 2 a.m., you can check it without driving to the data center,” she said.
Delivering HPC, AI Results at Scale
In the end, it’s all about the results, and they came fast.
In June, Selene hit No. 7 on the TOP500 list and No. 2 on the Green500 list of the most power-efficient systems. In July, it broke records in all eight system tests for AI training performance in the latest MLPerf benchmarks.
“The big surprise for me was how smoothly everything came up given we were using new processors and boards, and I credit all the testing along the way,” said Houston. “To get this machine up and do a bunch of hard back-to-back benchmarks gave the team a huge lift,” he added.
The work pre-testing NGC containers and HPC software for Argonne was even more gratifying. The lab is already hammering on hard problems in protein docking and quantum chemistry to shine a light on the coronavirus.
At the same time, NVIDIA’s own researchers are using Selene to train autonomous vehicles and refine conversational AI, nearing advances they’re expected to report soon. They are among more than a thousand jobs run, often simultaneously, on the system so far.
Meanwhile the team already has on the whiteboard ideas for what’s next. “Give performance-obsessed engineers enough horsepower and cables and they will figure out amazing things,” said Bernauer.
At top: An artist’s rendering of a portion of Selene.
As its evocative name suggests, Abyss Solutions is a company taking AI to places where humans can’t — or shouldn’t — go.
The brainchild of four University of Sydney scientists and engineers, the startup set out six years ago to improve the maintenance and observation of industrial equipment.
It began by developing advanced technology to inspect the most difficult-to-reach assets of urban water infrastructure systems, such as dams, reservoirs, canals, bridges and ship hulls. Later, it zeroed in on an industry that often operates literally in the dark: offshore oil and gas platforms.
A few years ago, Abyss CEO Nasir Ahsan and CTO Suchet Bargoti were demonstrating to a Houston-based platform operator the insights they could generate from the image data collected by its underwater Lantern Eye 3D camera. The camera’s sub-millimeter accuracy provides a “way to inspect objects as if you’re taking them out of water,” said Bargoti.
An employee of the operator interrupted the meeting to describe an ongoing problem the company was having with topside equipment that was decaying and couldn’t be repaired adequately. Once it was clear that Abyss could provide detailed insight into the problem and how to solve it, no more selling was needed.
“Every one of these companies is dreading the next Deepwater Horizon,” said Bargoti, referencing the 2010 incident in which BP spilled nearly 5 million barrels of oil into the Gulf of Mexico, killing 11 people and countless wildlife, and costing the company $65 billion in cleanup costs and fines. “What they wanted to know is, ‘Will your data analytics help us understand what to fix and when to fix it?’”
Today, Abyss’s combination of NVIDIA GPU-powered deep learning algorithms, unmanned vehicles and innovative underwater cameras is enabling platform operators to spot faults and anomalies such as corrosion on equipment above and below the water, and to address them before the equipment fails, potentially saving millions of dollars and even a few human lives.
During the COVID-19 pandemic, the stakes have risen. Offshore rigs have emerged as hotbeds for the spread of the virus, forcing them to adopt strict quarantine procedures that limit the number of people onsite in order to reduce the disease’s spread and minimize interruptions.
Essentially, this has sped up the industry’s digital transformation push and fueled the urgency of Abyss’ work, said Bargoti. “They can’t afford to have these things happening,” he said.
Better Than Human Performance
Historically, inspection and maintenance of offshore platforms and equipment has been a costly, time-consuming and labor-intensive task for oil and gas companies. It often yields subjective findings that can lead to missed repairs and unplanned shutdowns.
An independent audit found that Abyss’s semantic segmentation models detect general corrosion with greater than 90 percent accuracy and severe corrosion with greater than 97 percent accuracy. Both figures are significant improvements over human inspection, and both outperformed the other AI companies evaluated in the audit.
What’s more, Abyss says that its oil and gas platform clients report reductions in operating costs by as much as 25 percent thanks to its technology.
Training of Abyss’s models, which rely on many terabytes of data (each platform generates about 1TB a day), occurs on AWS instances running NVIDIA T4 Tensor Core GPUs. The company also uses the latest versions of CUDA and cuDNN in conjunction with TensorFlow to power deep learning applications such as image and video segmentation and classification, and object detection.
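Abyss hasn’t published its model code, but the general shape of a TensorFlow/Keras semantic segmentation setup like the one described might look roughly like the sketch below. The layer sizes, class labels and placeholder data are illustrative assumptions, not the company’s implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 3  # e.g. background, general corrosion, severe corrosion (assumed labels)

def tiny_segmentation_model(input_shape=(256, 256, 3)):
    """A small encoder-decoder that predicts a per-pixel class map."""
    inputs = tf.keras.Input(shape=input_shape)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling2D()(x)                       # 128x128
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D()(x)                       # 64x64
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.UpSampling2D()(x)                       # 128x128
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.UpSampling2D()(x)                       # 256x256
    outputs = layers.Conv2D(NUM_CLASSES, 1, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)

model = tiny_segmentation_model()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",  # per-pixel integer labels
              metrics=["accuracy"])

# Placeholder data with the expected shapes; real training would stream imagery
# collected during platform inspections (about 1 TB per platform per day).
images = tf.random.uniform((8, 256, 256, 3))
masks = tf.random.uniform((8, 256, 256), maxval=NUM_CLASSES, dtype=tf.int32)
model.fit(images, masks, epochs=1, batch_size=4)
```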
Most of the data can be processed in the cloud because corrosion progresses slowly, but there are times when real-time AI is needed onsite, such as when a robotic vehicle needs to decide where to go next.
Taking Full Advantage of Inception
As a member of NVIDIA Inception, a program to help startups working in AI and data science get to market faster, Abyss has benefited from a try-before-you-buy approach to NVIDIA tech. That’s allowed it to experiment with technologies before making big investments.
It’s also getting valuable advice on what’s coming down the pipe and how to time its work with the release of new GPUs. Bargoti said NVIDIA’s regularly advancing technology is helping Abyss squeeze more data into each compute cycle, pushing it closer to its long-term vision.
“We want to be the intel in these unmanned systems that makes smart decisions and pushes the frontier of exploration,” said Bargoti. “It’s all leading to this better development of perception systems, better development of decision-making systems and better development of robotics systems.”
Abyss is taking a deep look at a number of additional markets it believes its technology can help. The team is taking on growth capital and rapidly expanding globally.
“Continuous investment in R&D and innovation plays a critical role in ensuring Abyss can provide game-changing solutions to the industry,” he said.
Testing for COVID-19 has become more widespread, but addressing the pandemic will require quickly screening for and triaging patients who are experiencing symptoms.
Lunit, a South Korean medical imaging startup — its name is a portmanteau of “learning unit” — has created an AI-based system to detect pneumonia, often present in COVID-19 infected patients, within seconds.
The Lunit INSIGHT CXR system, which is CE marked, uses AI to quickly detect 10 different radiological findings on chest X-rays, including pneumonia and potentially cancerous lung nodules.
It overlays the results onto the X-ray image along with a probability score for the finding. The system also monitors progression of a patient’s condition, automatically tracking changes within a series of chest X-ray images taken over time.
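Lunit doesn’t detail INSIGHT CXR’s architecture here, but the pattern it describes, one probability score per finding that can be reported alongside the image, can be sketched generically as a multi-label classifier. The finding names, backbone choice and sizes below are invented for illustration, not the product’s actual design.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Ten illustrative finding labels; the real product's list may differ.
FINDINGS = ["pneumonia", "nodule", "consolidation", "pneumothorax", "fibrosis",
            "atelectasis", "cardiomegaly", "pleural_effusion", "calcification",
            "mediastinal_widening"]

def cxr_classifier(input_shape=(320, 320, 3)):
    """Backbone plus sigmoid head: one independent probability per finding."""
    backbone = tf.keras.applications.ResNet50(
        include_top=False, weights="imagenet", input_shape=input_shape, pooling="avg")
    inputs = tf.keras.Input(shape=input_shape)
    x = backbone(inputs)
    outputs = layers.Dense(len(FINDINGS), activation="sigmoid")(x)
    return tf.keras.Model(inputs, outputs)

model = cxr_classifier()
model.compile(optimizer="adam", loss="binary_crossentropy")

# A placeholder image standing in for a preprocessed chest X-ray.
xray = tf.random.uniform((1, 320, 320, 3))
probs = model.predict(xray)[0]
for name, p in zip(FINDINGS, probs):
    print(f"{name}: {p:.2f}")  # probability score reported alongside each finding
```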
Lunit has recently partnered with GE Healthcare, which launched its Thoracic Care Suite using Lunit INSIGHT CXR’s AI algorithms to flag abnormalities on chest X-rays for radiologists’ review. It’s one of the first collaborations to bring AI from a medical startup to an existing X-ray equipment manufacturer, making AI-based solutions commercially available.
For integration of its algorithms with GE Healthcare and other partners’ products, Lunit’s hardware is powered by NVIDIA Quadro P1000 GPUs, and its AI model is optimized on the NVIDIA Jetson TX2i module. For cloud-based deployment, the company uses NVIDIA drivers and GPUs.
Lunit is a premier member of NVIDIA Inception, a program that helps startups with go-to-market support, expertise and technology. Brandon Suh, CEO of Lunit, said being an Inception partner “has helped position the company as a leader in state-of-the-art technology for social impact.”
AI Opens New Doors in Medicine
The beauty of AI, according to Suh, is its ability to process vast amounts of data and discover patterns — augmenting human ability, in terms of time and energy.
The founders of Lunit, he said, started with nothing but a “crazy obsession with technology” and a vision to use AI to “open a new door for medical practice with increased survival rates and more affordable costs.”
Initially, Lunit’s products were focused on detecting potentially cancerous nodules in a patient’s lungs or breasts, as well as analyzing pathology tissue slides. However, the COVID-19 outbreak provided an opportunity for the company to upgrade the algorithms being used to help alleviate the burdens of healthcare professionals on the frontlines of the pandemic.
“The definitive diagnosis for COVID-19 involves a polymerase chain reaction test to detect antigens, but the results take 1-2 days to be delivered,” said Suh. “In the meantime, the doctors are left without any clinical evidence that can help them make a decision on triaging the patients.”
With its newly refined algorithm, Lunit INSIGHT CXR can now single out pneumonia and identify it in a patient within seconds, helping doctors make immediate, actionable decisions for those in more urgent need of care.
The Lunit INSIGHT product line, which provides AI analysis for chest X-rays and mammograms, has been commercially deployed and tested in more than 130 sites in countries such as Brazil, France, Indonesia, Italy, Mexico, South Korea and Thailand.
“We feel fortunate to be able to play a part in the battle against COVID-19 with what we do best: developing medical AI solutions,” said Suh. “Though AI’s considered cutting-edge technology today, it could be a norm tomorrow, and we’d like everyone to benefit from a more accurate and efficient way of medical diagnosis and treatment.”
The team at Lunit is at work developing algorithms to use with 3D imaging, in addition to their current 2D ones. They’re also looking to create software that analyzes a tumor’s microenvironment to predict whether a patient would respond to immunotherapy.
Learn more about Lunit at NVIDIA’s healthcare AI startups solutions webinar on August 13. Register here.
Michael Kirk and Raphael Attie, scientists at NASA’s Goddard Space Flight Center, regularly face terabytes of data in their quest to analyze images of the sun.
This computational challenge, which could take a year or more on a CPU, has been reduced to less than a week on Quadro RTX data science workstations. Kirk and Attie spoke to AI Podcast host Noah Kravitz about the workflow they follow to study these images, and what they hope to find.
The lessons they’ve learned are useful for those in both science and industry grappling with how to best put torrents of data to work.
The researchers study images captured by telescopes on satellites, such as the Solar Dynamics Observatory spacecraft, as well as those from ground-based observatories.
They study these images to identify particles in Earth’s orbit that could damage interplanetary spacecraft, and to track solar surface flows, which allow them to develop models predicting weather in space.
Currently, these images are taken in space and sent to Earth for computation. But Kirk and Attie aim to shoot for the stars in the future: the goal is the ultimate form of edge computing, putting high-performance computers in space.
Key Points From This Episode:
The primary instrument that Kirk and Attie use to see images of the sun is the Solar Dynamics Observatory, a spacecraft that has four telescopes to take images of the extreme ultraviolet light of the sun, as well as an additional instrument to measure its magnetic fields.
Researchers such as Kirk and Attie have developed machine learning algorithms for a variety of projects, such as creating synthetic images of the sun’s surface and its flow fields.
“We take an image about once every 1.3 seconds of the sun … that entire data archive — we’re sitting at about 18 petabytes right now.” — Michael Kirk [6:50]
“What AI is really offering us is a way to crunch through terabytes of data that are very difficult to move back to Earth.” — Raphael Attie [34:34]
Creating a labeled dataset for training an AI application can hit the brakes on a company’s speed to market. Clarifai, an image and text recognition startup, aims to put that obstacle in the rearview mirror.
The New York City-based company today announced the general availability of its AI-assisted data labeling service, dubbed Clarifai Labeler. The company offers data labeling as a service as well.
Founded in 2013, Clarifai entered the image-recognition market in its early days. Since that time, the number of companies exploiting unstructured data for business advantages has swelled, creating a wave of demand for data scientists. And with industry disruption from image and text recognition spanning agriculture, retail, banking, construction, insurance and beyond, much is at stake.
“High-quality AI models start with high-quality dataset annotation. We’re able to use AI to make labeling data an order of magnitude faster than some of the traditional technologies out there,” said Alfredo Ramos, a senior vice president at Clarifai.
Backed by NVIDIA GPU Ventures, Clarifai is gaining traction in retail, banking and insurance, as well as with federal, state and local agencies, he said.
AI Labeling with Benefits
Clarifai’s Labeler shines at labeling video footage. The tool integrates a statistical method so that an annotated object — one with a bounding box around it — can be tracked as it moves throughout the video.
Since each second of video is made up of multiple frames of images, the tracking capabilities result in increased accuracy and huge improvements in the quantity of annotations per object, as well as a drastic reduction in the time to label large volumes of data.
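Clarifai hasn’t disclosed the exact statistical method behind Labeler’s tracking, but a common way to stretch a single labeled box across neighboring video frames is to match candidate boxes by overlap (intersection over union). The sketch below illustrates that generic idea with made-up box coordinates; it is not Clarifai’s algorithm.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def propagate_track(labeled_box, frames, threshold=0.5):
    """Carry one human-labeled box forward by matching the most-overlapping
    candidate box in each subsequent frame's detections."""
    track = [labeled_box]
    current = labeled_box
    for candidates in frames:  # each frame: a list of candidate boxes
        best = max(candidates, key=lambda c: iou(current, c), default=None)
        if best is None or iou(current, best) < threshold:
            break  # lost the object; a human can re-label from here
        track.append(best)
        current = best
    return track

# One annotated box in frame 0, then candidate detections for the next two frames.
frames = [[(102, 52, 202, 152), (400, 400, 450, 450)],
          [(105, 55, 205, 155)]]
print(propagate_track((100, 50, 200, 150), frames))
```

Because every second of video contains many frames, a single human-drawn box propagated this way multiplies into many annotations, which is where the speedup described above comes from.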
The new Labeler was most recently used to annotate days of video footage to build a model to detect whether people were wearing face masks, which resulted in a million annotations in less than four days.
Traditionally, it would’ve taken a human workforce six weeks to label the individual frames. With Labeler, the team created 1 million annotations about 10 times faster, said Ramos.
Ramos reports to one of AI’s academic champions. CEO and founder Matthew Zeiler took the industry by storm when his neural networks dominated the ImageNet Challenge in 2013. That became his launchpad for Clarifai.
Zeiler has since evolved his research into developer-friendly products that allow enterprises to quickly and easily integrate AI into their workflows and customer experiences. The company continues to attract new customers, most recently, with the release of its natural language processing product.
While much has changed in the industry, Clarifai’s focus on research hasn’t.
“We have a sizable team of researchers, and we have become adept at taking some of the best research out there in the academic world and very quickly deploying it for commercial use,” said Ramos.
Clarifai is a member of NVIDIA Inception, a virtual accelerator program that helps startups in AI and data science get to market faster.
Academic medical centers worldwide are building new AI tools to battle COVID-19 — including at Mass General, where one center is adopting NVIDIA DGX A100 AI systems to accelerate its work.
Researchers at the hospital’s Athinoula A. Martinos Center for Biomedical Imaging are working on models to segment and align multiple chest scans, calculate lung disease severity from X-ray images, and combine radiology data with other clinical variables to predict outcomes in COVID patients.
Built and tested using Mass General Brigham data, these models, once validated, could be used together in a hospital setting during and beyond the pandemic to bring radiology insights closer to the clinicians tracking patient progress and making treatment decisions.
“While helping hospitalists on the COVID-19 inpatient service, I realized that there’s a lot of information in radiologic images that’s not readily available to the folks making clinical decisions,” said Matthew D. Li, a radiology resident at Mass General and member of the Martinos Center’s QTIM Lab. “Using deep learning, we developed an algorithm to extract a lung disease severity score from chest X-rays that’s reproducible and scalable — something clinicians can track over time, along with other lab values like vital signs, pulse oximetry data and blood test results.”
The Martinos Center uses a variety of NVIDIA AI systems, including NVIDIA DGX-1, to accelerate its research. This summer, the center will install NVIDIA DGX A100 systems, each built with eight NVIDIA A100 Tensor Core GPUs and delivering 5 petaflops of AI performance.
“When we started working on COVID model development, it was all hands on deck. The quicker we could develop a model, the more immediately useful it would be,” said Jayashree Kalpathy-Cramer, director of the QTIM lab and the Center for Machine Learning at the Martinos Center. “If we didn’t have access to the sufficient computational resources, it would’ve been impossible to do.”
Comparing Notes: AI for Chest Imaging
COVID patients often get imaging studies — usually CT scans in Europe, and X-rays in the U.S. — to check for the disease’s impact on the lungs. Comparing a patient’s initial study with follow-ups can be a useful way to understand whether a patient is getting better or worse.
But segmenting and lining up two scans that have been taken in different body positions or from different angles, with distracting elements like wires in the image, is no easy feat.
Bruce Fischl, director of the Martinos Center’s Laboratory for Computational Neuroimaging, and Adrian Dalca, assistant professor in radiology at Harvard Medical School, took the underlying technology behind Dalca’s MRI comparison AI and applied it to chest X-rays, training the model on an NVIDIA DGX system.
“Radiologists spend a lot of time assessing if there is change or no change between two studies. This general technique can help with that,” Fischl said. “Our model labels 20 structures in a high-resolution X-ray and aligns them between two studies, taking less than a second for inference.”
This tool can be used in concert with Li and Kalpathy-Cramer’s research: a risk assessment model that analyzes a chest X-ray to assign a score for lung disease severity. The model can provide clinicians, researchers and infectious disease experts with a consistent, quantitative metric for lung impact, which is described subjectively in typical radiology reports.
Trained on a public dataset of over 150,000 chest X-rays, as well as a few hundred COVID-positive X-rays from Mass General, the severity score AI is being used for testing by four research groups at the hospital using the NVIDIA Clara Deploy SDK. Beyond the pandemic, the team plans to expand the model’s use to more conditions, like pulmonary edema, or wet lung.
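The severity-score model itself isn’t reproduced in this post; as a generic illustration of the approach (a pretrained image backbone with a small head that outputs a single severity number), here is a hypothetical Keras sketch. All layer choices, sizes and placeholder data are assumptions, not the Mass General model.

```python
import tensorflow as tf
from tensorflow.keras import layers

def severity_model(input_shape=(224, 224, 3)):
    """Generic severity-scoring network: pretrained backbone plus a regression head."""
    backbone = tf.keras.applications.DenseNet121(
        include_top=False, weights="imagenet", input_shape=input_shape, pooling="avg")
    inputs = tf.keras.Input(shape=input_shape)
    x = backbone(inputs)
    x = layers.Dropout(0.3)(x)
    score = layers.Dense(1, activation="relu", name="severity_score")(x)
    return tf.keras.Model(inputs, score)

model = severity_model()
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse")

# Placeholder batch standing in for preprocessed chest X-rays and clinician-assigned
# severity labels; real training would draw from the large public dataset plus the
# COVID-positive studies mentioned above.
images = tf.random.uniform((4, 224, 224, 3))
labels = tf.random.uniform((4, 1), maxval=10.0)
model.fit(images, labels, epochs=1)
```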
Foreseeing the Need for Ventilators
Chest imaging is just one variable in a COVID patient’s health. For the broader picture, the Martinos Center team is working with Brandon Westover, executive director of Mass General Brigham’s Clinical Data Animation Center.
Westover is developing AI models that predict clinical outcomes for both admitted patients and outpatient COVID cases, and Kalpathy-Cramer’s lung disease severity score could be integrated as one of the clinical variables for this tool.
The outpatient model analyzes 30 variables to create a risk score for each of hundreds of patients screened at the hospital network’s respiratory infection clinics — predicting the likelihood a patient will end up needing critical care or dying from COVID.
For patients already admitted to the hospital, a neural network predicts the hourly risk that a patient will require artificial breathing support in the next 12 hours, using variables including vital signs, age, pulse oximetry data and respiratory rate.
“These variables can be very subtle, but in combination can provide a pretty strong indication that a patient is getting worse,” Westover said. Running on an NVIDIA Quadro RTX 8000 GPU, the model is accessible through a front-end portal clinicians can use to see who’s most at risk, and which variables are contributing most to the risk score.
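Westover’s models aren’t published here either; the sketch below only illustrates the general recipe of mapping tabular clinical variables to a risk probability with a small neural network. The feature names and synthetic data are invented for the example.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Hypothetical clinical features; the real model uses roughly 30 such variables.
FEATURES = ["age", "resp_rate", "spo2", "heart_rate", "temp_c"]

def risk_model(num_features=len(FEATURES)):
    """Small MLP mapping tabular clinical variables to a 12-hour ventilation risk."""
    return tf.keras.Sequential([
        layers.Input(shape=(num_features,)),
        layers.Dense(32, activation="relu"),
        layers.Dense(16, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # risk score in [0, 1]
    ])

model = risk_model()
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])

# Synthetic stand-in data: one row of (pre-standardized) vitals per patient-hour,
# label = 1 if the patient needed breathing support within the following 12 hours.
X = np.random.rand(256, len(FEATURES)).astype("float32")
y = np.random.randint(0, 2, size=(256, 1))
model.fit(X, y, epochs=1, batch_size=32, verbose=0)
print(model.predict(X[:1]))  # hourly risk for one patient
```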
Better, Faster, Stronger: Research on NVIDIA DGX
Fischl says NVIDIA DGX systems help Martinos Center researchers more quickly iterate, experimenting with different ways to improve their AI algorithms. DGX A100, with NVIDIA A100 GPUs based on the NVIDIA Ampere architecture, will further speed the team’s work with third-generation Tensor Core technology.
“Quantitative differences make a qualitative difference,” he said. “I can imagine five ways to improve our algorithm, each of which would take seven hours of training. If I can turn those seven hours into just an hour, it makes the development cycle so much more efficient.”
“Having access to this high-capacity, high-speed storage will allow us to analyze raw multimodal data from our research MRI, PET and MEG scanners,” said Matthew Rosen, assistant professor in radiology at Harvard Medical School, who co-directs the Center for Machine Learning at the Martinos Center. “The VAST storage system, when linked with the new A100 GPUs, is going to offer an amazing opportunity to set a new standard for the future of intelligent imaging.”
To learn more about how AI and accelerated computing are helping healthcare institutions fight the pandemic, visit our COVID page.
Anyone can tell an eagle from an ostrich. It takes a skilled birdwatcher to tell a chipping sparrow from a house sparrow from an American tree sparrow.
Now researchers are using AI to take this to the next level — identifying individual birds.
André Ferreira, a Ph.D. student at France’s Centre for Functional and Evolutionary Ecology, harnessed an NVIDIA GeForce RTX 2070 to train a powerful AI that identifies individual birds within the same species, starting with the sociable weavers he studies.
It’s the latest example of how deep learning has become a powerful tool for wildlife biologists studying a wide range of animals.
Ferreira built his model using Keras, a popular open-source neural network library, running on a GeForce RTX 2070 GPU.
He then teamed up with researchers at Germany’s Max Planck Institute of Animal Behavior. Together, they adapted the model to identify wild great tits and captive zebra finches, two other widely studied bird species.
To train their models — a crucial step towards building any modern deep-learning-based AI — researchers made feeders equipped with cameras.
The researchers fitted birds with electronic tags, which triggered sensors in the feeders alerting researchers to the bird’s identity.
This data gave the model a “ground truth” that it could check against for accuracy.
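Ferreira’s actual code differs in its details, but the overall recipe, a convolutional classifier trained on cropped feeder images with tag-derived labels as ground truth, looks roughly like the sketch below. The directory layout, image size and layer choices are assumptions made for illustration.

```python
import tensorflow as tf
from tensorflow.keras import layers

IMG_SIZE = (128, 128)

# Assumed layout: one folder per individually tagged bird, e.g.
#   feeder_crops/bird_001/*.jpg, feeder_crops/bird_002/*.jpg, ...
# The folder names come from the electronic tags read at the feeder.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "feeder_crops", image_size=IMG_SIZE, batch_size=32)
num_birds = len(train_ds.class_names)

model = tf.keras.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(num_birds, activation="softmax"),  # one class per individual bird
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```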
The team’s AI was able to identify individual sociable weavers and wild great tits more than 90 percent of the time. And it identified captive zebra finches 87 percent of the time.
For bird researchers, the work promises several key benefits.
Using cameras and other sensors to track birds allows researchers to study bird behavior much less invasively.
With less need to put people in the field, the technique also lets researchers track bird behavior over longer periods.
Next: Ferreira and his colleagues are working to build AI that can recognize individual birds it has never seen before, and better track groups of birds.