Deep Learning on Tap: NVIDIA Engineer Turns to AI, GPU to Invent New Brew

Some dream of code. Others dream of beer. NVIDIA’s Eric Boucher does both at once, and the result couldn’t be more aptly named.

Full Nerd #1 is a crisp, light-bodied blonde ale perfect for summertime quaffing.

Eric, an engineer in the GPU systems software kernel driver team, went to sleep one night in May wrestling with two problems.

One, he needed to wring key information from the often cryptic logs of the systems he oversees, so his team could respond to issues faster.

The other: the veteran home brewer wanted a way to brew new kinds of beer.

“I woke up in the morning and I knew just what to do,” Boucher said. “Basically I got both done on one night’s broken sleep.”

Both solutions involved putting deep learning to work on an NVIDIA TITAN V GPU. Such powerful gear tends to encourage this sort of parallel processing, it seems.

Eric, a native of France now based near Sacramento, Calif., began homebrewing two decades ago, inspired by a friend and mentor at Sun Microsystems. He took a break from it when his children were first born.

Now that they’re older, he’s begun brewing again in earnest, using gear in both his garage and backyard, turning to AI for new recipes this spring.

Of course, AI has been used in the past to help humans analyze beer flavors, and even create wild new craft beer names. Eric’s project, however, is more ambitious, because it’s relying on AI to create new beer recipes.

You’ve Got Ale — GPU Speeds New Brew Ideas

For training data, Eric started with the all-grain ale recipes from MoreBeer, a hub for brewing enthusiasts, where he usually shops for recipe kits and ingredients.

Eric focused on ales because they’re relatively easy and quick to brew, and encompass a broad range of different styles, from hearty Irish stout to tangy and refreshing Kölsch.

He used wget — an open source program that retrieves content from the web — to save four index pages of ale recipes.

Then, using a Python script, he filtered the downloaded HTML pages and fetched the linked recipe PDFs. He converted those PDFs to plain text and used another Python script to interpret the text and generate recipes in a standardized format.
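
What might that filtering step look like? Here’s a minimal sketch, assuming the index pages were saved as ale_index_*.html and the recipe links end in .pdf; the file names and link pattern are illustrative, not Eric’s actual script.

```python
# Illustrative sketch, not Eric's actual script: scan saved index pages
# for links to recipe PDFs and download each one. The index file names
# and the href pattern are assumptions.
import pathlib
import re
import urllib.request

PDF_LINK = re.compile(r'href="(https?://[^"]+\.pdf)"')

out_dir = pathlib.Path("recipe_pdfs")
out_dir.mkdir(exist_ok=True)

for index_page in pathlib.Path(".").glob("ale_index_*.html"):
    html = index_page.read_text(errors="ignore")
    for url in PDF_LINK.findall(html):
        target = out_dir / url.rsplit("/", 1)[-1]
        if not target.exists():
            urllib.request.urlretrieve(url, target)
            print("saved", target)
```

From there, a command-line tool such as pdftotext can flatten each PDF before the second script normalizes the recipes.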

He fed these 108 recipes — including one for Russian River Brewing’s legendary Pliny the Elder IPA — to textgenrnn, a library built on a recurrent neural network, a type of neural network that processes a sequence of data and learns to predict what should come next.

And, because no one likes to wait for good beer, he ran it on an NVIDIA TITAN V GPU. Eric estimates it cut the time to learn from the recipe database from one hour and 45 minutes on a CPU alone to seven minutes.

After a little tuning, Eric generated 10 beer recipes. They ranged from dark stouts to yellowish ales, and in flavor from bitter to light.
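
That training-and-sampling loop is compact in code. Below is a hedged sketch; the corpus file name and hyperparameters are assumptions, and textgenrnn, which runs on TensorFlow, will use a GPU automatically when one is available.

```python
# Hedged sketch of training textgenrnn on the recipe corpus and sampling
# new recipes. The file name and hyperparameters are assumptions.
from textgenrnn import textgenrnn

textgen = textgenrnn()
# recipes.txt: the 108 recipes in their standardized plain-text format.
textgen.train_from_file("recipes.txt", new_model=True, num_epochs=20)

# Lower temperatures stay close to the training data; higher ones wander.
for recipe in textgen.generate(n=10, temperature=0.5, return_as_list=True):
    print(recipe)
    print("---")
```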

To Eric’s surprise, most looked reasonable (though a few were “plain weird and impossible to brew” like a recipe that instructed him to wait 45 days with hops in the wort, or unfermented beer, before adding the yeast).

Speed of Light (Beer)

With the approaching hot California summer in mind, Eric selected a blonde ale.

The recipe particularly intrigued him because it suggested adding Warrior, Cascade and Amarillo hops — the flowers of the herbaceous perennial Humulus lupulus that give good beer a range of flavors, from bitter to citrusy — on an “intriguing schedule.”

The result, Eric reports, was refreshing, “not too sweet, not too bitter,” with “a nice, fresh hops smell and a long, complex finish.”

He dubbed the result Full Nerd #1.

The AI-generated brew became the latest in a long line of brews with witty names Eric has produced, including a bourbon oak-infused beer named, appropriately enough, “The Groot Beer,” in honor of the tree-like creature from Marvel’s “Guardians of the Galaxy.”

Eric’s next AI brewing project: perhaps a dark stout, for winter, or a lager, a light, crisp beer that requires months of cold storage to mature.

For now, however, there’s plenty of good brew to drink. Perhaps too much. Eric usually shares his creations with his martial arts buddies. But with social distancing in place amidst the global COVID-19 pandemic, the five gallons, or forty pints, is more than the light drinker knows what to do with.

Eric, it seems, has found a problem deep learning can’t help him with. Bottoms up.

Fleet Dreams Are Made of These: TuSimple and Navistar to Build Autonomous Trucks Powered by NVIDIA DRIVE

Self-driving trucks are coming to an interstate near you.

Autonomous trucking startup TuSimple and truck maker Navistar recently announced they will build self-driving semi trucks, powered by the NVIDIA DRIVE AGX platform. The collaboration is one of the first to develop autonomous trucks, with production set to begin in 2024.

Over the past decade, self-driving truck developers have relied on traditional trucks retrofitted with the sensors, hardware and software necessary for autonomous driving. Building these trucks from the ground up, however, allows companies to custom-build them for the needs of a self-driving system, as well as to take advantage of the infrastructure of a mass-production truck manufacturer.

This transition is the first step from research to widespread deployment, said Chuck Price, chief product officer at TuSimple.

“Our technology, developed in partnership with NVIDIA, is ready to go to production with Navistar,” Price said. “This is a significant turning point for the industry.”

Tailor-Made Trucks

Developing a truck to drive on its own takes more than a software upgrade.

Autonomous driving relies on redundant and diverse deep neural networks, all running simultaneously to handle perception, planning and actuation. This requires massive amounts of compute.

The NVIDIA DRIVE AGX platform delivers high-performance, energy-efficient compute to enable AI-powered and autonomous driving capabilities. TuSimple has been using the platform in its test vehicles and pilots, such as its partnership with the United States Postal Service.

Building dedicated autonomous trucks makes it possible for TuSimple and Navistar to develop a centralized architecture optimized for the power and performance of the NVIDIA DRIVE AGX platform. The platform is also automotive grade, meaning it is built to withstand the wear and tear of years driving on interstate highways.

Invaluable Infrastructure

In addition to a customized architecture, developing an autonomous truck in partnership with a manufacturer opens up valuable infrastructure.

Truck makers like Navistar provide nationwide support for their fleets, with local service centers and vehicle tracking. This network is crucial for deploying self-driving trucks that will criss-cross the country on long-haul routes, providing seamless and convenient service to maintain efficiency.

TuSimple is also building out an HD map network of the nation’s highways for the routes its vehicles will travel. Combined with the widespread fleet management network, this infrastructure makes its autonomous trucks appealing to a wide variety of partners — UPS, U.S. Xpress, Penske Truck Leasing and food service supply chain company McLane Inc., a Berkshire Hathaway company, have all signed on to this autonomous freight network.

And backed by the performance of NVIDIA DRIVE AGX, these vehicles will continue to improve, delivering safer, more efficient logistics across the country.

“We’re really excited as we move into production to have a partner like NVIDIA with us the whole way,” Price said.

Stop the Bleeding: AI Startup Deep01 Helps Physicians Evaluate Brain Hemorrhage

During a stroke, a patient loses an estimated 1.9 million brain cells every minute, so interpreting their CT scan even one second quicker is vital to maintaining their health.

To save precious time, Taiwan-based medical imaging startup Deep01 has created AI-based medical imaging software, called DeepCT, to evaluate acute intracerebral hemorrhage (ICH), a type of stroke. The system works with 95 percent accuracy in just 30 seconds per case — about 10 times faster than competing methods.

Founded in 2016, Deep01 is the first AI company in Asia to have FDA clearances in both the U.S. and Taiwan. It’s a member of NVIDIA Inception, a program that helps startups develop, prototype and deploy their AI or data science technology and get to market faster.

The startup recently raised around $3 million for DeepCT, which detects suspected areas of bleeding around the brain and annotates where they’re located on CT scans, notifying physicians of the results.

The software was trained on 60,000 medical images displaying all types of acute ICH. Deep01 uses a self-developed deep learning framework that processes the images and trains the model on NVIDIA GPUs.

“Working with NVIDIA’s robust AI computing hardware, in addition to software frameworks like TensorFlow and PyTorch, allows us to deliver excellent AI inference performance,” said David Chou, founder and CEO of the company.

Making Quick Diagnosis Accessible and Affordable

Strokes are the world’s second-most common cause of death. When stroke patients are ushered into the emergency room, doctors must quickly determine whether the brain is bleeding and what the next steps for treatment should be.

However, many hospitals lack the manpower to perform such timely diagnoses, since only some emergency room doctors specialize in reading CT scans. That gap is why Deep01 was founded, according to Chou, with the mission of offering affordable AI-based solutions to medical institutions.

The 30-second speed with which DeepCT completes an interpretation can help medical practitioners prioritize the patients in most urgent need of treatment.

Helpful for Facilities of All Types and Sizes

DeepCT has helped doctors evaluate more than 5,000 brain scans and is being used in nine medical institutions in Taiwan, ranging from small hospitals to large-scale medical centers.

“The lack of radiologists is a big issue even in large-scale medical centers like the one I work at, especially during late-night shifts when fewer staff are on duty,” said Tseng-Lung Yang, senior radiologist at Kaohsiung Veterans General Hospital in Taiwan.

Geng-Wang Liaw, an emergency physician at Yeezen General Hospital — a smaller facility in Taiwan — agreed that Deep01’s technology helps relieve physical and mental burdens for doctors.

“Doctors in the emergency room may misdiagnose a CT scan at times,” he said. “Deep01’s solution stands by as an assistant 24/7, to give doctors confidence and reduce the possibility for medical error.”

Beyond ICH, Deep01 is working to expand its technology to identify midline shift, a pathological finding that occurs when there’s increased pressure on the brain and that is associated with higher mortality.

AI Explains AI: Fiddler Develops Model Explainability for Transparency

Your online loan application just got declined without explanation. Welcome to the AI black box.

Businesses of all stripes turn to AI for computerized decisions driven by data. Yet consumers using applications with AI get left in the dark on how automated decisions work. And many people working within companies have no idea how to explain the inner workings of AI to customers.

Fiddler Labs wants to change that.

The San Francisco-based startup offers an explainable AI platform that enables companies to explain, monitor and analyze their AI products.

Explainable AI is a growing area of interest for enterprises because those outside of engineering often need to understand how their AI models work.

Using explainable AI, banks can provide reasons to customers for a loan’s rejection, based on data points fed to models, such as maxed credit cards or high debt-to-income ratios. Internally, marketers can strategize about customers and products by knowing more about the data points that drive them.

“This is bridging the gap between hardcore data scientists who are building the models and the business teams using these models to make decisions,” said Anusha Sethuraman, head of product marketing at Fiddler Labs.

Fiddler Labs is a member of NVIDIA Inception, a program that provides companies working in AI and data science with fundamental tools, expertise and marketing support, and helps them get to market faster.

What Is Explainable AI?

Explainable AI is a set of tools and techniques that help explore the math inside an AI model. It can map out the data inputs and their weighted values that were used to arrive at the data output of the model.

All of this, essentially, enables a layperson to study the sausage factory at work inside an otherwise opaque process. The result is that explainable AI can help deliver insights into how and why a model made a particular decision.
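
Fiddler’s platform is proprietary, but the core idea can be illustrated with the open-source shap library: attribute a single prediction to the inputs that drove it. In this sketch the credit features, data and model are all invented for illustration.

```python
# Illustration only: attributing one applicant's rejection-risk score to
# individual inputs with the open-source shap library. The features,
# data and model are invented; this is not Fiddler's implementation.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
features = ["credit_utilization", "debt_to_income", "income"]
X = rng.random((500, 3))
y = 0.6 * X[:, 0] + 0.4 * X[:, 1]   # invented rule behind the score

model = RandomForestRegressor(n_estimators=100).fit(X, y)

explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:1])[0]   # one applicant
print(dict(zip(features, contributions)))         # per-feature push on the score
```

A customer service rep reading that output could point to the factor that pushed the score over the line, which is exactly the loan-rejection conversation described above.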

“There’s often a hurdle to get AI into production. Explainability is one of the things that we think can address this hurdle,” Sethuraman said.

With an ensemble of models often in use, creating these explanations is no easy job.

But Fiddler Labs CEO and co-founder Krishna Gade is up to the task. He previously led the team at Facebook that built the “Why am I seeing this post?” feature to help consumers and internal teams understand how its AI works in the Facebook news feed.

He and Amit Paka — a University of Minnesota classmate — joined forces and quit their jobs to start Fiddler Labs. Paka, the company’s chief product officer, was motivated by his experience at Samsung with shopping recommendation apps and the lack of understanding into how these AI recommendation models work.

Explainability for Transparency

Founded in 2018, Fiddler Labs offers explainability for greater transparency in businesses. It helps companies make better informed business decisions through a combination of data, explainable AI and human oversight, according to Sethuraman.

Fiddler’s tech is used by Hired, a talent and job matchmaking site driven by AI. Fiddler provides real-time reporting on how Hired’s AI models are working. It can generate explanations on candidate assessments and provide bias monitoring feedback, allowing Hired to assess its AI.

Explainable AI needs to be quickly available for consumer fintech applications. That enables customer service representatives to explain automated financial decisions — like loan rejections and robo rates — and build trust with transparency about the process.

The algorithms used for explanations require hefty processing. Sethuraman said that Fiddler Labs taps into NVIDIA cloud GPUs to make this possible, saying CPUs aren’t up to the task.

“You can’t wait 30 seconds for the explanations — you want explanations within milliseconds on a lot of different things depending on the use cases,” Sethuraman said.

Visit NVIDIA’s financial services industry page to learn more.

Image credit: Emily Morter, via the Unsplash Photo Community. 

Keeping a Watchful AI: NASA Project Aims to Predict Space Weather Events

While a thunderstorm could knock out your neighborhood’s power for a few hours, a solar storm could knock out electricity grids across all of Earth, possibly taking weeks to recover from.

To try to predict solar storms — which are disturbances on the sun — and their potential effects on Earth, NASA’s Frontier Development Lab (FDL) is running what it calls a geoeffectiveness challenge.

It uses datasets of tracked changes in the magnetosphere — where the Earth’s magnetic field interacts with solar wind — to train AI-powered models that can detect patterns of space weather events and predict their Earth-related impacts.

The training of the models is optimized on NVIDIA GPUs available on Google Cloud, and data exploration is done on RAPIDS, NVIDIA’s open-source suite of software libraries built to execute data science and analytics pipelines entirely on GPUs.

Siddha Ganju, a solutions architect at NVIDIA who was named to Forbes’ 30 under 30 list in 2018, is advising NASA on the AI-related aspects of the challenge.

A deep learning expert, Ganju grew up going to hackathons. She says she’s always been fascinated by how an algorithm can read between the lines of code.

Now, she’s applying her knowledge to NVIDIA’s automotive and healthcare businesses, as well as to NASA’s AI technical steering committee. She’s also written a book on practical uses of deep learning, published last October.

Modeling Space Weather Impacts with AI

Ganju’s work with the FDL began in 2017, when its founder, James Parr, asked her to start advising the organization. Her current task, advising the geoeffectiveness challenge, seeks to use machine learning to characterize magnetic field perturbations and model the impact of space weather events.

In addition to solar storms, space weather events can include such activities as solar flares, which are sudden flashes of increased brightness on the sun, and solar wind, a stream of charged particles released from it.

Not all space weather events affect the Earth, said Ganju, but when one does, we need to be prepared. For example, a single powerful solar storm could knock out our planet’s telephone networks.

“Even if we’re able to predict the impact of an event just 15 minutes in advance, that gives us enough time to sound the alarm and prepare for potential connectivity loss,” said Ganju. “This data can also be useful for satellites to communicate in a better way.”

Exploring Spatial and Temporal Patterns

Solar events can impact parts of the Earth differently due to a variety of factors, Ganju said. With the help of machine learning, the FDL is trying to find spatial and temporal patterns of the effects.

“The datasets we’re working with are huge, since magnetometers collect data on the changes of a magnetic field at a particular location every second,” said Ganju. “Parallel processing using RAPIDS really accelerates our exploration.”
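
A sketch of that kind of GPU-side exploration with cuDF, the RAPIDS dataframe library, might look like the following; the file name, columns and threshold are invented, but every call runs on the GPU with pandas-like syntax.

```python
# Hedged sketch of magnetometer-data exploration with RAPIDS cuDF.
# The file name, column names and threshold are invented.
import cudf

df = cudf.read_csv("magnetometer_readings.csv")
df["timestamp"] = cudf.to_datetime(df["timestamp"])

# Flag large field perturbations, then summarize them per station.
# Both steps execute entirely on the GPU.
disturbed = df[df["bx"].abs() > 100.0]   # illustrative threshold, in nanoteslas
summary = disturbed.groupby("station")[["bx", "by", "bz"]].mean()
print(summary.head())
```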

In addition to Ganju, researchers Asti Bhatt, Mark Cheung and Ryan McGranaghan, as well as NASA’s Lika Guhathakurta, are advising the geoeffectiveness challenge team. Its members include Téo Bloch, Banafsheh Ferdousi, Panos Tigas and Vishal Upendran.

The researchers use RAPIDS to explore the data quickly. Then, using the PyTorch and TensorFlow software libraries, they train the models for experiments to identify how the latitude of a location, the atmosphere above it, or the way sun rays hit it affect the consequences of a space weather event.

They’re also studying whether an earthly impact happens immediately as the space event occurs, or if it has a delayed effect, as an impact could depend on time-related factors, such as the Earth’s revolutions around the sun or its rotation about its own axis.

To detect such patterns, the team will continue to train the model and analyze data throughout the duration of FDL’s eight-week research sprint, which concludes later this month.

Other FDL projects participating in the sprint, according to Ganju, include the moon for good challenge, which aims to discover the best landing position on the moon. Another is the astronaut health challenge, which is investigating how high-radiation environments can affect an astronaut’s well-being.

The FDL is holding a virtual U.S. Space Science & AI showcase on August 14, where the 2020 challenges will be presented. Register for the event here.

Feature image courtesy of NASA.

Teen’s Gambit: 15-Year-Old Chess Master Puts Blundering Laptop in Check with Jetson Platform

Only 846 people in the world hold the title of Woman International Master of chess. Evelyn Zhu, age 15, is one of them.

A rising high school junior on Long Island, outside New York City, Zhu began playing chess competitively at the age of seven and has worked her way up to being one of the top players of her age.

Before COVID-19 limited in-person gatherings, Zhu typically spent two to three hours a day practicing online for an upcoming tournament — if only her laptop could keep up.

Chess engines like Leela Chess Zero — Zhu’s go-to practice partner, which recently beat all others at the 17th season of the Top Chess Engine Championship — use artificial neural network algorithms to mimic the human brain and make moves.

Taking full advantage of such algorithms requires a lot of processing power, so Zhu’s two-year-old laptop would often crash from overheating.

Zhu turned to the NVIDIA Jetson Xavier NX module to solve the issue. She connected the module to her laptop with a MicroUSB-to-USB cable and launched the engine on it. The engine ran smoothly. She also noted that doing the same with the NVIDIA Jetson AGX Xavier module doubled the speed at which the engine analyzed chess positions.

This solution is game-changing, said Zhu, as running Leela Chess Zero on her laptop allows her to improve her skills even while on the go.

AI-based chess engines allow players like Zhu to perform opening preparation, the process of figuring out new lines of moves to be made during the beginning stage of the game. Engines also help with game analysis, as they point out subtle mistakes that a player makes during gameplay.
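
Because Leela Chess Zero speaks the standard UCI protocol, this kind of analysis can be scripted. Here’s a hedged sketch using the open-source python-chess library; it assumes an lc0 binary and its network weights are already installed, and the time limit is illustrative.

```python
# Sketch of engine-assisted game analysis over UCI with python-chess.
# Assumes the lc0 binary and its weights are on the system path.
import chess
import chess.engine

engine = chess.engine.SimpleEngine.popen_uci("lc0")

board = chess.Board()
board.push_san("e4")   # replay a game move by move
board.push_san("c5")

info = engine.analyse(board, chess.engine.Limit(time=5.0))
print("evaluation:", info["score"].white())
print("suggested line:", info.get("pv"))

engine.quit()
```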

Opening New Moves Between Chess and Computer Science

“My favorite thing about chess is the peace that comes from being deep in your thoughts when playing or studying a game,” said Zhu. “And getting to meet friends at various tournaments.”

One of her favorite memories is from the 2020 U.S. Team East Tournament, the last she competed at before the COVID-19 outbreak. Instead of the usual competition where one wins or loses as an individual, this was a tournament where players scored points for their teams by winning individual matches.

Zhu’s squad, comprising three other girls around her age, placed second out of 318 teams of all ages.

“Nobody expected that, especially because we were a young all-girls team,” she said. “It was so memorable.”

Besides chess, Zhu has a passion for computer science and hopes to study it in college.

“What excites me most about CS is that it’s so futuristic,” she said. “It seems like we’re making progress in AI on a daily basis, and I really think that it’s the route to advancing society.”

Working with the Jetson platform has opened up a pathway for Zhu to combine her passions for chess and AI. After she posted online instructions on how she supercharged her crashing laptop with NVIDIA technology, Zhu heard from people all around the world.

Her post even sparked discussion of chess in the context of AI, she said, showing her that there’s a global community interested in the topic.

Find out more about Zhu’s chess and tech endeavors.

Learn more about the Jetson platform.

Non-Stop Shopping: Startup’s AI Lets Supermarkets Skip the Line

Eli Gorovici loves to take friends sailing on the Mediterranean. As the new pilot of Trigo, a Tel Aviv-based startup, he’s inviting the whole retail industry on a cruise to a future with AI.

“We aim to bring the e-commerce experience into the brick-and-mortar supermarket,” said Gorovici, who joined the company as its chief business officer in May.

The journey starts with the sort of shopping anyone who’s waited in a long checkout line has longed for.

You fill up your bags at the market and just walk out. Magically, the store knows what you bought, bills your account and sends you a digital receipt, all while preserving your privacy.

Trigo is building that experience and more. Its magic is an AI engine linked to cameras and a few weighted shelves for small items a shopper’s hand might completely cover.

With these sensors, Trigo builds a 3D model of the store. Neural networks recognize products customers put in their bags.

When shoppers leave, the system sends the grocer the tally and a number it randomly associated with them when they chose to swipe their smartphone as they entered the store. The grocer matches the number with a shopper’s account, charges it and sends off a digital bill.

And that’s just the start.

An Online Experience in the Aisles

Shoppers get the same personalized recommendation systems they’re used to seeing online.

“If I’m standing in front of pasta, I may see on my handset a related coupon or a nice Italian recipe tailored for me,” said Gorovici. “There’s so much you can do with data, it’s mind blowing.”

The system lets stores fine-tune their inventory management systems in real time. Typical shrinkage rates from shoplifting or human error could sink to nearly zero.

AI Turns Images into Insights

Making magic is hard work. Trigo’s system gathers a petabyte of video data a day for an average-size supermarket.

It uses as many as four neural networks to process that data at mind-melting rates of up to a few hundred frames per second. (By contrast, your TV displays high-definition movies at 60 fps.)

Trigo used a dataset of up to 500,000 2D product images to train its neural networks. In daily operation, the system uses those models to run millions of inference tasks with help from NVIDIA TensorRT software.
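
Trigo hasn’t published its inference stack, but turning a trained network into a TensorRT engine typically follows a standard pattern. The sketch below uses the TensorRT 7-era Python API; the ONNX file name is invented.

```python
# Hedged sketch: building a serialized TensorRT engine from an ONNX model,
# using the TensorRT 7-era Python API. The model name is invented.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)

with open("product_classifier.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))

config = builder.create_builder_config()
config.max_workspace_size = 1 << 30     # 1 GB of build scratch space
config.set_flag(trt.BuilderFlag.FP16)   # FP16 suits T4 Tensor Cores

engine = builder.build_engine(network, config)
with open("product_classifier.plan", "wb") as f:
    f.write(engine.serialize())
```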

The AI work requires plenty of processing muscle. A supermarket outside London testing the Trigo system uses servers in its back room with 40-50 NVIDIA RTX GPUs. To boost efficiency, Trigo plans to deliver edge servers using NVIDIA T4 Tensor Core GPUs and join the NVIDIA Metropolis ecosystem starting next year.

Trigo got early access to the T4 GPUs thanks to its participation in NVIDIA Inception, a program that gives AI startups traction with tools, expertise and go-to-market support. The program also aims to introduce Trigo to NVIDIA’s retail partners in Europe.

In 2021, Trigo aims to move some of the GPU processing to Google, Microsoft and other cloud services, keeping some latency- or privacy-sensitive uses inside the store. It’s the kind of distributed architecture businesses are just starting to adopt, thanks in part to edge computing systems such as NVIDIA’s EGX platform.

Big Supermarkets Plug into AI

Tesco, the largest grocer in the U.K., has plans to open its first market using Trigo’s system. “We’ve vetted the main players in the industry and Trigo is the best by a mile,” said Tesco CEO Dave Lewis.

Israel’s largest grocer, Shufersal, also is piloting Trigo’s system, as are other retailers around the world.

Trigo was founded in 2018 by brothers Michael and Daniel Gabay, leveraging tech and operational experience from their time in elite units of the Israeli military.

Seeking his next big opportunity in his field of video technology, Gorovici asked friends who were venture capitalists for advice. “They said Trigo was the future of retail,” Gorovici said.

Like sailing in the aqua-blue Mediterranean, AI in retail is a compelling opportunity.

“It’s a trillion-dollar market — grocery stores are among the biggest employers in the world. They are all being digitized, and selling more online now given the pandemic, so maybe this next stage of digital innovation for retail will now move even faster,” he said.

Taking the Heat Off: AI Temperature Screening Aids Businesses Amid Pandemic

As businesses and schools consider reopening around the world, they’re taking safety precautions to mitigate the lingering threat of COVID-19 — often taking the temperature of each individual entering their facilities.

Fever is a common warning sign for the virus (and the seasonal flu), but manual temperature-taking with infrared thermometers takes time and requires workers stationed at a building’s entrances to collect temperature readings. AI solutions can speed the process and make it contactless, sending real-time alerts to facilities management teams when visitors with elevated temperatures are detected.

Central California-based IntelliSite Corp. and its recently acquired startup, Deep Vision AI, have developed a temperature screening application that can scan over 100 people a minute. Temperature readings are accurate within a tenth of a degree Celsius. And customers can get up and running with the app within a few hours, with an AI platform running on NVIDIA GPUs on premises or in the cloud for inference.

“Our software platform has multiple AI modules, including foot traffic counting and occupancy monitoring, as well as vehicle recognition,” said Agustin Caverzasi, co-founder of Deep Vision AI, and now president of IntelliSite’s AI business unit. “Adding temperature detection was a natural, easy step for us.”

The temperature screening tool has been deployed in several healthcare facilities and is being tested at U.S. airports, amusement parks and education facilities. Deep Vision is part of NVIDIA Inception, a program that helps startups working in AI and data science get to market faster.

“Deep Vision AI joined Inception at the very beginning, and our engineering and research teams received support with resources like GPUs for training,” Caverzasi said. “It was really helpful for our company’s initial development.”

COVID Risk or Coffee Cup? Building AI for Temperature Tracking

As the pandemic took hold, and social distancing became essential, Caverzasi’s team saw that the technology they’d spent years developing was more relevant than ever.

“The need to protect people from harmful viruses has never been greater,” he said. “With our preexisting AI modules, we can monitor in real time the occupancy levels in a store or a hospital’s waiting room, and trigger alerts before the maximum occupancy is reached in a given area.”

With governments and health organizations advising temperature checking, the startup applied its existing AI capabilities to thermal cameras for the first time. In doing so, the team had to fine-tune the model so it wouldn’t be fooled by false positives — for example, when a person shows up red on a thermal camera because of a cup of hot coffee.

This AI model is paired with one of IntelliSite’s IoT solutions called human-based monitoring, or hBM. The hBM platform includes a hardware component: a mobile cart mounted with a thermal camera, monitor and Dell Precision tower workstation for inference. The temperature detection algorithms can now scan five people at the same time.

Double Quick: Faster, Easier Screening

The workstation uses the NVIDIA Quadro RTX 4000 GPU for real-time inference on thermal data from the live camera view. This reduces manual scanning time for healthcare customers by 80 percent, and drops the total cost of conducting temperature scans by 70 percent.

Facilities using hBM can also choose to access data remotely and monitor multiple sites, using either an on-premises Dell PowerEdge R740 server with NVIDIA T4 Tensor Core GPUs, or GPU resources through the IntelliSite Cloud Engine.

If businesses and hospitals are also taking a second temperature measurement with a thermometer, these readings can be logged in the hBM system, which can maintain records for over a million screenings. Facilities managers can configure alerts via text message or email when high temperatures are detected.

The Deep Vision developer team, based in Córdoba, Argentina, also adapted its AI models that use regular camera data so they can detect people wearing face masks. The team uses the NVIDIA Metropolis application framework for smart cities, including the NVIDIA DeepStream SDK for intelligent video analytics and NVIDIA TensorRT to accelerate inference.

Deep Vision and IntelliSite next plan to integrate the temperature screening AI with facial recognition models, so customers can use the application for employee registration once their temperature has been checked.

IntelliSite is a member of the NVIDIA Clara Guardian ecosystem, bringing edge AI to healthcare facilities. Visit our COVID page to explore how other startups are using AI and accelerated computing to fight the pandemic.

FDA disclaimer: Thermal measurements are designed as a triage tool and should not be the sole means of diagnosing high-risk individuals for any viral threat. Elevated thermal readings should be confirmed with a secondary, clinical-grade evaluation tool. FDA recommends screening individuals one at a time, not in groups.

HPE’s Jared Dame on How AI, Data Science Driving Demand for Powerful New Workstations

Smart phones, smart devices, the cloud — if it seems like AI is everywhere, that’s because it is.

That makes powerful workstations, capable of crunching the ever-growing quantities of data on which modern AI is built, more essential than ever.

Jared Dame, Hewlett Packard Enterprise’s director of business development and strategy for AI, data science and edge technologies, spoke to AI Podcast host Noah Kravitz about the role HPE’s workstations play in cutting-edge AI and data science.

In the AI pipeline, Dame explained, workstations can do just about everything — from training to inference. The biggest demand for workstations is now coming from biopharmaceutical companies, the oil and gas industry and the federal government.

Key Points From This Episode:

  • Z by HP workstations feature hundreds of thousands of sensors that predict problems within a machine up to a month in advance, so customers don’t experience a loss of data or time.
  • The newest ZBook Studio, equipped with NVIDIA Quadro graphics, will launch this fall.

Tweetables:

“Z by HP is selling literally everywhere. Every vertical market does data science, every vertical market is adopting various types of AI.” — Jared Dame [5:47]

“We’re drinking our own Kool Aid — we use our own machines. And we’re using the latest and greatest technologies from CUDA and TensorFlow to traditional programming languages.” — Jared Dame [18:36]

You Might Also Like

Lenovo’s Mike Leach on the Role of the Workstation in Modern AI

Whether it’s the latest generation of AI-enabled mobile apps or robust business systems powered on banks of powerful servers, chances are the technology was built first on a workstation. Lenovo’s Mike Leach describes how these workhorses are adapting to support a plethora of new kinds of AI applications.

Serkan Piantino’s Company Makes AI for Everyone

Spell, founded by Serkan Piantino, is making machine learning as easy as ABC. Piantino, CEO of the New York-based startup, explained how he’s bringing compute power to those who don’t have easy access to GPU clusters.

SAS Chief Operating Officer Oliver Schabenberger

SAS Chief Operating Officer Oliver Schabenberger spoke about how organizations can use AI and related technologies.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.

It’s Not Pocket Science: Undergrads at Hackathon Create App to Evaluate At-Home Physical Therapy Exercises

The four undergrads met for the first time at the Stanford TreeHacks hackathon, became close friends, and developed an AI-powered app to help physical therapy patients ensure correct posture for their at-home exercises — all within 36 hours.

Back in February, just before the lockdown, Shachi Champaneri, Lilliana de Souza, Riley Howk and Deepa Marti happened to sit across from each other at the event’s introductory session and almost immediately decided to form a team for the competition.

Together, they created PocketPT, an app that lets users know whether they’re completing a physical therapy exercise with the correct posture and form. It captured two prizes against a crowded field, and inspired them to continue using AI to help others.

The app’s AI model uses the NVIDIA Jetson Nano developer kit to detect a user doing the tree pose, a position known to increase shoulder muscle strength and improve balance. The Jetson Nano performs image classification, so the model can tell whether the pose is being done correctly, based on the 100-plus training images the team took of themselves. Then, it provides feedback to the user, letting them know if they should adjust their form.
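
A two-class image classifier of the kind PocketPT describes can be fine-tuned from a pretrained network in a few dozen lines of PyTorch. In this sketch the folder layout, model choice and hyperparameters are assumptions, not the team’s exact code.

```python
# Hedged sketch of a correct-vs-incorrect pose classifier fine-tuned from
# a pretrained ResNet-18. Folder layout and hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# Expects poses/correct/*.jpg and poses/incorrect/*.jpg, i.e. the team's
# roughly 100 self-portrait training images.
data = datasets.ImageFolder("poses", transform=tfm)
loader = torch.utils.data.DataLoader(data, batch_size=8, shuffle=True)

model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)   # two classes

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):                          # a small set trains quickly
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
```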

“It can be taxing for patients to go to the physical therapist often, both financially and physically,” said Howk.

Continuing exercises at home is a crucial part of recovery for physical therapy patients, but doing them incorrectly can actually hinder progress, she explained.

Bringing the Idea to Life

In the months leading up to the hackathon, Howk, a rising senior at the University of Alabama, was interning in Los Angeles, where there’s a yoga studio on virtually every corner. She’d arrived at the competition with the idea to create some kind of yoga app, but it wasn’t until the team came across the NVIDIA table at the hackathon’s sponsor fair that they realized the idea’s potential to expand and help those in need.

“A demo of the Jetson Nano displayed how the system can track bodily movement down to the joint,” said Marti, a rising sophomore at UC Davis. “That’s what sparked the possibility of making a physical therapy app, rather than limiting it to yoga.”

None of the team members had prior experience working with deep learning and computer vision, so they faced the challenge of learning how to implement the model in such a short period of time.

“The NVIDIA mentors were really helpful,” said Champaneri, a rising senior at UC Davis. “They put together a tutorial guide on how to use the Nano that gave us the right footing and outline to follow and implement the idea.”

Over the first night of the hackathon, the team took NVIDIA’s Deep Learning Institute course on getting started with AI on the Jetson Nano, grasping the basics of deep learning. The next morning, they began hacking and training the model with images of themselves displaying correct versus incorrect exercise poses.

Just 36 hours after the idea first emerged, PocketPT was born.

Winning More Than Just Awards

The most exciting part of the weekend was finding out the team had made it to final pitches, according to Howk. They presented their project in front of a crowd of 500 and later found out that it had won the two prizes.

The hackathon attracted 197 projects. Competing against 65 other projects in the Medical Access category — many of which used cloud or other platforms — their project took home the category’s grand prize. It was also named “Best Use of Jetson Hack,” beating out 11 other groups that borrowed a Jetson for their projects.

But the quartet is looking to do more with their app than win awards.

Because of the fast-paced nature of the hackathon, PocketPT was only able to fully implement one pose, with others still in the works. However, the team is committed to expanding the product and promoting their overall mission of making physical therapy easily accessible to all.

While the hackathon took place just before the COVID outbreak in the U.S., the team highlighted how their project seems to be all the more relevant now.

“We didn’t even realize we were developing something that would become the future, which is telemedicine,” said de Souza, a rising senior at Northwestern University. “We were creating an at-home version of PT, which is very much needed right now. It’s definitely worth our time to continue working on this project.”

Read about other Jetson projects on the Jetson community projects page and get acquainted with other developers on the Jetson forum page.

Learn how to get started on a Jetson project of your own on the Jetson developers page.
