Driving the Future: What Is an AI Cockpit?

From Knight Rider’s KITT to Iron Man’s JARVIS, intelligent copilots have been a staple of forward-looking pop culture.

Advancements in AI and high-performance processors are turning these sci-fi concepts into reality. But what, exactly, is an AI cockpit, and how will it change the way we move?

AI is enabling a range of new software-defined, in-vehicle capabilities across the transportation industry. With centralized, high-performance compute, automakers can now build vehicles that become smarter over time.

A vehicle’s cockpit typically requires a collection of electronic control units and switches to perform basic functions, such as powering entertainment or adjusting temperature. Consolidating these components with an AI platform such as NVIDIA DRIVE AGX simplifies the architecture while creating more compute headroom to add new features. In addition, NVIDIA DRIVE IX provides an open and extensible software framework for a software-defined cockpit experience.

Mercedes-Benz released the first such intelligent cockpit, the MBUX AI system, powered by NVIDIA technology, in 2018. The system is currently in more than 20 Mercedes-Benz models, with the second generation debuting in the upcoming S-Class.

The second-generation MBUX system is set to debut in the Mercedes-Benz S-Class.

MBUX and other such AI cockpits orchestrate crucial safety and convenience features much more smoothly than the traditional vehicle architecture. They centralize compute for streamlined functions, and they’re constantly learning. By regularly delivering new features, they extend the joy of ownership throughout the life of the vehicle.

Always Alert

But safety is the foremost benefit of AI in the vehicle. AI acts as an extra set of eyes on the 360-degree environment surrounding the vehicle, as well as an intelligent guardian for drivers and passengers inside.

One key feature is driver monitoring. As automated driving functions become more commonplace across vehicle fleets, it’s critical to ensure the human at the wheel is alert and paying attention.

AI cockpits use interior cameras to monitor whether the driver is paying attention to the road.

Using interior-facing cameras, AI-powered driver monitoring can track driver activity, head position and facial movements to analyze whether the driver is paying attention, drowsy or distracted. The system can then alert the driver, bringing attention back to the road.
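
NVIDIA doesn’t publish the internals of these monitoring systems, but a common open technique illustrates the idea: compute an eye-aspect ratio from facial landmarks and flag drowsiness when the eyes stay nearly closed across consecutive frames. The landmark ordering, 0.25 threshold and frame count below are illustrative assumptions, not production values.

```python
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, ordered p1..p6.
    Ratio of vertical to horizontal eye opening; drops toward 0 as the eye closes."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    p1, p2, p3, p4, p5, p6 = eye
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

class DrowsinessMonitor:
    """Flags the driver as drowsy when the eye stays nearly closed
    for several consecutive frames (thresholds are illustrative)."""
    def __init__(self, ear_threshold=0.25, frames_needed=3):
        self.ear_threshold = ear_threshold
        self.frames_needed = frames_needed
        self.closed_frames = 0

    def update(self, ear):
        # Count consecutive low-EAR frames; any open-eye frame resets the count.
        if ear < self.ear_threshold:
            self.closed_frames += 1
        else:
            self.closed_frames = 0
        return self.closed_frames >= self.frames_needed
```

In a real pipeline, the landmarks would come from an upstream face-landmark detector running on each camera frame.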

This system can also help keep those inside and outside the vehicle safe and alert. By sensing whether a passenger is about to exit a car and using exterior sensors to monitor the outside environment, AI can warn of oncoming traffic or pedestrians and bikers potentially in the path of the opening door.

It also acts as a guardian in emergency situations. If a passenger is not sitting properly in their seat, the system can prevent an airbag activation that would harm rather than help them. It can also use AI to detect the presence of children or pets left behind in the vehicle, helping prevent heat stroke.

An AI cockpit is always on the lookout for a vehicle’s occupants, adding an extra level of safety with full cabin monitoring so they can enjoy the ride.

Constant Convenience

In addition to safety, AI helps make the daily drive easier and more enjoyable.

With crystal-clear graphics, drivers can receive information about their route, as well as what the sensors on the car see, quickly and easily. Augmented reality heads-up displays and virtual reality views of the vehicle’s surroundings deliver the most important data (such as parking assistance, directions, speed and oncoming obstacles) without disrupting the driver’s line of sight.

These visualizations help build trust in the driver assistance system as well as understanding of its capabilities and limitations for a safer and more effective driving experience.

Using natural language processing, drivers can control vehicle settings without taking their eyes off the road. Conversational AI enables easy access to search queries, like finding the best coffee shops or sushi restaurants along a given route. The same system that monitors driver attention can also interpret gesture controls, providing another way for drivers to communicate with the cockpit without having to divert their gaze.

Natural language processing makes it possible to access vehicle controls without taking your eyes off the road.
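
A production assistant relies on trained natural-language-understanding models, but the basic mechanics of mapping an utterance to a vehicle command can be sketched with a simple keyword-overlap intent matcher. The intent table and keywords here are invented for illustration.

```python
# Illustrative intent table -- a production system uses trained NLU models, not keyword lists.
INTENTS = {
    "set_temperature": {"temperature", "warmer", "cooler", "degrees"},
    "find_food": {"coffee", "restaurant", "sushi", "food"},
    "navigate": {"route", "directions", "navigate"},
}

def match_intent(utterance):
    """Score each intent by keyword overlap with the utterance; return the best, or None."""
    words = set(utterance.lower().replace("?", "").split())
    scores = {name: len(words & keys) for name, keys in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None
```

A matched intent would then be routed to the corresponding vehicle function, such as the climate controls or the navigation search.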

These technologies can also be used to personalize the driving experience. Biometric user authentication and voice recognition allow the car to identify who is driving, and adjust settings and preferences accordingly.

AI cockpits are being integrated into more models every year, making them smarter and safer and constantly adding new features. High-performance, energy-efficient AI compute platforms consolidate in-car systems into a centralized architecture, enabling the open NVIDIA DRIVE IX software platform to meet future cockpit needs.

What used to be fanciful fiction will soon be part of our daily driving routine.

The post Driving the Future: What Is an AI Cockpit? appeared first on The Official NVIDIA Blog.

Meet the Maker: ‘Smells Like ML’ Duo Nose Where It’s at with Machine Learning

Whether you want to know if your squats have the correct form, you’re at the mirror deciding how to dress and wondering what the weather’s like, or you keep losing track of your darts score, the Smells Like ML duo have you covered — in all senses.

This maker pair is using machine learning powered by NVIDIA Jetson’s edge AI capabilities to provide smart solutions to everyday problems.

About the Makers

Behind Smells Like ML are Terry Rodriguez and Salma Mayorquin, freelance machine learning consultants based in San Francisco. The business partners met as math majors in 2013 at UC Berkeley and have been working together ever since. The duo wondered how they could apply their knowledge in theoretical mathematics more generally. Robotics, IoT and computer vision projects, they found, are the answer.

Their Inspiration

The team name, Smells Like ML, stems from the idea that the nose is often used in literature to symbolize intuition. Rodriguez described their projects as “the ongoing process of building the intuition to understand and process data, and apply machine learning in ways that are helpful to everyday life.”

To create proofs of concept for their projects, they turned to the NVIDIA Jetson platform.

“The Jetson platform makes deploying machine learning applications really friendly even to those who don’t have much of a background in the area,” said Mayorquin.

Their Favorite Jetson Projects

Of Smells Like ML’s many projects using the Jetson platform, here are some highlights:

SpecMirror — Make eye contact with this AI-powered mirror, ask it a question and it searches the web to provide an answer. The smart assistant mirror can be easily integrated into your home. It processes sound and video input simultaneously, with the help of NVIDIA Jetson Xavier NX and NVIDIA DeepStream SDK.

ActionAI — Whether you’re squatting, spinning or loitering, this device classifies all kinds of human movement. It’s optimized by the Jetson Nano developer kit’s pose estimation inference capabilities. Upon detecting the type of movement someone displays, it annotates the results right back onto the video it was analyzing. ActionAI can be used to prototype any products that require human movement detection, such as a yoga app or an invisible keyboard.

Shoot Your Shot — Bring a little analytics to your dart game. This computer vision booth analyzes dart throws from multiple camera angles, and then scores, logs and even predicts the results. The application runs on a single Jetson Nano system on module.
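
Of the projects above, ActionAI’s movement classification is the most algorithmically concrete. Its actual pipeline pairs a pose-estimation network with a learned classifier; a stripped-down sketch of the same idea classifies a single frame from one hand-crafted feature, the knee angle computed from hip, knee and ankle keypoints. The coordinates and 110-degree threshold are illustrative assumptions.

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) formed by points a-b-c, e.g. hip-knee-ankle."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / norm))

def classify_pose(hip, knee, ankle, squat_angle=110.0):
    """Label a single frame from the knee angle alone:
    a deeply bent knee reads as a squat, a straight leg as standing."""
    return "squat" if joint_angle(hip, knee, ankle) < squat_angle else "stand"
```

In ActionAI proper, the keypoints would come from the Jetson Nano’s pose-estimation inference, and the classifier would be trained on labeled movement sequences rather than hand-tuned like this.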

Where to Learn More 

In June, Smells Like ML won second place in NVIDIA’s AI at the Edge Hackster.io competition in the intelligent video analytics category.

For more sensory overload, check out other cool projects from Smells Like ML.

Anyone can get started on a Jetson project. Learn how on the Jetson developers page.

Smart Hospitals: DARVIS Automates PPE Checks, Hospital Inventories Amid COVID Crisis

After an exhausting 12-hour shift caring for patients, it’s hard to blame frontline workers for forgetting to sing “Happy Birthday” twice to guarantee a full 30 seconds of proper hand-washing.

Though at times tedious, confirming detailed protective measures, such as how long hospital employees spend sanitizing their hands, whether a room has been cleaned, or how many beds are available, is crucial to preventing the spread of infectious diseases such as COVID-19.

DARVIS, an AI company founded in San Francisco in 2015, automates tasks like these to make hospitals “smarter” and give hospital employees more time for patient care, as well as peace of mind for their own protection.

The company developed a COVID-19 infection-control compliance model within a month of the pandemic breaking out. It provides a structure to ensure that workers are wearing personal protective equipment and complying with hygiene protocols amidst the hectic pace of hospital operations, compounded by the pandemic. The system can also provide information on the availability of beds and other equipment.

Short for “Data Analytics Real-World Visual Information System,” DARVIS uses the NVIDIA Clara Guardian application framework, employing machine learning and advanced computer vision.

The system analyzes information processed by optical sensors, which act as the “eyes and ears” of the machine, and alerts users if a bed is clean or not, or if a worker is missing a glove, among other contextual insights. Upon providing feedback, all records are fully anonymized.
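
DARVIS’s models and rules are proprietary, but the compliance-check step can be sketched as a simple rule over the detector’s per-person outputs: compare the PPE items seen on each person against a required set and emit an alert for anything missing. The policy below is invented for illustration.

```python
REQUIRED_PPE = {"mask", "gloves", "gown"}  # illustrative policy, not DARVIS's actual rule set

def compliance_alerts(detections):
    """detections: mapping of person id -> set of PPE items the vision model
    detected on that person. Returns one alert string per missing item."""
    alerts = []
    for person, items in sorted(detections.items()):
        for missing in sorted(REQUIRED_PPE - items):
            alerts.append(f"{person}: missing {missing}")
    return alerts
```

Because the alert references only an anonymized person id, feedback can be given without storing any identifying records, consistent with the anonymization described above.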

“It’s all about compliance,” said Jan-Philipp Mohr, co-founder and CEO of the company. “It’s not about surveilling workers, but giving them feedback where they could harm themselves. It’s for both worker protection and patient security.”

DARVIS is a member of NVIDIA Inception, a program that helps startups working in AI and data science accelerate their product development, prototyping and deployment.

The Smarter the Hospital, the Better

Automation in hospitals has always been critical to saving lives and increasing efficiency, said Paul Warren, vice president of Product and team lead for AI at DARVIS. However, the need for smart hospitals is all the more urgent in the midst of the COVID-19 crisis, he said.

“We talk to the frontline caregivers, the doctors, the nurses, the transport staff and figure out what part of their jobs is particularly repetitive, frustrating or complicated,” said Warren. “And if we can help automate that in real time, they’re able to do their job a lot more efficiently, which is ultimately good for improving patient outcomes.”

DARVIS can help save money as well as lives. Even before the COVID crisis, the U.S. Centers for Disease Control and Prevention estimated the annual direct medical costs of infectious diseases in U.S. hospitals to be around $45 billion, a cost bound to rise due to the global pandemic. By optimizing infection control practices and minimizing the spread of infectious disease, smart hospitals can decrease this burden, Mohr said.

To save costs and time needed to train and deploy their own devices, DARVIS uses PyTorch and TensorFlow optimized on NGC, NVIDIA’s registry of GPU-accelerated software containers.

“NVIDIA engineering efforts to optimize deep learning solutions is a game-changer for us,” said Warren. “NGC makes structuring and maintaining the infrastructure environment very easy for us.”

DARVIS’s current approach is centralized, with deep learning models running on NVIDIA GPU-powered servers in the hospital’s data center.

As they onboard more users, the company plans to also use NVIDIA DeepStream SDK on edge AI embedded systems like NVIDIA Jetson Xavier NX to scale out and deploy at hospitals in a more decentralized manner, according to Mohr.

Same Technology, Numerous Possibilities

While DARVIS was initially focused on tracking beds and inventory, user feedback led to the expansion of its platform to different areas of need.

The same technology was developed to evaluate proper usage of PPE, to analyze worker compliance with infection control practices and to account for needed equipment in an operating room.

The team at DARVIS continues to research what’s possible with their device, as well as in the field of AI more generally, as they expand and deploy their product at hospitals around the world.

Watch DARVIS in action:

Learn more about NVIDIA’s healthcare-application framework on the NVIDIA Clara developers page.

Images courtesy of DARVIS, Inc.

Learning Life’s ABCs: AI Models Read Proteins to Fight COVID-19

Ahmed Elnaggar and Michael Heinzinger are helping computers read proteins as easily as you read this sentence.

The researchers are applying the latest AI models used to understand text to the field of bioinformatics. Their work could accelerate efforts to characterize living organisms like the coronavirus.

By the end of the year, they aim to launch a website where researchers can plug in a string of amino acids that describe a protein. Within seconds, it will provide some details of the protein’s 3D structure, a key to knowing how to treat it with a drug.

Today, researchers typically search databases to get this kind of information. But the databases are growing rapidly as more proteins are sequenced, so a search can take up to 100 times longer than the approach using AI, depending on the size of a protein’s amino acid string.

In cases where a particular protein hasn’t been seen before, a database search won’t provide any useful results — but AI can.

“Twelve of the 14 proteins associated with COVID-19 are similar to well validated proteins, but for the remaining two we have very little data — for such cases, our approach could help a lot,” said Heinzinger, a Ph.D. candidate in computational biology and bioinformatics.

While time-consuming, methods based on database searches have been 7-8 percent more accurate than previous AI methods. But using the latest models and datasets, Elnaggar and Heinzinger cut that accuracy gap in half, paving the way for a shift to AI.

AI Models, GPUs Drive Biology Insights

“The speed at which these AI algorithms are improving makes me optimistic we can close this accuracy gap, and no field has such fast growth in datasets as computational biology, so combining these two things I think we will reach a new state of the art soon,” said Heinzinger.

“This work couldn’t have been done two years ago,” said Elnaggar, an AI specialist with a Ph.D. in transfer learning. “Without the combination of today’s bioinformatics data, new AI algorithms and the computing power from NVIDIA GPUs, it couldn’t be done,” he said.

Elnaggar and Heinzinger are team members in the Rostlab at the Technical University of Munich, which helped pioneer this field at the intersection of AI and biology. Burkhard Rost, who heads the lab, wrote a seminal paper in 1993 that set the direction.

The Semantics of Reading a Protein

The underlying concept is straightforward. Proteins, the building blocks of life, are made up of strings of amino acids that need to be interpreted sequentially, just like words in a sentence.

So researchers like Rost started applying emerging work in natural-language processing to understanding proteins. But in the 1990s they had very little data on proteins, and the AI models were still fairly crude.

Fast forward to today and a lot has changed.

Sequencing has become relatively fast and cheap, generating massive datasets. And thanks to modern GPUs, advanced AI models such as BERT can interpret language in some cases better than humans.
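
The Rostlab models use their own tokenizers, but the first step of treating a protein like a sentence can be sketched simply: each amino-acid letter becomes a token id, padded to a fixed length, just as words become token ids for a text model. The vocabulary layout and padding scheme here are illustrative assumptions.

```python
# The 20 standard amino acids, one letter each -- the "alphabet" of the protein language.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
PAD, UNK = 0, 1                                 # reserved ids for padding and unknown letters
VOCAB = {aa: i + 2 for i, aa in enumerate(AMINO_ACIDS)}

def encode(sequence, max_len=16):
    """Map an amino-acid string to fixed-length integer ids,
    ready to feed a language model as if it were a sentence."""
    ids = [VOCAB.get(aa, UNK) for aa in sequence.upper()[:max_len]]
    return ids + [PAD] * (max_len - len(ids))
```

From there, a Transformer trained on millions of such encoded sequences learns contextual embeddings of each residue, which downstream models use to predict structural properties.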

AI Models Grow 6x in Sophistication

The breakthroughs in natural-language processing have been particularly breathtaking. Just 18 months ago, Elnaggar and Heinzinger reported on work using a version of recurrent neural network models with 90 million parameters; this month their work leveraged Transformer models with 567 million parameters.

“Transformer models are hungry for compute power, so to do this work we used 5,616 GPUs on the Summit supercomputer and even then it took up to two days to train some of the models,” said Elnaggar.

Running the models on thousands of Summit’s nodes presented challenges.

Elnaggar tells a story familiar to those who work on supercomputers. He needed lots of patience to sync and manage files, storage, comms and their overheads at such a scale. He started small, working on a few nodes, and moved a step at a time.

Patient, stepwise work paid off in scaling complex AI algorithms across thousands of GPUs on the Summit supercomputer.

“The good news is we can now use our trained models to handle inference work in the lab using a single GPU,” he said.

Now Available: Pretrained AI Models

Their latest paper, published in July, characterizes the pros and cons of a handful of the latest AI models they used on various tasks. The work is funded with a grant from the COVID-19 High Performance Computing Consortium.

The duo also published the first versions of their pretrained models. “Given the pandemic, it’s better to have an early release,” rather than wait until the still ongoing project is completed, Elnaggar said.

“The proposed approach has the potential to revolutionize the way we analyze protein sequences,” said Heinzinger.

The work may not in itself bring the coronavirus down, but it is likely to establish a new and more efficient research platform to attack future viruses.

Collaborating Across Two Disciplines

The project highlights two of the soft lessons of science: Keep a keen eye on the horizon and share what’s working.

“Our progress mainly comes from advances in natural-language processing that we apply to our domain — why not take a good idea and apply it to something useful,” said Heinzinger, the computational biologist.

Elnaggar, the AI specialist, agreed. “We could only succeed because of this collaboration across different fields,” he said.

See more stories online of researchers advancing science to fight COVID-19.

The image at top shows language models trained without labelled samples picking up the signal of a protein sequence that is required for DNA binding.

How B&N Keeps Designs Fresh While Working Remotely

Whether designing buildings, complex highway interchanges or water purification systems, the architects and engineers at Burgess & Niple have a common goal: bringing data together to develop incredible visuals that will be the blueprint for their design.

B&N, an engineering and architecture firm headquartered in Columbus, Ohio, with approximately 400 employees, specializes in the designing and planning of roads, buildings, bridges and utility infrastructure, such as the award-winning Southwestern Parkway Combined Sewer Overflow Basin, located in the historic Shawnee Park in Louisville, Kentucky.

To provide infrastructure designs for federal, state and local government agencies and private corporations across 10 states, often in remote locations, B&N needs to have access to its applications and data anytime and anywhere.

To enable geographically dispersed architects, engineers and designers to collaborate on these projects, B&N transitioned 100 percent of its users to virtual desktop infrastructure. The company turned to NVIDIA virtual GPU technology to provide access to graphics-intensive applications, such as Autodesk AutoCAD, Revit and Civil 3D, Bentley Systems MicroStation, and Microsoft Office 365 along with other office productivity apps.

B&N chose Dell PowerEdge servers, each installed with two NVIDIA M10 GPUs with NVIDIA Quadro Virtual Data Center Workstation (Quadro vDWS) software and VMware ESXi 6.7 U3. The systems enable the company to maintain the same level of productivity and performance in virtual workstations as it would have running the applications natively on physical workstations.

Because it was already using VDI, the company was able to shift to conducting business at home during the COVID-19 outbreak almost immediately and has continued to collaborate seamlessly and efficiently in real time.

“It is very much business as usual for us, which is pretty remarkable given the circumstances,” said Rod Dickerson, chief technology officer at B&N.

VDI enables B&N to keep data centralized in the data center to protect intellectual property and enable quick access across different locations without version control issues and lengthy upload and download times.

“The files that we work with are very large, which makes sharing them between engineers using traditional means difficult, especially with the inconsistencies in residential broadband during the COVID-19 pandemic,” said Dickerson. “Using VDI with NVIDIA GPUs and Quadro vDWS allows us to maintain our productivity output regardless of our physical locations.”

NVIDIA Quadro vDWS: Work from Anywhere Without Sacrificing Performance

Keeping its employees productive while working from home has not only allowed B&N to continue working with existing clients without delays, it has also helped the firm win new projects. In Ohio, the firm interviewed for a new highway interchange design project and won, in part thanks to its ability to seamlessly collaborate internally and present to the client virtually.

B&N was also able to accelerate construction on a project in Indiana when the opportunity arose due to stay-at-home orders. Without NVIDIA Quadro vDWS providing access to needed applications and infrastructure, it would have been difficult to meet the accelerated schedule without glitches.

“Across the board, project delivery continues to be seamless as a result of VDI with NVIDIA vGPU,” said Dickerson. “We are continuing to work as if we were in the office with the same level of performance.”

B&N has added about 40 new employees since the pandemic started. Onboarding is simplified because they can begin working with high-performance virtual workstations on their very first day. With virtual machines centrally located in the data center, the B&N IT team can easily maintain and manage the client computing environment as well.

B&N has used NVIDIA vGPUs to provide employees with workstation performance with the mobility and security of virtualization, effectively eliminating physical boundaries.

“NVIDIA vGPU is central to our business continuity strategy and has proven not only viable, but vital,” said Dickerson. “We would have significant business continuity issues if it weren’t for our implementation of NVIDIA vGPU technology.”

Learn more about how NVIDIA vGPU helps employees to work remotely.

Not So Taxing: Intuit Uses AI to Make Tax Day Easier

Understanding the U.S. tax code can take years of study — it’s 80,000 pages long. Software company Intuit has decided that it’s a job for AI.

Ashok Srivastava, its senior vice president and chief data officer, spoke to AI Podcast host Noah Kravitz about how the company is utilizing machine learning to help customers with taxes and aid small businesses through the financial effects of COVID-19.

To help small businesses, Intuit has a range of programs such as the Intuit Aid Assist Program, which helps business owners figure out if they’re eligible for loans from the government. Other programs include cash flow forecasting, which estimates how much money businesses will have within a certain time frame.

And in the long term, Intuit is working on a machine learning program capable of using photos of financial documents to automatically extract necessary information and fill in tax documents.

Key Points From This Episode:

  • Intuit frequently uses an AI technique called knowledge engineering, which converts written regulations or rules into code, providing the information behind programs such as TurboTax.
  • Intuit also provides natural language processing and chatbot services, which use a customer’s questions as well as their feedback and product usage to determine the best reply.
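
Intuit’s actual rule engines are far richer, but knowledge engineering at its core means turning a written regulation into an executable predicate. The eligibility rule and thresholds below are entirely invented for illustration:

```python
def aid_eligibility(annual_revenue, employees, in_business_since_year):
    """A written rule like "businesses with fewer than 500 employees,
    operating before 2020, with revenue under $5M qualify" becomes
    a checkable predicate. All thresholds here are hypothetical."""
    return (employees < 500
            and in_business_since_year < 2020
            and annual_revenue < 5_000_000)
```

Encoding rules this way is what lets a program such as TurboTax evaluate thousands of interacting provisions consistently for every filer.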

Tweetables:

“We’re spending our time not only analyzing data, but thinking about new ways that we can use artificial intelligence in order to help small businesses.” — Ashok Srivastava [10:23]

“Data and artificial intelligence are going to come into play again and again to help people … make the best decisions about their own financial future.” — Ashok Srivastava [26:43]

You Might Also Like

AI Startup Brings Computer Vision to Customer Service

When your appliances break, the last thing you want to do is spend an hour on the phone trying to reach a customer service representative. Using computer vision, Drishyam.AI is eliminating service lines to help consumers more quickly.

Dial A for AI: Charter Boosts Customer Service with AI

Charter Communications is working to make customer service smarter before an operator even picks up the phone. Senior Director of Wireless Engineering Jared Ritter speaks about Charter’s perspective on customer relations.

What’s in Your Wallet? For Capital One, the Answer is AI

Nitzan Mekel, managing vice president of machine learning at Capital One, explains how the banking giant is integrating AI and machine learning into customer-facing applications such as fraud-monitoring and detection, call center operations and customer experience.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.

Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.

Keeping Its Cool: Lenovo Expands Portfolio in Red Hot HPC and AI Market

Known around the world for its broad range of personal computers, phones, servers, networking and services, Lenovo also has years of experience in designing and delivering high performance computing and AI systems.

NVIDIA and Mellanox have been longtime collaborators with Lenovo, and now this relationship is expanding in a big way. This fall, Lenovo will begin providing NVIDIA Mellanox Spectrum Ethernet switches to its customers in selected integrated solutions, joining the NVIDIA Quantum InfiniBand switches already offered by the company.

These are the fastest, most efficient switches for end-to-end InfiniBand and Ethernet networking, built for any type of compute and storage infrastructure serving data-intensive applications, scientific simulations, AI and more.

The Spectrum Ethernet switches are the most advanced available on the market and are optimized for high-performance, AI, cloud and other enterprise-class systems. They offer connectivity from 10 to 400 gigabits per second and come in designs tailored for top-of-rack, storage-cluster, spine and superspine deployments.

All Spectrum Ethernet switches feature a fully shared buffer to support fair bandwidth allocation and predictably low latency, as well as traffic flow prioritization and optimization technology. They also offer What Just Happened — the most useful Ethernet switch telemetry technology on the market — which provides faster and easier network monitoring, troubleshooting and problem resolution.

The Spectrum switches from Lenovo will include Cumulus Linux, the leading Linux-based network operating system, recently acquired by NVIDIA.

NVIDIA Spectrum-2 based Ethernet switches support all speeds from 1GbE to 400GbE, with up to 64 ports of 100GbE in a single switch, and offer high bandwidth, low latency, consistent performance and a choice of network OS. Lenovo will ship them with the Cumulus Linux network OS.

All the Best Networking for High-Performance and AI Data Centers

With a broad portfolio of NVIDIA Mellanox ConnectX adapters, Mellanox Quantum InfiniBand switches, Mellanox Spectrum Ethernet switches and LinkX cables and transceivers, Lenovo customers can select from the fastest and most advanced networking for their data center compute and storage infrastructures.

InfiniBand provides the most efficient data throughput and lowest latency for keeping hungry CPUs and GPUs well fed with data, as well as connecting high-speed file and block storage for computation and checkpointing. It also offers in-network computing engines that enable the network to perform data processing on transferred data, including data reduction and data aggregation, message passing interface acceleration engines and more. This speeds up calculations and increases performance for high performance and AI applications.
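
The in-network data reduction described above can be illustrated with a toy simulation: partial sums are combined pairwise on the way up a reduction tree, and the single result is delivered back to every node. This sketches only the concept, not the switch implementation.

```python
def allreduce_sum(node_values):
    """Simulate the aggregation an in-network engine performs: combine
    partial sums pairwise up a reduction tree, then broadcast the total."""
    vals = list(node_values)
    n = len(vals)
    while len(vals) > 1:
        # Each level of the tree halves the number of partial results.
        paired = [vals[i] + vals[i + 1] for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2:
            paired.append(vals[-1])  # odd node passes its value up unchanged
        vals = paired
    return [vals[0]] * n  # every node receives the final reduced value
```

The benefit in hardware is that the reduction happens inside the switches as data moves, so no server has to gather all the partial results itself.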

For high-performance and AI data centers using Ethernet for storage or management connectivity, the Spectrum switches provide an ideal network infrastructure. InfiniBand data centers can seamlessly connect to an Ethernet fabric via the Mellanox Skyway 100G and 200G InfiniBand-to-Ethernet gateways.

Expanding Into Private and Hybrid Cloud

Lenovo ThinkAgile SX for Microsoft Azure (pictured) and ThinkAgile HX for SAP HANA will qualify NVIDIA Spectrum Ethernet switches.

Lenovo also will use Spectrum Ethernet switches as part of its enterprise offerings, such as private and hybrid cloud, hyperconverged infrastructure, Microsoft Azure and SAP HANA solutions.

Additionally, Lenovo ThinkAgile products will qualify NVIDIA Spectrum switches to provide top-of-rack and rack-to-rack connectivity. The lineup includes the ThinkAgile SX for Microsoft Azure and ThinkAgile HX for SAP HANA.

Ongoing Innovation

Lenovo and NVIDIA have long collaborated in creating faster, denser, more efficient solutions for high performance computing, AI and, most recently, for private clouds.

  • GPU servers: Lenovo offers a full portfolio of ThinkSystem servers with NVIDIA GPUs to deliver the fastest compute times for high-performance and AI workloads. These servers can be connected with Mellanox ConnectX adapters using InfiniBand or Ethernet connectivity as needed.
  • Liquid cooling: Lenovo Neptune water-cooled server technology offers faster, quieter, more efficient cooling. This includes the ability to cool server CPUs, GPUs, InfiniBand adapters and entire racks using water instead of fan-blown air to carry away heat more quickly.
  • InfiniBand speed: Lenovo was one of the earliest server vendors to implement new generations of InfiniBand connectivity, including EDR (100Gb/s) and HDR (200Gb/s), both of which can be water-cooled, and the companies continue to innovate around InfiniBand interconnects.
  • Network topologies: Lenovo and Mellanox were the first to build an InfiniBand Dragonfly+ topology supercomputer, at the University of Toronto. Designed from the outset as a software-defined network, InfiniBand supports many network topologies, including Fat Tree, Torus and Dragonfly+, which provide a rich set of cost-, performance- and scale-optimized options for data center deployments.
Lenovo Neptune water-cooled solutions, such as the Direct to Node system, use water pipes to carry heat away from CPUs, GPUs and InfiniBand adapters more quickly and efficiently than fan-blown air, enabling higher compute densities and reduced power consumption per rack.

Learn More

To learn more about NVIDIA Mellanox interconnect products and Lenovo’s support for any type of compute, storage or management traffic for high-performance, AI, private/hybrid cloud and hyperconverged infrastructure, check out the following resources.

Feature image by Gerd Altmann.

The post Keeping Its Cool: Lenovo Expands Portfolio in Red Hot HPC and AI Market appeared first on The Official NVIDIA Blog.

Screening for COVID-19: Japanese Startup Uses AI for Drug Discovery

Researchers are racing to discover the right drug molecule to treat COVID-19 — but the number of potential drug-like molecules out there is estimated to be an inconceivable 10⁶⁰.

“Even if you hypothetically checked one molecule per second, it would take longer than the age of the universe to explore the entire chemical space,” said Shinya Yuki, co-founder and CEO of Tokyo-based startup Elix, Inc. “AI can efficiently explore huge search spaces to solve difficult problems, whether in drug discovery, materials development or a game like Go.”
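Yuki's comparison holds up to a quick back-of-envelope check (assuming an age of the universe of roughly 13.8 billion years):

```python
# Back-of-envelope check of the claim above: screening 10^60 molecules
# at one per second takes vastly longer than the age of the universe
# (assumed here to be ~13.8 billion years).

SECONDS_PER_YEAR = 365.25 * 24 * 3600          # ~3.16e7 seconds
AGE_OF_UNIVERSE_S = 13.8e9 * SECONDS_PER_YEAR  # ~4.4e17 seconds

molecules = 10 ** 60
years_needed = molecules / SECONDS_PER_YEAR
universe_ages = molecules / AGE_OF_UNIVERSE_S
print(f"{years_needed:.2e} years, or {universe_ages:.2e} ages of the universe")
```

Even at a billion molecules per second, exhaustive search would still take some 10³³ universe-ages, which is why guided exploration matters more than raw screening speed.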

Yuki’s company is using deep learning to accelerate drug discovery, building neural networks that predict the properties of molecules much faster than computer simulations can. To support COVID-19 research, the team is using AI to find drugs that are FDA-approved or in clinical trials that could be repurposed to treat the coronavirus.

“Developing a new drug from scratch is a years-long process, which is unwanted especially in this pandemic situation,” Yuki said. “Speed is critical, and drug-repurposing can help identify candidates with an existing clinical safety record, significantly reducing the time and cost of drug development.”

Elix recently published a paper on approved and clinical trial-stage drugs that its AI model flagged as potential COVID-19 treatments. Among the candidates selected by Elix’s AI tool was remdesivir, an antiviral drug that recently received emergency use authorization from the FDA for coronavirus cases.

A member of NVIDIA Inception, a program that helps startups get to market faster, Elix uses the NVIDIA DGX Station for training and inference of its deep learning algorithms. Yuki spoke about the company’s work in AI for drug discovery in the Inception Startup Showcase at GTC Digital, NVIDIA’s digital conference for developers and AI researchers.

Elix’s AI Fix for Drug Discovery

At the molecular level, a successful drug must have the perfect combination of shape, flexibility and interaction energies to bind to a target protein — like the spike proteins that cover the viral envelope of SARS-CoV-2, the virus that causes COVID-19.

SARS-CoV-2, the virus that causes COVID-19, has a surface covered in protein spikes. Image credit: CDC/ Alissa Eckert, MSMI; Dan Higgins, MAMS. Licensed under public domain.

A person gets infected with COVID-19 when these spike proteins attach to cells in the body, bringing the virus into the cells. An effective antiviral drug might interfere with this attachment process. For example, a promising drug molecule would bind with receptors on the spike proteins, preventing the virus from attaching to human cells.

To help researchers find the best drug for the job, Elix uses a variety of neural networks to rapidly narrow down the field of potential molecules. This allows researchers to reserve physical tests in the lab for a smaller subset of molecules that have a higher likelihood of solving the problem.

With predictive AI models, Yuki’s team can analyze a database of drug candidates to infer which have the right physical and chemical properties to treat a given disease. They also use generative models, which start from scratch to come up with promising molecular structures — some of which may not be found in nature.

That’s where a third neural network comes in, a retrosynthesis model that helps researchers figure out if the generated molecules can be synthesized in the lab.
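The three-model pipeline described above can be sketched as a simple filter chain. Every function below is a deterministic placeholder standing in for a trained neural network; none of this is Elix's actual code:

```python
# Hypothetical sketch of the three-model pipeline: a generative model
# proposes candidates, a predictive model scores their properties, and a
# retrosynthesis model drops molecules that can't be made in the lab.
# All functions are toy placeholders, not trained networks.

def generate_candidates(n):
    # Stand-in for a generative model: numbered placeholder molecules.
    return list(range(n))

def predicted_affinity(mol_id):
    # Stand-in for a property-prediction network (higher = better binding).
    return (mol_id * 37) % 100 / 100.0

def is_synthesizable(mol_id):
    # Stand-in for a retrosynthesis model's verdict.
    return mol_id % 3 != 0

def shortlist(n_candidates, threshold=0.5):
    """Keep only candidates that score well and look makeable in the lab."""
    return [m for m in generate_candidates(n_candidates)
            if predicted_affinity(m) >= threshold and is_synthesizable(m)]

survivors = shortlist(1000)
print(f"{len(survivors)} of 1000 candidates remain for physical lab tests")
```

The point of the structure is the funnel: cheap learned filters run over the whole candidate pool so that expensive wet-lab experiments are reserved for the small set that clears every stage.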

Elix uses multiple NVIDIA DGX Station systems — GPU-powered AI workstations for data science development teams — to accelerate training and inference of these neural networks, achieving up to a 6x speedup using a single GPU for training versus a CPU.

Yuki says the acceleration is essential for the generative models, which would otherwise take a week or more to train until convergence, when the neural network reaches the lowest error rate possible. Each DGX Station has four NVIDIA V100 Tensor Core GPUs, enabling the Elix team to tackle bigger AI models and run multiple experiments at once.

“DGX Stations are basically supercomputers. We usually have several users working on the same machine at the same time,” he said. “We can not only train models faster, we can also run up to 15 experiments in parallel.”
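Sharing one multi-GPU workstation among many runs can be as simple as round-robin assignment of experiments to GPUs. A hypothetical sketch (the scheme and names are illustrative, not Elix's tooling):

```python
# Hypothetical sketch of sharing one multi-GPU workstation among many
# training runs via round-robin GPU assignment. Illustrative only.

NUM_GPUS = 4  # one DGX Station has four V100 Tensor Core GPUs

def assign_gpu(experiment_id, num_gpus=NUM_GPUS):
    """Map an experiment to a GPU in round-robin fashion."""
    return experiment_id % num_gpus

# Spread 15 concurrent experiments across the 4 GPUs:
schedule = {}
for exp in range(15):
    schedule.setdefault(assign_gpu(exp), []).append(exp)

for gpu, exps in sorted(schedule.items()):
    print(f"GPU {gpu}: experiments {exps}")
```

With 15 experiments on 4 GPUs, three GPUs carry four runs each and one carries three; in practice a scheduler would also weigh memory footprints and run lengths.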

The startup’s customers include pharmaceutical companies, research institutes and universities. Since molecular data is sensitive intellectual property for the pharma industry, most choose to run the AI models on their own on-prem servers.

Beyond drug discovery, Elix also uses AI for molecular design for material informatics, working with companies like tire- and rubber-manufacturer Bridgestone and RIKEN, Japan’s largest research institution. The company also develops computer vision models for autonomous driving and AI at the edge.

In one project, Yuki’s team worked with global chemical company Nippon Shokubai to generate a molecule that can be used as a blending material for ink, while posing a low risk of skin irritation.

Learn more about Elix in Yuki’s GTC Digital lightning talk. Visit our COVID page to explore how other startups are using AI and accelerated computing to fight the pandemic.

Main image by Chaos, licensed from Wikimedia Commons under CC BY-SA 3.0

NVIDIA Ampere GPUs Come to Google Cloud at Speed of Light

The NVIDIA A100 Tensor Core GPU has landed on Google Cloud.

Available in alpha on Google Compute Engine just over a month after its introduction, A100 has come to the cloud faster than any NVIDIA GPU in history.

Today’s introduction of the Accelerator-Optimized VM (A2) instance family featuring A100 makes Google the first cloud service provider to offer the new NVIDIA GPU.

A100, which is built on the newly introduced NVIDIA Ampere architecture, delivers NVIDIA’s greatest generational leap ever. It boosts training and inference computing performance by 20x over its predecessors, providing tremendous speedups for workloads to power the AI revolution.

“Google Cloud customers often look to us to provide the latest hardware and software services to help them drive innovation on AI and scientific computing workloads,” said Manish Sainani, director of product management at Google Cloud. “With our new A2 VM family, we are proud to be the first major cloud provider to market NVIDIA A100 GPUs, just as we were with NVIDIA T4 GPUs. We are excited to see what our customers will do with these new capabilities.”

In cloud data centers, A100 can power a broad range of compute-intensive applications, including AI training and inference, data analytics, scientific computing, genomics, edge video analytics, 5G services, and more.

Fast-growing, critical industries will be able to accelerate their discoveries with the breakthrough performance of A100 on Google Compute Engine. From scaling up AI training and scientific computing, to scaling out inference applications, to enabling real-time conversational AI, A100 accelerates complex and unpredictable workloads of all sizes running in the cloud. 

NVIDIA CUDA 11, coming to general availability soon, makes accessible to developers the new capabilities of NVIDIA A100 GPUs, including Tensor Cores, mixed-precision modes, multi-instance GPU, advanced memory management and standard C++/Fortran parallel language constructs.

Breakthrough A100 Performance in the Cloud for Every Size Workload

The new A2 VM instances can deliver different levels of performance to efficiently accelerate workloads across CUDA-enabled machine learning training and inference, data analytics, as well as high performance computing.

For large, demanding workloads, Google Compute Engine offers customers the a2-megagpu-16g instance, which comes with 16 A100 GPUs, offering a total of 640GB of GPU memory and 1.3TB of system memory — all connected through NVSwitch with up to 9.6TB/s of aggregate bandwidth.
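Those headline figures work out to familiar per-GPU numbers, as a quick calculation shows:

```python
# Sanity-checking the a2-megagpu-16g headline numbers on a per-GPU basis.

NUM_GPUS = 16
AGG_BW_GBS = 9.6e3      # 9.6 TB/s of aggregate NVSwitch bandwidth, in GB/s
TOTAL_GPU_MEM_GB = 640  # total GPU memory across the instance

per_gpu_bw_gbs = AGG_BW_GBS / NUM_GPUS        # NVSwitch bandwidth per GPU
per_gpu_mem_gb = TOTAL_GPU_MEM_GB / NUM_GPUS  # memory per A100
print(per_gpu_bw_gbs, per_gpu_mem_gb)  # 600.0 GB/s and 40.0 GB per GPU
```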

For those with smaller workloads, Google Compute Engine is also offering A2 VMs in smaller configurations to match specific applications’ needs.

Google Cloud announced that additional NVIDIA A100 support is coming soon to Google Kubernetes Engine, Cloud AI Platform and other Google Cloud services. For more information, including technical details on the new A2 VM family and how to sign up for access, visit the Google Cloud blog.

Hardhats and AI: Startup Navigates 3D Aerial Images for Inspections

Childhood buddies from back in South Africa, Nicholas Pilkington, Jono Millin and Mike Winn went off together to a nearby college, teamed up on a handful of startups and kept a pact: work on drones once a week.

That dedication is paying off. Their drone startup, based in San Francisco, is picking up interest worldwide and has landed $35 million in Series D funding.

It all came together in 2014, when the friends were accepted into the AngelPad accelerator program in Silicon Valley. They founded DroneDeploy there, enabling contractors to capture photos, maps, videos and high-fidelity panoramic images for remote inspections of job sites.

“We had this a-ha moment: Almost any industry can benefit from aerial imagery, so we set out to build the best drone software out there and make it easy for everyone,” said Pilkington, co-founder and CTO at DroneDeploy.

DroneDeploy’s AI software platform — it’s the navigational brains and eyes — is operating in more than 200 countries and handling more than 1 million flights a year.

Nailing Down Applications

DroneDeploy’s software has been adopted in construction, agriculture, forestry, search and rescue, inspection, conservation and mining.

In construction, DroneDeploy is used by one-quarter of the world’s 400 largest building contractors and six of the top 10 oil and gas companies, according to the company.

DroneDeploy was one of three startups that recently presented at an NVIDIA Inception Connect event held by Japanese insurer Sompo Holdings. For good reason: Startups are helping insurance and reinsurance firms become more competitive by analyzing portfolio risks with AI.

The NVIDIA Inception program nurtures startups with access to GPU guidance, Deep Learning Institute courses, networking and marketing opportunities.

Navigating Drone Software

DroneDeploy offers features like fast setup of autonomous flights, photogrammetry to take physical measurements and APIs for drone data.

In addition to supporting industry-leading drones and hardware, DroneDeploy operates an app ecosystem for partners to build apps using its drone data platform. John Deere, for example, offers an app for customers to upload aerial drone maps of their fields to their John Deere account so that they can plan flights based on the field data.

Split-second photogrammetry and 360-degree images provided by DroneDeploy’s algorithms running on NVIDIA GPUs in the cloud help provide pioneering mapping and visibility.
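As an illustration of the photogrammetry math behind such physical measurements, ground sample distance (GSD), the real-world size covered by one image pixel, follows from camera and flight parameters. The numbers below are hypothetical and not tied to any particular drone DroneDeploy supports:

```python
# Ground sample distance (GSD): the real-world size covered by one image
# pixel, a standard photogrammetry quantity. Camera and flight numbers
# below are hypothetical examples.

def ground_sample_distance_cm(sensor_width_mm, focal_length_mm,
                              altitude_m, image_width_px):
    """GSD in cm/pixel: (sensor width x altitude) / (focal length x image width)."""
    return (sensor_width_mm * altitude_m * 100) / (focal_length_mm * image_width_px)

# e.g. a 13.2 mm-wide sensor, 8.8 mm lens, 4,000-px-wide images, flown at 100 m:
gsd = ground_sample_distance_cm(13.2, 8.8, 100, 4000)
print(f"{gsd:.2f} cm/pixel")  # 3.75 cm per pixel
```

Flying lower or using a longer lens shrinks the GSD, trading coverage per flight for finer measurement resolution.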

AI on Safety, Cost and Time

Using drones in high places instead of people can improve safety. The U.S. Occupational Safety and Health Administration last year reported that 22 people were killed in roofing-related accidents in the U.S.

Inspecting roofs and solar panels with drone technology can improve that safety record. It can also save on cost: The traditional alternative to having people on rooftops to perform these inspections is using helicopters.

Customers of the DroneDeploy platform can follow a quickly created map to carry out a sequence of inspections with guidance from cameras fed into image recognition algorithms.

Using drones, customers can speed up inspections by 80 percent, according to the company.  

“In areas like oil, gas and energy, it’s about zero-downtime inspections of facilities for operations and safety, which is a huge value driver for these customers,” said Pilkington.
