NVIDIA Ampere GPUs Come to Google Cloud at Speed of Light

The NVIDIA A100 Tensor Core GPU has landed on Google Cloud.

Available in alpha on Google Compute Engine just over a month after its introduction, A100 has come to the cloud faster than any NVIDIA GPU in history.

Today’s introduction of the Accelerator-Optimized VM (A2) instance family featuring A100 makes Google the first cloud service provider to offer the new NVIDIA GPU.

A100, which is built on the newly introduced NVIDIA Ampere architecture, delivers NVIDIA’s greatest generational leap ever. It boosts training and inference computing performance by up to 20x over its predecessor, providing tremendous speedups for the workloads powering the AI revolution.

“Google Cloud customers often look to us to provide the latest hardware and software services to help them drive innovation on AI and scientific computing workloads,” said Manish Sainani, director of Product Management at Google Cloud. “With our new A2 VM family, we are proud to be the first major cloud provider to market NVIDIA A100 GPUs, just as we were with NVIDIA T4 GPUs. We are excited to see what our customers will do with these new capabilities.”

In cloud data centers, A100 can power a broad range of compute-intensive applications, including AI training and inference, data analytics, scientific computing, genomics, edge video analytics, 5G services, and more.

Fast-growing, critical industries will be able to accelerate their discoveries with the breakthrough performance of A100 on Google Compute Engine. From scaling up AI training and scientific computing, to scaling out inference applications, to enabling real-time conversational AI, A100 accelerates complex and unpredictable workloads of all sizes running in the cloud. 

NVIDIA CUDA 11, coming to general availability soon, gives developers access to the new capabilities of NVIDIA A100 GPUs, including Tensor Cores, mixed-precision modes, Multi-Instance GPU, advanced memory management and standard C++/Fortran parallel language constructs.
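For developers, those capabilities surface through familiar framework APIs. As a rough illustration, here is a minimal sketch of mixed-precision training, assuming PyTorch 1.6 or later built against CUDA (the model and data are placeholders), where the matrix math inside autocast regions is routed to Tensor Cores on hardware like A100:

```python
# A minimal sketch of mixed-precision training with PyTorch's automatic
# mixed precision (AMP). On an A100 with CUDA 11, matrix operations inside
# autocast regions run on Tensor Cores. Model and data are placeholders.
import torch
import torch.nn as nn

device = torch.device("cuda")
model = nn.Linear(1024, 1024).to(device)              # stand-in model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()                  # rescales loss to avoid fp16 underflow

inputs = torch.randn(64, 1024, device=device)
targets = torch.randn(64, 1024, device=device)

for _ in range(10):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():                   # ops run in reduced precision where safe
        loss = nn.functional.mse_loss(model(inputs), targets)
    scaler.scale(loss).backward()
    scaler.step(optimizer)                            # unscales gradients, then steps
    scaler.update()
```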

Breakthrough A100 Performance in the Cloud for Every Size Workload

The new A2 VM instances deliver different levels of performance to efficiently accelerate CUDA-enabled workloads across machine learning training and inference, data analytics and high-performance computing.

For large, demanding workloads, Google Compute Engine offers customers the a2-megagpu-16g instance, which comes with 16 A100 GPUs, offering a total of 640GB of GPU memory and 1.3TB of system memory — all connected through NVSwitch with up to 9.6TB/s of aggregate bandwidth.
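Once inside such a VM, developers can sanity-check what they've been given. The snippet below is a small sketch, assuming the NVIDIA driver and nvidia-smi are installed on the instance (output formats vary by driver version), that lists the visible GPUs and prints the interconnect matrix where NVSwitch links appear:

```python
# A small sketch for checking GPU count and NVLink/NVSwitch topology from
# inside a GPU VM such as an a2-megagpu-16g instance. Assumes the NVIDIA
# driver and nvidia-smi are installed.
import subprocess

# List the visible GPUs (expect 16 A100s on a2-megagpu-16g)
gpus = subprocess.run(["nvidia-smi", "--list-gpus"],
                      capture_output=True, text=True).stdout
print(gpus)
print(f"{len(gpus.splitlines())} GPUs visible")

# Show the interconnect matrix; NVSwitch links appear as NV# entries
topo = subprocess.run(["nvidia-smi", "topo", "-m"],
                      capture_output=True, text=True).stdout
print(topo)
```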

For those with smaller workloads, Google Compute Engine is also offering A2 VMs in smaller configurations to match specific applications’ needs.

Google Cloud announced that additional NVIDIA A100 support is coming soon to Google Kubernetes Engine, Cloud AI Platform and other Google Cloud services. For more information, including technical details on the new A2 VM family and how to sign up for access, visit the Google Cloud blog.


Hardhats and AI: Startup Navigates 3D Aerial Images for Inspections

Childhood buddies from South Africa, Nicholas Pilkington, Jono Millin and Mike Winn went off to college together, teamed up on a handful of startups and kept a pact: work on drones once a week.

That dedication is paying off. Their drone startup, based in San Francisco, is picking up interest worldwide and has landed $35 million in Series D funding.

It all catalyzed in 2014, when the friends were accepted into the AngelPad accelerator program in Silicon Valley. There they founded DroneDeploy, which enables contractors to capture photos, maps, videos and high-fidelity panoramic images for remote inspections of job sites.

“We had this a-ha moment: Almost any industry can benefit from aerial imagery, so we set out to build the best drone software out there and make it easy for everyone,” said Pilkington, co-founder and CTO at DroneDeploy.

DroneDeploy’s AI software platform — it’s the navigational brains and eyes — is operating in more than 200 countries and handling more than 1 million flights a year.

Nailing Down Applications

DroneDeploy’s software has been adopted in construction, agriculture, forestry, search and rescue, inspection, conservation and mining.

In construction, DroneDeploy is used by one-quarter of the world’s 400 largest building contractors and six of the top 10 oil and gas companies, according to the company.

DroneDeploy was one of three startups that recently presented at an NVIDIA Inception Connect event held by Japanese insurer Sompo Holdings. For good reason: Startups are helping insurance and reinsurance firms become more competitive by analyzing portfolio risks with AI.

The NVIDIA Inception program nurtures startups with access to GPU guidance, Deep Learning Institute courses, networking and marketing opportunities.

Navigating Drone Software

DroneDeploy offers features like fast setup of autonomous flights, photogrammetry to take physical measurements and APIs for drone data.

In addition to supporting industry-leading drones and hardware, DroneDeploy operates an app ecosystem for partners to build apps using its drone data platform. John Deere, for example, offers an app for customers to upload aerial drone maps of their fields to their John Deere account so that they can plan flights based on the field data.

Split-second photogrammetry and 360-degree images, generated by DroneDeploy’s algorithms running on NVIDIA GPUs in the cloud, deliver pioneering mapping and visibility.

AI on Safety, Cost and Time

Using drones in high places instead of people can aid safety. The U.S. Occupational Safety and Health Administration last year reported that 22 people were killed in roofing-related accidents in the U.S.

Inspecting roofs and solar panels with drone technology can improve that safety record. It can also save on cost: The traditional alternative to having people on rooftops to perform these inspections is using helicopters.

Customers of the DroneDeploy platform can generate a map in minutes, then follow it through a sequence of inspections, guided by camera feeds run through image recognition algorithms.

Using drones, customers can speed up inspections by 80 percent, according to the company.  

“In areas like oil, gas and energy, it’s about zero-downtime inspections of facilities for operations and safety, which is a huge value driver for these customers,” said Pilkington.


You Can’t Touch This: Deep Clean System Flags Potentially Contaminated Surfaces

Amid the continued spread of coronavirus, extra care is being taken by just about everyone to wash hands and wipe down surfaces, from countertops to groceries.

To spotlight potentially contaminated surfaces, hobbyist Nick Bild has come up with Deep Clean, a stereo camera system that flags objects that have been touched in a room.

The device can be used by cleaning crews at hospitals and assisted living facilities or anyone who’d like to know what areas need special attention when trying to prevent disease transmission.

Courtesy of Nick Bild.

Deep Clean uses an NVIDIA Jetson AGX Xavier developer kit as the main processing unit to map out a room, detecting where different objects lie within it. Jetson helps pinpoint the exact location (x,y-coordinates) and depth (z-coordinate) of each object.

When an object overlaps with a person’s hand, which is identified by an open-source body keypoint detection system called OpenPose, those coordinates are stored in the system’s memory. To maintain users’ privacy, only the coordinates are stored, not the images.

Then, the coordinates are used to automatically annotate an image of the unoccupied room, displaying what has been touched and thus potentially contaminated.
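The flagging step itself comes down to a simple geometric test. Here is a rough sketch of the idea (not Bild’s actual code; the detection outputs are placeholders) in which any object whose bounding box contains a detected hand keypoint is recorded as touched:

```python
# A rough reconstruction of Deep Clean's touch-flagging idea: if a hand
# keypoint falls inside an object's bounding box, record the object's
# coordinates. Detection outputs here are placeholders, not Bild's code.
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # x_min, y_min, x_max, y_max

def flag_touched(object_boxes: List[Box],
                 hand_points: List[Tuple[float, float]]) -> List[Box]:
    """Return the boxes that any hand keypoint falls inside."""
    touched = []
    for (x0, y0, x1, y1) in object_boxes:
        if any(x0 <= hx <= x1 and y0 <= hy <= y1 for hx, hy in hand_points):
            touched.append((x0, y0, x1, y1))   # store coordinates only, no imagery
    return touched

# Example: one countertop box, one hand keypoint inside it
print(flag_touched([(10, 10, 50, 50)], [(30, 40)]))  # -> [(10, 10, 50, 50)]
```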

Nick the Bild-er: Equipped with the Right Tools

When news broke in early March that COVID-19 was spreading in the U.S., Bild knew he had to take action.

“I’m not a medical doctor. I’m not a biologist. So, I thought, what can I do as a software developer slash hardware hacker to help?” said Bild.

Juggling a software engineering job by day, as well as two kids at home in Orlando, Florida, Bild faced the challenges of finding the time and resources to get this machine built. He knew getting his hands on a 3D camera would be expensive, which is why he turned to Jetson, an edge AI platform he found to be simultaneously affordable and powerful.

Deep Clean’s stereo camera system. Image courtesy of Nick Bild.

“It’s really a good general-purpose tool that hits the sweet spot of low price and good performance,” said Bild. “You can do a lot of different types of tasks — classify images, sounds, pretty much whatever kind of AI inference you need to do.”

Within a week and a half, Bild had made a 3D camera of his own, which he further developed into the prototype for Deep Clean.

Looking ahead, Bild hopes to improve the device to detect sources of potential contamination beyond human touch, such as cough or sneeze droplets.

Technology to Help the Community

Deep Clean isn’t Bild’s first instance of helping the community through his technological pursuits. He’s developed seven projects since he began using NVIDIA products when the Jetson Nano was released in March 2019.

One of these projects, a pair of AI-enabled glasses dubbed shAIdes, won NVIDIA’s Jetson Community Project of the Month Award for allowing people to switch devices such as a lamp or stereo on and off simply by looking at them and waving. The glasses are especially helpful for those with limited mobility.

Bild calls himself a “prototyper,” as he creates a variety of smart, useful devices like Deep Clean in hopes that someday one will be made available for wide commercial use.

A fast learner who’s committed to making a difference, Bild is always exploring how to make a device better and looking for what to embark upon as his next project.

Anyone can get started on a Jetson project. Learn how on the Jetson developers page.


Heads Up, Down Under: Sydney Suburb Enhances Livability with Traffic Analytics

With a new university campus nearby and an airport under construction, the city of Liverpool, Australia, 27 kilometers southwest of Sydney, is growing fast.

More than 30,000 people are expected to make a daily commute to its central business district. Liverpool needed to know the possible impact on traffic flow and the movement of pedestrians, cyclists and vehicles.

The city already hosts closed-circuit television cameras to monitor safety and security. Each camera captures a large amount of video and data that, due to stringent privacy regulations, is mainly combed through after an incident has been reported.

The challenge before the city was to turn this massive dataset into information that could help it run more efficiently, handle an influx of commuters and keep the place liveable for residents — without compromising anyone’s privacy.

To achieve this goal, the city has partnered with the Digital Living Lab of the University of Wollongong. Part of Wollongong’s SMART Infrastructure Facility, the DLL has developed what it calls the Versatile Intelligent Video Analytics platform. VIVA, for short, unlocks data so that owners of CCTV networks can access real-time, privacy-compliant data to make better informed decisions.

VIVA is designed to convert existing infrastructure into edge-computing devices embedded with the latest AI. The platform’s state-of-the-art deep learning algorithms are developed at DLL on the NVIDIA Metropolis platform. Their video analytics deep-learning models are trained using transfer learning to adapt to use cases, optimized via NVIDIA TensorRT software and deployed on NVIDIA Jetson edge AI computers.

“We designed VIVA to process video feeds as close as possible to the source, which is the camera,” said Johan Barthelemy, lecturer at the SMART Infrastructure Facility of the University of Wollongong. “Once a frame has been analyzed using a deep neural network, the outcome is transmitted and the current frame is discarded.”

Disposing of frames maintains privacy as no images are transmitted. It also reduces the bandwidth needed.
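In code, that pattern is compact. The following is a rough sketch (not DLL’s actual implementation; the detector is a stub and the transmit step is a simple print) of an analyze-then-discard loop in the spirit of VIVA: run inference on each frame at the edge, send only the structured outcome, and drop the pixels:

```python
# A rough sketch of VIVA-style edge analytics: analyze each frame on the
# device, transmit only the structured outcome, discard the pixels.
# The detector below is a stub; the real platform runs TensorRT-optimized
# deep learning models on Jetson hardware.
import json
import cv2  # OpenCV

def detect(frame):
    """Stub for a deep-learning detector returning per-class counts."""
    return {"pedestrian": 0, "cyclist": 0, "vehicle": 0}

cap = cv2.VideoCapture(0)                  # existing CCTV feed (placeholder source)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    outcome = detect(frame)                # inference happens at the edge
    print(json.dumps(outcome))             # stand-in for transmitting the outcome
    del frame                              # the frame itself never leaves the device
```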

Beyond city streets like Liverpool’s, VIVA has been adapted for a wide variety of applications, such as identifying and tracking wildlife; detecting culvert blockages for stormwater management and flash-flood early warnings; and tracking people with thermal cameras to understand mobility behavior during heat waves. It can also distinguish between firefighters searching a building and other occupants, helping identify those who may need help evacuating.

Making Sense of Traffic Patterns

The research collaboration between SMART, Liverpool’s city council and its industry partners is intended to improve the efficiency, effectiveness and accessibility of a range of government services and facilities.

For pedestrians, the project aims to understand where they’re going, their preferred routes and which areas are congested. For cyclists, it’s about the routes they use and ways to improve bicycle usage. For vehicles, understanding movement and traffic patterns, where they stop, and where they park are key.

Understanding mobility within a city formerly required a fleet of costly and fixed sensors, according to Barthelemy. Different models were needed to count specific types of traffic, and manual processes were used to understand how different types of traffic interacted with each other.

Using computer vision on the NVIDIA Jetson TX2 at the edge, the VIVA platform can count the different types of traffic and capture their trajectory and speed. Data is gathered using the city’s existing CCTV network, eliminating the need to invest in additional sensors.

Patterns of movements and points of congestion are identified and predicted to help improve street and footpath layout and connectivity, traffic management and guided pathways. The data has been invaluable in helping Liverpool plan for the urban design and traffic management of its central business district.

Machine Learning Application Built Using NVIDIA Technologies

SMART trained the machine learning applications on its VIVA platform for Liverpool on four workstations powered by a variety of NVIDIA TITAN GPUs, as well as six workstations equipped with NVIDIA RTX GPUs to generate synthetic data and run experiments.

In addition to using open databases such as Open Images, COCO and Pascal VOC for training, DLL created synthetic data via an in-house application based on the Unity Engine. Synthetic data lets the project learn from numerous scenarios that might not otherwise be present at any given time, like rainstorms or masses of cyclists.

“This synthetic data generation allowed us to generate 35,000-plus images per scenario of interest under different weather, time of day and lighting conditions,” said Barthelemy. “The synthetic data generation uses ray tracing to improve the realism of the generated images.”

Inferencing is done with NVIDIA Jetson Nano, NVIDIA Jetson TX2 and NVIDIA Jetson Xavier NX, depending on the use case and processing required.


Sand Safety: Startup’s Lifeguard AI Hits the Beach to Save Lives

A team in Israel is making a splash with AI.

It started when biz school buddies Netanel Eliav and Adam Bismut were looking for a problem to solve that could change the world. The problem found them: Bismut visited the Dead Sea after a drowning and noticed the lack of tech for lifeguards, who scanned the area with age-old binoculars.

The two aspiring entrepreneurs — recent MBA graduates of Ben-Gurion University, in the country’s south — decided this was their problem to solve with AI.

“I have two little girls, and as a father, I know the feeling that parents have when their children are near the water,” said Eliav, the company’s CEO.

They founded Sightbit in 2018 with BGU classmates Gadi Kovler and Minna Shezaf to help lifeguards see dangerous conditions and prevent drownings.

The startup is seed funded by Cactus Capital, the venture arm of their alma mater.

Sightbit is now in pilot testing at Palmachim Beach, a popular escape for sunbathers and surfers in the Palmachim Kibbutz area along the Mediterranean Sea, south of Tel Aviv. The sand dune-lined destination, with its inviting, warm aquamarine waters, gets packed with thousands of daily summer visitors.

But it’s also a place known for deadly rip currents.

Danger Detectors

Sightbit has developed image detection to help spot dangers and aid lifeguards in their work. In collaboration with the Israel Nature and Parks Authority, the Beersheba-based startup has installed three cameras that feed data into a single NVIDIA Jetson AGX Xavier at the lifeguard towers at Palmachim Beach. NVIDIA Metropolis is deployed for video analytics.

The system of danger detectors enables lifeguards to keep tabs on a computer monitor that flags potential safety concerns while they scan the beach.

Sightbit has developed models based on convolutional neural networks and image detection to provide lifeguards with views of potential dangers. Kovler, the company’s CTO, has trained the company’s danger detectors on tens of thousands of images, processed with NVIDIA GPUs in the cloud.

Training on the images wasn’t easy, given sun glare on the ocean, changing weather conditions, crowds of people and swimmers partially submerged in the water, said Shezaf, the company’s CMO.

But Sightbit’s deep learning and proprietary algorithms have enabled it to identify children alone as well as clusters of people. This allows its system to flag children who have strayed from the pack.

Rip Current Recognition

The system also harnesses optical flow algorithms to detect dangerous rip currents, helping lifeguards keep people out of those zones. These algorithms estimate the speed of everything moving in an image, using partial differential equations to compute a motion vector for every pixel across successive frames.
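Dense optical flow of this kind is available in standard libraries. The rough sketch below (not Sightbit’s proprietary code; the video source and thresholds are illustrative assumptions) uses OpenCV’s Farneback method to compute a motion vector per pixel and flag frames where a large region is moving fast, as a rip current might:

```python
# A hedged sketch of dense optical flow with OpenCV's Farneback method:
# one 2D motion vector per pixel, from which a large fast-moving region
# (a possible rip current) can be flagged. Source and thresholds are
# illustrative placeholders, not Sightbit's production values.
import cv2
import numpy as np

cap = cv2.VideoCapture("beach_feed.mp4")        # placeholder video source
ok, prev = cap.read()
if not ok:
    raise SystemExit("no video source")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Dense flow: a 2D velocity vector for every pixel
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    speed = np.linalg.norm(flow, axis=2)        # per-pixel speed magnitude
    if (speed > 4.0).mean() > 0.05:             # illustrative threshold
        print("possible rip current: large fast-moving region")
    prev_gray = gray
```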

Lifeguards can get updates on ocean conditions so when they start work they have a sense of hazards present that day.

“We spoke with many lifeguards. The lifeguard is trying to avoid the next accident. Many people go too deep and get caught in the rip currents,” said Eliav.

Camera feeds from the lifeguard towers, processed on the single compact Jetson AGX Xavier running Metropolis, can offer split-second inference for alerts, tracking, statistics and risk analysis in real time.

The Israel Nature and Parks Authority is planning to have a structure built on the beach to house more cameras for automated safety, according to Sightbit.

COVID-19 Calls 

Palmachim Beach lifeguards have a lot to watch, especially now that people are getting out of their homes for fresh air as the region reopens from COVID-19-related closures.

As part of Sightbit’s beach safety developments, the company had been training its network to spot how far apart people were to help gauge child safety.

This work also directly applies to monitoring social distancing and has attracted the attention of potential customers seeking ways to slow the spread of COVID-19. The Sightbit platform can provide them crowding alerts when a public area is overcrowded and proximity alerts for when individuals are too close to each other, said Shezaf.

The startup has put in extra hours to work with those interested in its tech to help monitor areas and reduce the spread of the pathogen.

“If you want to change the world, you need to do something that is going to affect people immediately without any focus on profit,” said Eliav.

Sightbit is a member of NVIDIA Inception, a virtual accelerator program that helps startups in AI and data science get to market faster.


Floating on Creativity: SuperBlimp Speeds Rendering Workflows with NVIDIA RTX GPUs

Rendering is a critical part of the design workflow. But as audiences and clients expect ever higher-quality graphics, agencies and studios must tap into the latest technology to keep up with rendering needs.

SuperBlimp, a creative production studio based just outside of London, knew there had to be a better way to achieve the highest levels of quality in the least amount of time. They’re leaving CPU rendering behind and moving to NVIDIA RTX GPUs, bringing significant acceleration to the rendering workflows for their unique productions.

After migrating to full GPU rendering, SuperBlimp experienced accelerated render times, making it easier to complete more iterations on their projects and develop creative visuals faster than before.

Blimping Ahead of Rendering With RTX

Because SuperBlimp is a small production studio, they needed the best performance at a low cost, so they turned to NVIDIA GeForce RTX 2080 Ti GPUs.

SuperBlimp had been using NVIDIA GPUs for the past few years, so they were already familiar with the power and performance of GPU acceleration. But they always had one foot in the CPU camp and needed to constantly switch between CPU and GPU rendering.

However, CPU render farms required too much storage space and took too much time. When SuperBlimp finally embraced full GPU rendering, they found RTX GPUs delivered the level of computing power they needed to create 3D graphics and animations on their laptops at a much quicker rate.

Powered by NVIDIA Turing, the most advanced GPU architecture for creators, RTX GPUs provide dedicated ray-tracing cores to help users speed up rendering performance and produce stunning visuals with photorealistic details.

And with NVIDIA Studio Drivers, the artists at SuperBlimp are achieving the best performance on their creative applications. NVIDIA Studio Drivers undergo extensive testing against multi-app creator workflows and multiple revisions of top creative applications, including Adobe Creative Cloud, Autodesk and more.

For one of their recent projects, an award-winning short film titled Playgrounds, SuperBlimp used Autodesk Maya for 3D modeling and Chaos Group’s V-Ray GPU software for rendering. V-Ray enabled the artists to create details that helped produce realistic surfaces, from metallic finishes to plastic materials.

“With NVIDIA GPUs, we saw render times reduced from 3 hours to 15 minutes. This puts us in a great position to create compelling work,” said Antonio Milo, director at SuperBlimp. “GPU rendering opened the door for a tiny studio like us to design and produce even more eye-catching content than before.”

Image courtesy of SuperBlimp.

Now, SuperBlimp renders its projects using NVIDIA GeForce RTX 2080 Ti and GTX 1080 Ti GPUs for incredible rendering speeds, so its artists can complete creative projects with the powerful, flexible and high-quality performance they need.

Learn how NVIDIA GPUs are powering the future of creativity.


Heart of the Matter: AI Helps Doctors Navigate Pandemic

A month after it got FDA approval, a startup’s first product was saving lives on the front lines of the battle against COVID-19.

Caption Health develops Caption AI, software for ultrasound systems. It uses deep learning to empower medical professionals, including those without prior ultrasound experience, to perform echocardiograms quickly and accurately.

The results are images of the heart often worthy of an expert sonographer, helping doctors diagnose and treat critically ill patients.

The coronavirus pandemic provided plenty of opportunities to try out the first dozen systems. Two doctors who used the new tool shared their stories on the condition that their patients remain anonymous.

In March, a 53-year-old diabetic woman with COVID-19 went into cardiac shock in a New York hospital. Without the images from Caption AI, it would have been difficult to clinch the diagnosis, said a doctor on the scene.

The system helped the physician identify heart problems in an 86-year-old man with the virus in the same hospital, helping doctors bring him back to health. It was another case among more than 200 in the facility that was effectively turned into a COVID-19 hospital this spring.

The Caption Health system made a tremendous impact for a staff spread thin, said the doctor. It would have been hard for a trained sonographer to keep up with the demand for heart exams, he added.

Heart Test Becomes Standard Procedure

Caption AI helped doctors in North Carolina determine that a 62-year-old man had COVID-19-related heart damage. Thanks, in part, to the ease of using the system, the hospital now performs echocardiograms for most patients with the virus.

At the height of the pandemic’s first wave, the hospital stationed ultrasound systems with Caption AI in COVID-19 wards. Rather than sending sonographers from unit to unit, which is the usual practice, staff stationed at the wards used the systems. The change reduced staff exposure to the virus and conserved precious protective gear. 

Beyond the pandemic, the system will help hospitals provide urgent services while keeping a lid on rising costs, said a doctor at that hospital.

“AI-enabled machines will be the next big wave in taking care of patients wherever they are,” said Randy Martin, chief medical officer of Caption Health and emeritus professor of cardiology at Emory University, in Atlanta.

Martin joined the startup about four years ago after meeting its founders, who shared expertise and passion for medicine and AI. Today their software “takes a user through 10 standard views of the heart, coaching them through some 90 fine movements experts make,” he said.

“We don’t intend to replace sonographers; we’re just expanding the use of portable ultrasound systems to the periphery for more early detection,” he added.

Coping with Unexpected Demand Spike

In the early days of the pandemic, that expansion couldn’t come fast enough.

In late March, the startup exhausted supplies that included the NVIDIA Quadro P3000 GPUs that run its AI software. Amid the global shutdown, it reached out to its supply chain.

“We are experiencing overwhelming demand for our product,” the company’s CEO wrote, after placing orders for 100 GPUs with a distributor.

Caption Health has systems currently in use at 11 hospitals. It expects to deploy Caption AI at several additional sites in the coming weeks. 

GPUs at the Heart of Automated Heart Tests

The startup currently integrates its software in a portable ultrasound from Terason. It intends to partner with more ultrasound makers in the future. And it advises partners to embed GPUs in their future ultrasound equipment.

The Quadro P3000 in Caption AI runs real-time inference tasks using deep convolutional neural networks. The networks guide operators in positioning the probe that captures images, then automatically choose the highest-quality heart images and interpret them to help doctors make informed decisions.
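The auto-capture step can be pictured as a best-frame search. Here is a rough sketch (not Caption Health’s code; the quality scorer is a placeholder for the company’s CNN) that scores streaming frames and keeps the highest-quality capture:

```python
# Illustrative best-frame selection in the spirit of Caption AI's
# auto-capture: score each incoming ultrasound frame with a quality
# model and keep the highest-scoring one. The scorer is a placeholder.
from typing import Iterable, Optional, Tuple
import numpy as np

def quality_score(frame: np.ndarray) -> float:
    """Placeholder for a CNN that rates diagnostic image quality 0..1."""
    return float(frame.mean()) / 255.0

def best_frame(frames: Iterable[np.ndarray]) -> Optional[Tuple[float, np.ndarray]]:
    best = None
    for frame in frames:
        score = quality_score(frame)
        if best is None or score > best[0]:
            best = (score, frame)      # retain only the current best capture
    return best

# Example with random stand-in frames
stream = (np.random.randint(0, 256, (480, 640), dtype=np.uint8) for _ in range(30))
score, frame = best_frame(stream)
print(f"kept frame with quality {score:.2f}")
```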

The NVIDIA GPU also freed up four CPU cores, making space to process other tasks on the system, such as providing a smooth user experience.

The startup trained its AI models on a database of 1 million echocardiograms from clinical partners. An early study in partnership with Northwestern Medicine and the Minneapolis Heart Institute showed Caption AI helped eight registered nurses with no prior ultrasound experience capture highly accurate images on a wide variety of patients.

Inception Program Gives Startup Momentum

Caption Health, formerly called Bay Labs, was founded in 2015 in Brisbane, Calif. It received a $125,000 prize at a 2017 GTC competition for members of NVIDIA’s Inception program, which gives startups access to technology, expertise and markets.

“Being part of the Inception program has provided us with increased recognition in the field of deep learning, a platform to share our AI innovations with healthcare and deep learning communities, and phenomenal support getting NVIDIA GPUs into our supply chain so we could deliver Caption AI,” said Charles Cadieu, co-founder and president of Caption Health.

Now that its tool has been tested in a pandemic, Caption Health looks forward to opportunities to help save lives across many ailments. The company aims to ride a trend toward more portable systems that extend availability and lower costs of diagnostic imaging.

“We hope to see our technology used everywhere from big hospitals to rural villages to examine people for a wide range of medical conditions,” said Cadieu.

To learn more about Caption Health and other companies like it, watch this webinar on healthcare startups working against COVID-19.


NVIDIA Puts More Tools in Hands of Artists, Designers and Data Scientists Working Remotely

For many organizations, the coronavirus pandemic has created a permanent shift in how their employees work. From now on, they’ll have the option to collaborate at home or in the office.

NVIDIA is giving millions of these professionals around the world a boost with a new version of our virtual GPU software, vGPU July 2020. The software adds support for more workloads and is loaded with features that improve operational efficiency for IT administrators.

GPU virtualization is key to offering everyone from designers to data scientists a flexible way to collaborate on projects that require advanced graphics and computing power, wherever they are.

Employee productivity was the primary concern among organizations addressing remote work due to the COVID-19 pandemic, according to recent research by IDC. When the market intelligence firm interviewed NVIDIA customers using GPU-accelerated virtual desktops, it found organizations with 500-1,000 users experienced a 13 percent increase in productivity, resulting in more than $1 million in annual savings.

According to Alex Herrera, an analyst with Jon Peddie Research/Cadalyst, “In a centralized computing environment with virtualized GPU technology, users no longer have to be tied to their physical workstations. As proven recently through remote work, companies can turn on a dime, enabling anywhere/anytime access to big data without compromising on performance.”

Expanded Support in the Data Center and Cloud with SUSE

NVIDIA has expanded hypervisor support by partnering with SUSE to bring vGPU support to the kernel-based virtual machine (KVM) platform in SUSE Linux Enterprise Server.

Initial offerings will be supported with NVIDIA vComputeServer software, enabling GPU virtualization for AI and data science workloads. This will expand hypervisor platform options for enterprises and cloud service providers that are seeing an increased need to support GPUs.

“Demand for accelerated computing has grown beyond specialized HPC environments into virtualized data centers,” said Brent Schroeder, global chief technology officer at SUSE. “To ensure the needs of business leaders are met, SUSE and NVIDIA have worked to simplify the use of NVIDIA virtual GPUs in SUSE Linux Enterprise Server. These efforts modernize the IT infrastructure and accelerate AI and ML workloads to enhance high-performance and time-sensitive workloads for SUSE customers everywhere.”

Added Support for Immersive Collaboration

NVIDIA CloudXR technology uses NVIDIA RTX and vGPU software to deliver VR and augmented reality across 5G and Wi-Fi networks. vGPU July 2020 adds 120Hz VSync support at resolutions up to 4K, giving CloudXR users an even smoother immersive experience on untethered devices. It creates a level of fidelity that’s indistinguishable from native tethered configurations.

“Streaming AR/VR over Wi-Fi or 5G enables organizations to truly take advantage of its benefits, enabling immersive training, product design and architecture and construction,” said Matt Coppinger, director of AR/VR at VMware. “We’re partnering with NVIDIA to more securely deliver AR and VR applications running on VMware vSphere and NVIDIA Quadro Virtual Workstation, streamed using NVIDIA CloudXR to VMware’s Project VXR client application running on standalone headsets.”

The latest release of vGPU enables a better user experience and manageability needed for demanding workloads like the recently debuted Omniverse AEC Experience, which combines Omniverse, a real-time collaboration platform, with RTX Server and NVIDIA Quadro Virtual Workstation software for the data center. The reference design supports up to two virtual workstations on an NVIDIA Quadro RTX GPU, running multiple workloads such as collaborative, computer-aided design while also providing real-time photorealistic rendering of the model.

With Quadro vWS, an Omniverse-enabled virtual workstation can be provisioned in minutes to new users, anywhere in the world. Users don’t need specialized client hardware, just an internet-connected device, laptop or tablet, and data remains highly secured in the data center.

Improved Operational Efficiency for IT Administrators

New features in vGPU July 2020 help enterprise IT admins and cloud service providers streamline management, boosting their operational efficiency.

This includes cross-branch support, where the host and guest vGPU software can be on different versions, easing upgrades and large deployments.

IT admins can move quicker to the latest hypervisor versions to pick up fixes, security patches and new features, while staggering deployments for end-user images.

Enterprise data centers running VMware vSphere will see improved operational efficiency by having the ability to manage vGPU powered VMs with the latest release of VMware vRealize Operations.

As well, VMware recently added Distributed Resource Scheduler support for GPU-enabled VMs into vSphere. Now, vSphere 7 introduces a new feature called “Assignable Hardware,” which enhances initial placement so that a VM can be automatically “placed” on a host that has exactly the right GPU and profile available before powering it on.

For IT managing large deployments, this means reducing deployment time of new VMs to a few minutes, as opposed to a manual process that can take hours. As well, this feature works with VMware’s vSphere High Availability, so if a host fails for any reason, a GPU-enabled VM can be automatically restarted on another host with the right GPU resources.

Availability

The NVIDIA vGPU July 2020 release is coming soon. Learn more at nvidia.com/virtualization and watch this video.
