Startup’s AI Platform Allows Contact-Free Hospital Interactions

Hands-free phone calls and touchless soap dispensers have been the norm for years. Next up, contact-free hospitals.

San Francisco-based startup Ouva has created a hospital intelligence platform that monitors patient safety, acts as a patient assistant and provides a sensory experience in waiting areas — without the need for anyone to touch anything.

The platform uses the NVIDIA Clara Guardian application framework so its optical sensors can take in, analyze and provide healthcare professionals with useful information, like whether a patient with high fall-risk is out of bed. The platform is optimized on NVIDIA GPUs and its edge deployments use the NVIDIA Jetson TX1 module.

Ouva is a member of NVIDIA Inception, a program that provides AI startups with go-to-market support, expertise and technology. Inception partners also have access to NVIDIA’s technical team.

Dogan Demir, founder and CEO of Ouva, said, “The Inception program informs us of hardware capabilities that we didn’t even know about, which really speeds up our work.”

Patient Care Automation 

The Ouva platform automates patient monitoring, which is critical during the pandemic.

“To prevent the spread of COVID-19, we need to minimize contact between staff and patients,” said Demir. “With our solution, you don’t need to be in the same room as a patient to make sure that they’re okay.”

More and more hospitals use video monitoring to ensure patient safety, he said, but without intelligent video analytics, this can entail a single nurse trying to keep an eye on up to 100 video feeds at once to catch an issue in a patient’s room.

By detecting changes in patient movement and alerting workers of them in real time, the Ouva platform allows nurses to pay attention to the right patient at the right time.

The Ouva platform alerts nurses to changes in patient movement.

“The platform minimizes the time that nurses may be in the dark about how a patient is doing,” said Demir. “This in turn reduces the need for patients to be transferred to the ICU due to situations that could’ve been prevented, like a fall or brain injury progression due to a seizure.”

According to Ouva’s research, the average hospitalization cost for a fall injury is $35,000, with an additional $43,000 estimated per person with a pressure injury like an ulcer from the hospital bed. This means that by preventing falls and monitoring a patient’s position changes, Ouva could help save $4 million per year for a 100-bed facility.

Ouva’s system also performs personal protective equipment checks and skin temperature screenings, as well as flags contaminated areas for cleaning, which can reduce a nurse’s hours and contact with patients.

Radboud University Medical Center in the Netherlands recently integrated Ouva’s platform for 10 of its neurology wards.

“Similar solutions typically require contact with the patient’s body, which creates an infection and maintenance risk,” said Dr. Harry van Goor from the facility. “The Ouva solution centrally monitors patient safety, room hygiene and bed turnover in real time while preserving patients’ privacy.”

Patient Assistant and Sensory Experience

The platform can also guide patients through a complex hospital facility by providing answers to voice-activated questions about building directions. Medical City Hospital in Dallas was the first to pick up this voice assistant solution for its Heart and Spine facilities at the start of the COVID-19 pandemic.

In waiting areas, patients can participate in Ouva’s touch-free sensory experience by gesturing at 60-foot video screens that wrap around walls, featuring images of gardens, beaches and other interactive locations.

The goal of the sensory experience, made possible by NVIDIA GPUs, is to reduce waiting room anxiety and improve patient health outcomes, according to Demir.

“The amount of pain that a patient feels during treatment can be based on their perception of the care environment,” said Demir. “We work with physical and occupational therapists to design interactive gestures that allow people to move their bodies in ways that both improve their health and their perception of the hospital environment.”

Watch Ouva’s sensory experience in action:

Stay up to date with the latest healthcare news from NVIDIA and check out our COVID-19 research hub.

DIY with AI: GTC to Host NVIDIA Deep Learning Institute Courses for Anyone, Anywhere

The NVIDIA Deep Learning Institute is launching three new courses, which can be taken for the first time ever at the GPU Technology Conference next month. 

The new instructor-led workshops cover fundamentals of deep learning, recommender systems and Transformer-based applications. Anyone connected online can join for a nominal fee, and participants will have access to a fully configured, GPU-accelerated server in the cloud. 

DLI instructor-led trainings consist of hands-on remote learning taught by NVIDIA-certified experts in virtual classrooms. Participants can interact with their instructors and peers in real time. They can whiteboard ideas, tackle interactive coding challenges and earn a DLI certificate of subject competency to support their professional growth.

DLI at GTC is offered globally, with several courses available in Korean, Japanese and Simplified Chinese for attendees in their respective time zones.

New DLI workshops launching at GTC include:

  • Fundamentals of Deep Learning — Build the confidence to take on a deep learning project by learning how to train a model, work with common data types and model architectures, use transfer learning between models, and more.
  • Building Intelligent Recommender Systems — Create different types of recommender systems: content-based, collaborative filtering, hybrid, and more. Learn how to use the open-source cuDF library, Apache Arrow, alternating least squares, CuPy and TensorFlow 2 to do so (a brief cuDF sketch follows this list).
  • Building Transformer-Based Natural Language Processing Applications — Learn about NLP topics like Word2Vec and recurrent neural network-based embeddings, as well as Transformer architecture features and how to improve them. Use pre-trained NLP models for text classification, named-entity recognition and question answering, and deploy refined models for live applications.
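
As a taste of the GPU-accelerated data preparation the recommender-systems workshop covers, here is a minimal, hypothetical cuDF sketch that computes an average rating per item on the GPU. The column names and values are made up for illustration; the workshop itself goes much further, into collaborative filtering with alternating least squares, CuPy and TensorFlow 2.

```python
import cudf

# Hypothetical user-item interactions loaded into a GPU DataFrame.
ratings = cudf.DataFrame({
    "user_id": [0, 0, 1, 2, 2, 2],
    "item_id": [10, 11, 10, 11, 12, 13],
    "rating":  [4.0, 5.0, 3.0, 2.0, 5.0, 4.0],
})

# A simple popularity signal: average rating per item, computed on the GPU.
item_popularity = (
    ratings.groupby("item_id")["rating"]
           .mean()
           .sort_values(ascending=False)
)
print(item_popularity.head())
```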

Other DLI offerings at GTC will include:

  • Fundamentals of Accelerated Computing with CUDA Python — Dive into how to use Numba to compile NVIDIA CUDA kernels from NumPy universal functions, as well as create and launch custom CUDA kernels, while applying key GPU memory management techniques (a brief Numba sketch follows this list).
  • Applications of AI for Predictive Maintenance — Leverage predictive maintenance and identify anomalies to manage failures and avoid costly unplanned downtimes, use time-series data to predict outcomes using machine learning classification models with XGBoost, and more.
  • Fundamentals of Accelerated Data Science with RAPIDS — Learn how to use cuDF and Dask to ingest and manipulate large datasets directly on the GPU, applying GPU-accelerated machine learning algorithms including XGBoost, cuGRAPH and cuML to perform data analysis at massive scale.
  • Fundamentals of Accelerated Computing with CUDA C/C++ — Find out how to accelerate CPU-only applications to run their latent parallelism on GPUs, using techniques like essential CUDA memory management to optimize accelerated applications.
  • Fundamentals of Deep Learning for Multi-GPUs — Scale deep learning training to multiple GPUs, significantly shortening the time required to train lots of data and making solving complex problems with deep learning feasible.
  • Applications of AI for Anomaly Detection — Discover how to implement multiple AI-based solutions to identify network intrusions, using accelerated XGBoost, deep learning-based autoencoders and generative adversarial networks.
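
As a flavor of the CUDA Python workshop's starting point, here is a minimal sketch of a NumPy universal function compiled into a CUDA kernel with Numba. It assumes an NVIDIA GPU and the CUDA toolkit are available; the function and data are illustrative only.

```python
import numpy as np
from numba import vectorize

# Compile a NumPy universal function into a CUDA kernel.
# target="cuda" requires an NVIDIA GPU; the function itself is illustrative.
@vectorize(["float32(float32, float32)"], target="cuda")
def rel_diff(a, b):
    return (a - b) / (a + b)

x = np.arange(1, 1_000_001, dtype=np.float32)
y = np.ones_like(x)
print(rel_diff(x, y)[:5])  # evaluated element-wise on the GPU
```

Because an explicit signature is given, Numba compiles the kernel up front and then applies it element-wise across the input arrays on the GPU.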

With more than 2 million registered NVIDIA developers working on technological breakthroughs to solve the world’s toughest problems, the demand for deep learning expertise is greater than ever. The full DLI course catalog includes a variety of topics for anyone interested in learning more about AI, accelerated computing and data science.

Get a glimpse of the DLI experience:

Workshops have limited seating, and the early-bird registration deadline is Sept. 25. Register now.

What Is MLOps?

MLOps may sound like the name of a shaggy, one-eyed monster, but it’s actually an acronym that spells success in enterprise AI.

A shorthand for machine learning operations, MLOps is a set of best practices for businesses to run AI successfully.

MLOps is a relatively new field because commercial use of AI is itself fairly new.

MLOps: Taking Enterprise AI Mainstream

The Big Bang of AI sounded in 2012 when a researcher won an image-recognition contest using deep learning. The ripples expanded quickly.

Today, AI translates web pages and automatically routes customer service calls. It’s helping hospitals read X-rays, banks calculate credit risks and retailers stock shelves to optimize sales.

In short, machine learning, one part of the broad field of AI, is set to become as mainstream as software applications. That’s why the process of running ML needs to be as buttoned down as the job of running IT systems.

Machine Learning Layered on DevOps

MLOps is modeled on the existing discipline of DevOps, the modern practice of efficiently writing, deploying and running enterprise applications. DevOps got its start a decade ago as a way for warring tribes of software developers (the Devs) and IT operations teams (the Ops) to collaborate.

MLOps adds to the team the data scientists, who curate datasets and build AI models that analyze them. It also includes ML engineers, who run those datasets through the models in disciplined, automated ways.

MLOps combines machine learning, application development and IT operations. Source: Neal Analytics

It’s a big challenge in raw performance as well as management rigor. Datasets are massive and growing, and they can change in real time. AI models require careful tracking through cycles of experiments, tuning and retraining.

So, MLOps needs a powerful AI infrastructure that can scale as companies grow. For this foundation, many companies use NVIDIA DGX systems, CUDA-X and other software components available on NVIDIA’s software hub, NGC.

Lifecycle Tracking for Data Scientists

With an AI infrastructure in place, an enterprise data center can layer on the following elements of an MLOps software stack:

  • Data sources and the datasets created from them
  • A repository of AI models tagged with their histories and attributes
  • An automated ML pipeline that manages datasets, models and experiments through their lifecycles
  • Software containers, typically based on Kubernetes, to simplify running these jobs

It’s a heady set of related jobs to weave into one process.

Data scientists need the freedom to cut and paste datasets together from external sources and internal data lakes. Yet their work and those datasets need to be carefully labeled and tracked.

Likewise, they need to experiment and iterate to craft great models well torqued to the task at hand. So they need flexible sandboxes and rock-solid repositories.

And they need ways to work with the ML engineers who run the datasets and models through prototypes, testing and production. It’s a process that requires automation and attention to detail so models can be easily interpreted and reproduced.

Today, these capabilities are becoming available as part of cloud-computing services. Companies that see machine learning as strategic are creating their own AI centers of excellence using MLOps services or tools from a growing set of vendors.

Gartner’s view of the machine-learning pipeline

Data Science in Production at Scale

In the early days, companies such as Airbnb, Facebook, Google, NVIDIA and Uber had to build these capabilities themselves.

“We tried to use open source code as much as possible, but in many cases there was no solution for what we wanted to do at scale,” said Nicolas Koumchatzky, a director of AI infrastructure at NVIDIA.

“When I first heard the term MLOps, I realized that’s what we’re building now and what I was building before at Twitter,” he added.

Koumchatzky’s team at NVIDIA developed MagLev, the MLOps software that hosts NVIDIA DRIVE, our platform for creating and testing autonomous vehicles. As part of its foundation for MLOps, it uses the NVIDIA Container Runtime and Apollo, a set of components developed at NVIDIA to manage and monitor Kubernetes containers running across huge clusters.

Laying the Foundation for MLOps at NVIDIA

Koumchatzky’s team runs its jobs on NVIDIA’s internal AI infrastructure based on GPU clusters called DGX PODs.  Before the jobs start, the infrastructure crew checks whether they are using best practices.

First, “everything must run in a container — that spares an unbelievable amount of pain later looking for the libraries and runtimes an AI application needs,” said Michael Houston, whose team builds NVIDIA’s AI systems including Selene, a DGX SuperPOD recently ranked the most powerful industrial computer in the U.S.

Among the team’s other checkpoints (a toy sketch of one such check follows the list), jobs must:

  • Launch containers with an approved mechanism
  • Prove the job can run across multiple GPU nodes
  • Show performance data to identify potential bottlenecks
  • Show profiling data to ensure the software has been debugged
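
The checkpoints above are process requirements rather than code, but a toy pre-flight check hints at how one of them might be automated. The sketch below is not NVIDIA's internal tooling and assumes PyTorch is installed; it simply confirms a container can see enough GPUs before a job is queued.

```python
import torch

def preflight_check(min_gpus: int = 2) -> None:
    """Toy check: confirm the container sees enough GPUs before queueing a job."""
    if not torch.cuda.is_available():
        raise RuntimeError("No CUDA devices are visible inside this container.")
    found = torch.cuda.device_count()
    if found < min_gpus:
        raise RuntimeError(f"Job requires {min_gpus} GPUs, but only {found} found.")
    for i in range(found):
        print(f"GPU {i}: {torch.cuda.get_device_name(i)}")

preflight_check()
```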

The maturity of MLOps practices used in business today varies widely, according to Edwin Webster, a data scientist who started the MLOps consulting practice a year ago for Neal Analytics and wrote an article defining MLOps. At some companies, data scientists still squirrel away models on their personal laptops; at others, teams turn to big cloud-service providers for a soup-to-nuts service, he said.

Two MLOps Success Stories

Webster shared success stories from two of his clients.

One involves a large retailer that used MLOps capabilities in a public cloud service to create an AI service that reduced waste by 8-9 percent with daily forecasts of when to restock shelves with perishable goods. A budding team of data scientists at the retailer created datasets and built models; the cloud service packed key elements into containers, then ran and managed the AI jobs.

Another involves a PC maker that developed software using AI to predict when its laptops would need maintenance so it could automatically install software updates. Using established MLOps practices and internal specialists, the OEM wrote and tested its AI models on a fleet of 3,000 notebooks. The PC maker now provides the software to its largest customers.

Many, but not all, Fortune 100 companies are embracing MLOps, said Shubhangi Vashisth, a senior principal analyst following the area at Gartner. “It’s gaining steam, but it’s not mainstream,” she said.

Vashisth co-authored a white paper that lays out three steps for getting started in MLOps: Align stakeholders on the goals, create an organizational structure that defines who owns what, then define responsibilities and roles — Gartner lists a dozen of them.

Gartner refers to the overall MLOps process as the machine learning development lifecycle (MLDLC).

Beware Buzzwords: AIOps, DLOps, DataOps, and More

Don’t get lost in a forest of buzzwords that have grown up along this avenue. The industry has clearly coalesced its energy around MLOps.

By contrast, AIOps is a narrower practice of using machine learning to automate IT functions. One part of AIOps is IT operations analytics, or ITOA. Its job is to examine the data AIOps generates to figure out how to improve IT practices.

Similarly, some have coined the terms DataOps and ModelOps to refer to the people and processes for creating and managing datasets and AI models, respectively. Those are two important pieces of the overall MLOps puzzle.

Interestingly, every month thousands of people search for the meaning of DLOps. They may imagine DLOps is IT operations for deep learning. But the industry uses the term MLOps, not DLOps, because deep learning is a part of the broader field of machine learning.

Despite the many queries, you’d be hard pressed to find anything online about DLOps. By contrast, household names like Google and Microsoft as well as up-and-coming companies like Iguazio and Paperspace have posted detailed white papers on MLOps.

MLOps: An Expanding Software and Services Smorgasbord

Those who prefer to let someone else handle their MLOps have plenty of options.

Major cloud-service providers like Alibaba, AWS and Oracle are among several that offer end-to-end services accessible from the comfort of your keyboard.

For users who spread their work across multiple clouds, Databricks’ MLflow supports MLOps services that work with multiple providers and multiple programming languages, including Python, R and SQL. Other cloud-agnostic alternatives include open source software such as Polyaxon and Kubeflow.
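
As a small illustration of what lifecycle tracking looks like in practice, here is a minimal MLflow sketch in Python that records the parameters and a metric of one training run. The experiment name, hyperparameters and metric value are placeholders, not a recommendation of any particular setup.

```python
import mlflow

mlflow.set_experiment("demand-forecast")      # hypothetical experiment name

with mlflow.start_run():
    # Hypothetical hyperparameters for this run.
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("n_estimators", 200)

    # ... training would happen here ...
    val_auc = 0.91                            # placeholder metric value

    mlflow.log_metric("val_auc", val_auc)
```

Runs logged this way can later be compared in the MLflow UI or queried programmatically, which is what keeps experiments reproducible as models move through the pipeline.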

Companies that believe AI is a strategic resource they want behind their firewall can choose from a growing list of third-party providers of MLOps software. Compared to open-source code, these tools typically add valuable features and are easier to put into use.

NVIDIA certified products from six of them as part of its DGX-Ready Software program:

  • Allegro AI
  • cnvrg.io
  • Core Scientific
  • Domino Data Lab
  • Iguazio
  • Paperspace

All six vendors provide software to manage datasets and models that can work with Kubernetes and NGC.

It’s still early days for off-the-shelf MLOps software.

Gartner tracks about a dozen vendors offering MLOps tools, including ModelOp and ParallelM, now part of DataRobot, said analyst Vashisth. Beware offerings that don’t cover the entire process, she warned: they force users to import and export data between programs they must stitch together themselves, a tedious and error-prone process.

The edge of the network, especially for partially connected or unconnected nodes, is another underserved area for MLOps so far, said Webster of Neal Analytics.

Koumchatzky, of NVIDIA, puts tools for curating and managing datasets at the top of his wish list for the community.

“It can be hard to label, merge or slice datasets or view parts of them, but there is a growing MLOps ecosystem to address this. NVIDIA has developed these internally, but I think it is still undervalued in the industry,” he said.

Long term, MLOps needs the equivalent of IDEs, the integrated software development environments like Microsoft Visual Studio that app developers depend on. Meanwhile, Koumchatzky and his team craft their own tools to visualize and debug AI models.

The good news is there are plenty of products for getting started in MLOps.

In addition to software from its partners, NVIDIA provides a suite of mainly open-source tools for managing an AI infrastructure based on its DGX systems, and that’s the foundation for MLOps.

Many of these tools are available on NGC and other open source repositories. Pulling these ingredients into a recipe for success, NVIDIA provides a reference architecture for creating GPU clusters called DGX PODs.

In the end, each team needs to find the mix of MLOps products and practices that best fits its use cases. They all share a goal of creating an automated way to run AI smoothly as a daily part of a company’s digital life.

 

In a Class of Its Own: New Mercedes-Benz S-Class Sports Next-Gen AI Cockpit, Powered by NVIDIA

The Mercedes-Benz S-Class has always combined the best in engineering with a legendary heritage of craftsmanship. Now, the flagship sedan is adding intelligence to the mix, fusing AI with the embodiment of automotive luxury.

At a world premiere event, the legendary premium automaker debuted the redesigned flagship S-Class sedan. It features the all-new MBUX AI cockpit system, with an augmented reality head-up display, AI voice assistant and rich interactive graphics to enable every passenger in the vehicle, not just the driver, to enjoy personalized, intelligent features.

“This S-Class is going to be the most intelligent Mercedes ever,” said Mercedes-Benz CEO Ola Källenius during the virtual launch.

Like its predecessor, the next-gen MBUX system runs on the high-performance, energy-efficient compute of NVIDIA GPUs for instantaneous AI processing and sharp graphics.

“Mercedes-Benz is a perfect match for NVIDIA, because our mission is to use AI to solve problems no ordinary computers can,” said NVIDIA founder and CEO Jensen Huang, who took the new S-Class for a spin during the launch. “The technology in this car is remarkable.”

Jensen was featured alongside Grammy award-winning artist Alicia Keys and Formula One driver Lewis Hamilton at the premiere event, each showcasing the latest innovations of the premium sedan.

Watch NVIDIA founder and CEO Jensen Huang take the all-new Mercedes-Benz S-Class for a spin.

The S-Class’s new intelligent system represents a significant step toward a software-defined, autonomous future. When more automated and self-driving features are integrated into the car, the driver and passengers alike can enjoy the same entertainment and productivity features, experiencing a personalized ride, no matter where they’re seated.

Unparalleled Performance

AI cockpits orchestrate crucial safety and convenience features, constantly learning to continuously deliver joy to the customer.

“For decades, the magic moment in car manufacturing was when the chassis received its engine,” Källenius said. “Today, there’s another magic moment that is incredibly important — the ‘marriage’ of the car’s body and its brain — the all-new head unit with the next-level MBUX-system.”

A vehicle’s cockpit typically requires a collection of electronic control units and switches to perform basic functions, such as powering entertainment or adjusting temperature. Leveraging NVIDIA technology, Mercedes-Benz was able to consolidate these components into an AI platform — removing 27 switches and buttons — to simplify the architecture while creating more space to add new features.

And the S-Class’s new compute headroom is as massive as its legroom. With NVIDIA at the helm, the premium sedan contains about the same computing power as 60 average vehicles. Just one chip each drives the 3D instrument cluster, the infotainment system and the rear-seat displays.

“There’s more computing power packed into this car than any car, ever — three powerful computer chips with NVIDIA GPUs,” Jensen said. “Those three computer chips represent the brain and the nervous system of this car.”

Effortless Convenience

The new MBUX system makes the cutting edge in graphics, passenger detection and natural language processing seem effortless.

The S-Class features five large screens, each with a brilliant display — including a 12.8-inch central infotainment screen with OLED technology — making vehicle and comfort controls even more user-friendly for every passenger. The new 3D driver display gives a spatial view at the touch of a button, providing a realistic view of the car in its surroundings.

The system delivers even more security, enabling fingerprint, face and voice recognition, alongside a traditional PIN to access personal features. Its cameras can detect if a passenger is about to exit into oncoming traffic and warn them before they open the door. The same technology is used to monitor whether a child seat is correctly attached and if the driver is paying attention to the road.

MBUX can even carry on more of a conversation. It can answer a wider range of questions, some without the key phrase “Hey Mercedes,” and can interact in 27 languages, including Thai and Czech.

These futuristic functions are the result of over 30 million lines of code written by hundreds of engineers, who are continuously developing new and innovative ways for customers to enjoy their drive.

“These engineers are practically in your garage and they’re constantly working on the software, improving it, enhancing it, creating more features, and will update it over the air,” Jensen said. “Your car can now get better and better over time.”

Up Your Creative Game: GeForce RTX 30 Series GPUs Amp Up Performance

Creative workflows are riddled with hurry up and wait.

GeForce RTX 30 Series GPUs, powered by our second-generation RTX architecture, aim to reduce the wait, giving creators more time to focus on what matters: creating amazing content.

These new graphics cards deliver faster ray tracing and the next generation of AI-powered tools, turning the tedious tasks in creative workflows into things of the past.

With up to 24GB of new, blazing-fast GDDR6X memory, they’re capable of powering the most demanding multi-app workflows, 8K HDR video editing and working with extra-large 3D models.

Plus, two new apps, available to all NVIDIA RTX users, are joining NVIDIA Studio. NVIDIA Broadcast turns any room into a home broadcast studio with AI-enhanced video and voice comms. NVIDIA Omniverse Machinima enables creators to tell amazing stories with video game assets, animated by AI.

Ray Tracing at the Speed of Light

The next generation of dedicated ray tracing cores and improved CUDA performance on GeForce RTX 30 Series GPUs speeds up 3D rendering times by up to 2x across top renderers.

Chart: relative performance of GeForce RTX 30 Series GPUs in creative apps.

The RT Cores also feature new hardware acceleration for ray-traced motion blur rendering, a common but computationally intensive technique used to give 3D visuals cinematic flair. To date, it has required either an inaccurate motion vector-based post-process or an accurate but time-consuming rendering step. Now, with RTX 30 Series GPUs and RT Core-accelerated apps like Blender Cycles, creators can enjoy up to 5x faster motion blur rendering than on prior-generation RTX GPUs.

Motion blur effect rendered in Blender Cycles.

Next-Gen AI Means Less Wait and More Create

GeForce RTX 30 Series GPUs are enabling the next wave of AI-powered creative features, reducing or even eliminating repetitive creative tasks such as image denoising, reframing and retiming of video, and creation of textures and materials.

Along with the release of our next-generation RTX GPUs, NVIDIA is bringing DLSS — real-time super resolution that uses the power of AI to boost frame rates — to creative apps. D5 Render and SheenCity Mars are the first design apps to add DLSS support, enabling crisp, real-time exploration of designs.

Render of a living space created in D5 Render using GeForce RTX 30 Series GPUs. Image courtesy of D5 Render.

Hardware That Zooms

Increasingly, complex digital content creation requires hardware that can run multiple apps concurrently. This requires a large frame buffer on the GPU. Without sufficient memory, systems start to chug, wasting precious time as they swap geometry and textures in and out of each app.

The new GeForce RTX 3090 GPU houses a massive 24GB of video memory. This lets animators and 3D artists work with the largest 3D models. Video editors can tackle the toughest 8K scenes. And creators of all types can stay hyper-productive in multi-app workflows.

Model, edit and export larger scenes faster with GeForce RTX 30 Series GPUs.

The new GPUs also use PCIe 4.0, doubling the connection speed between the GPU and the rest of the PC. This improves performance when working with ultra-high-resolution and HDR video.

GeForce RTX 30 Series graphics cards are also the first discrete GPUs with decode support for the AV1 codec, enabling playback of high-resolution video streams up to 8K HDR using significantly less bandwidth.

AI-Accelerated Studio Apps

Two new Studio apps are making their way into creatives’ arsenals this fall. Best of all, they’re free for NVIDIA RTX users.

NVIDIA Broadcast upgrades any room into an AI-powered home broadcast studio. It transforms standard webcams and microphones into smart devices, offering audio noise removal, virtual background effects and webcam auto framing compatible with most popular live streaming, video conferencing and voice chat applications.

Access AI-powered features and download the new NVIDIA Broadcast app later this month.

NVIDIA Omniverse Machinima enables creators to tell amazing stories with video game assets, animated by NVIDIA AI technologies. Through NVIDIA Omniverse, creators can import assets from supported games or most third-party asset libraries, then automatically animate characters using an AI-based pose estimator and footage from their webcam. Characters’ faces can come to life with only a voice recording using NVIDIA’s new Audio2Face technology.

Master the art of storytelling using 3D objects with NVIDIA Omniverse Machinima powered by AI.

NVIDIA is also updating GeForce Experience, our companion app for GeForce GPUs, in September to support desktop and application capture at up to 8K and in HDR, enabling creators to record video at incredibly high resolution and dynamic range.

These apps, like most of the world’s top creative apps, are supported by NVIDIA Studio Drivers, which provide optimal levels of performance and reliability.

GeForce RTX 30 Series: Get Creating Soon

GeForce RTX 30 Series graphics cards are available starting September 17.

While you wait for the next generation of creative performance, perfect your creative skillset by visiting the NVIDIA Studio YouTube channel to watch tutorials and tips and tricks from industry-leading artists.

‘Giant Step into the Future’: NVIDIA CEO Unveils GeForce RTX 30 Series GPUs

A decade ago, GPUs were judged on whether they could power through Crysis. How quaint.

The latest NVIDIA Ampere GPU architecture, unleashed in May to power the world’s supercomputers and hyperscale data centers, has come to gaming.

And with NVIDIA CEO Jensen Huang Tuesday unveiling the new GeForce RTX 30 Series GPUs, it’s delivering NVIDIA’s “greatest generational leap in company history.”

The GeForce RTX 30 Series, NVIDIA’s second-generation RTX GPUs, deliver up to 2x the performance and 1.9x the power efficiency of previous-generation GPUs.

NVIDIA CEO Jensen Huang spoke from the kitchen of his Silicon Valley home.

“If the last 20 years was amazing, the next 20 will seem like nothing short of science fiction,” Huang said, speaking from the kitchen of his Silicon Valley home. Today’s NVIDIA Ampere launch is “a giant step into the future,” he added.

In addition to the trio of new GPUs — the flagship GeForce RTX 3080, the GeForce RTX 3070 and the “ferocious” GeForce RTX 3090 — Huang introduced a slate of new tools for GeForce gamers.

They include NVIDIA Reflex, which makes competitive gamers quicker; NVIDIA Omniverse Machinima, for those using real-time computer graphics engines to create movies; and NVIDIA Broadcast, which harnesses AI to build virtual broadcast studios for streamers.

Up Close and Personal

A pair of demos tell the tale. In May, we released a demo called Marbles — featuring the world’s first fully path-traced photorealistic real-time graphics.

Chock-full of reflections, textures and light sources, the demo is basically a movie that’s rendered in real time as a marble rolls through an elaborate workshop rich with different materials and textures. It’s a stunningly realistic environment.

In May, Huang showed Marbles running on our top-end Turing architecture-based Quadro RTX 8000 graphics card at 25 frames per second at 720p resolution. An enhanced version of the demo, Marbles at Night, runs at 30 fps at 1440p on NVIDIA Ampere.

More telling is a demo that can’t be shown remotely — gameplay running at 60 fps on an 8K LG OLED TV — because video-streaming services don’t support that level of quality.

The GeForce RTX 3090 is the world’s first GPU able to play blockbuster games at 60 fps in 8K resolution, which is 4x the pixels of 4K and 16x the pixels of 1080p.

We showed it to a group of veteran streamers in person to get their reactions as they played through some of the latest games.

“You can see wear and tear on the treads,” one said, tilting his head to the side and eyeing the screen in amazement.

“This feels like a Disneyland experience,” another added.

Gamers Battle COVID-19

Huang started his news-packed talk by thanking the more than one million gamers who pooled their GPUs through Folding@home to fight the COVID-19 coronavirus.

The result was 2.8 exaflops of computing power, 5x the processing power of the world’s largest supercomputer, which Huang said captured the moment the virus infects a human cell.

“Thank you all for joining this historic fight,” Huang said.

RTX a “Home Run”

For 40 years, since NVIDIA researcher Turner Whitted published his groundbreaking paper on ray tracing, computer science researchers have chased the dream of creating super-realistic virtual worlds with real-time ray tracing, Huang said.

NVIDIA focused intense effort over the past 10 years to realize real-time ray tracing on a large scale. At the SIGGRAPH graphics conference two years ago, NVIDIA unveiled the first NVIDIA RTX GPU.

Based on NVIDIA’s Turing architecture, it combined programmable shaders, RT Cores to accelerate ray-triangle and ray-bounding-box intersections, and the Tensor Core AI processing pipeline.

“Now, two years later, it is clear we have reinvented computer graphics,” Huang said, citing support from all major 3D APIs, tools and game engines, noting that hundreds of RTX-enabled games are now in development. “RTX is a home run,” he said.

Just ask your kids, if you can tear them away from their favorite games for a moment.

Fortnite, from Epic Games, is the latest global sensation to turn on NVIDIA RTX real-time ray-tracing technology, Huang announced.

Now Minecraft and Fortnite, two of the most popular games in the world, have RTX On.

No kid will miss the significance of these announcements.

A Giant Leap in Performance 

The NVIDIA Ampere architecture, the second generation of RTX GPUs, “is a giant leap in performance,” Huang said.

Built on a custom 8N manufacturing process, the flagship GeForce RTX 3080 has 28 billion transistors. It connects to Micron’s new GDDR6X memory — the fastest graphics memory ever made.

“The days of just relying on transistor performance scaling is over,” Huang said.

GeForce RTX 3080: The New Flagship

Starting at $699, the RTX 3080 is the perfect mix of fast performance and cutting-edge capabilities, leading Huang to declare it NVIDIA’s “new flagship GPU.”

Designed for 4K gaming, the RTX 3080 features high-speed GDDR6X memory running at 19Gbps, resulting in performance that outpaces the RTX 2080 Ti by a wide margin.

It’s up to 2X faster than the original RTX 2080. It consistently delivers more than 60 fps at 4K resolution — with RTX ON.

GeForce RTX 3070: The Sweet Spot

NVIDIA CEO Jensen Huang introducing the GeForce RTX 3070.

Making more power available to more people is the RTX 3070, starting at $499.

And it’s faster than the $1,200 GeForce RTX 2080 Ti — at less than half the price.

The RTX 3070 hits the sweet spot of performance for games running with eye candy turned up.

GeForce RTX 3090: A Big, Ferocious GPU

At the apex of the lineup is the RTX 3090. It’s the fastest GPU ever built for gaming and creative types and is designed to power next-generation content at 8K resolution.

“There is clearly a need for a giant GPU that is available all over the world,” Huang said. “So, we made a giant Ampere.”

And the RTX 3090 is a giant of a GPU. Its Herculean 24GB of GDDR6X memory running at 19.5Gbps can tackle the most challenging AI algorithms and feed massive data-hungry workloads for true 8K gaming.

“RTX 3090 is a beast — a big ferocious GPU,” Huang said. “A BFGPU.”

At 4K it’s up to 50 percent faster than the TITAN RTX before it.

It even comes with a silencer — a three-slot, dual-axial, flow-through design — that’s up to 10x quieter and keeps the GPU up to 30 degrees C cooler than the TITAN RTX.

“The 3090 is so big that for the very first time, we can play games at 60 frames per second in 8K,” Huang said. “This is insane.”

Faster Reflexes with NVIDIA Reflex

For the 75 percent of GeForce gamers who play esports, Huang announced the release this month of NVIDIA Reflex with our Game Ready Driver.

In Valorant, a fast-paced action game, for example, Huang showed a scenario where an opponent, traveling at 1,500 pixels per second, is only visible for 180 milliseconds.

But a typical gamer has a reaction time of 150 ms — from photon to action. “You can only hit the opponent if your PC adds less than 30 ms,” Huang explained.

Yet right now, most gamers have latencies greater than 30 ms — many up to 100 ms, Huang noted.

NVIDIA Reflex optimizes the rendering pipeline across CPU and GPU to reduce latency by up to 50 ms, he said.

“Over 100 million GeForce gamers will instantly become more competitive,” Huang said.

Turn Any Room into a Broadcast Studio

For live streamers, Huang announced NVIDIA Broadcast. It transforms standard webcams and microphones into smart devices to turn “any room into a broadcast studio,” Huang said.

It does this with effects like Audio Noise Removal, Virtual Background Effects — whether for graphics or video — and Webcam Auto Framing, giving you a virtual cameraperson.

NVIDIA Broadcast runs AI algorithms trained by deep learning on NVIDIA’s DGX supercomputer — one of the world’s most potent.

“These AI effects are amazing,” Huang said.

NVIDIA Broadcast will be available for download in September and runs on any RTX GPU, Huang said.

Omniverse Machinima

For those now using video games to create movies and shorts — an art form known as Machinima — Huang introduced NVIDIA Omniverse Machinima, based on the Omniverse 3D workflow collaboration platform.

With Omniverse Machinima, creators can use their webcam to drive an AI-based pose-estimator to animate characters. Drive face animation AI with your voice. Add high-fidelity physics like particles and fluids. Make materials physically accurate.

When done with your composition and mixing, you can even render film-quality cinematics with your RTX GPU, Huang said.

The beta will be available in October. Sign up at nvidia.com/machinima.

Nothing Short of Science Fiction

Wrapping up, Huang noted that it’s been 20 years since the NVIDIA GPU introduced programmable shading. “The GPU revolutionized modern computer graphics,” Huang said.

Now the second-generation NVIDIA RTX — fusing programmable shading, ray tracing and AI — gives us photorealistic graphics and the highest frame rates simultaneously, Huang said.

“I can’t wait to go forward 20 years to see what RTX started,” Huang said.

“In this future, GeForce is your holodeck, your lightspeed starship, your time machine,” Huang said. “In this future, we will look back and realize that it started here.”

Speed Reader: Startup Primer Helps Analysts Make Every Second Count

Expected to read upwards of 200,000 words daily from hundreds, if not thousands, of documents, financial analysts are asked to perform the impossible.

Primer is using AI to apply the equivalent of compression technology to this mountain of data to help make work easier for them as well as analysts across a range of other industries.

The five-year-old company, based in San Francisco, has built a natural language processing and machine learning platform that essentially does all the reading and collating for analysts in a tiny fraction of the time it would normally take them.

Whatever a given analyst might be monitoring, whether it’s a natural disaster, credit default or geo-political event, Primer slashes hours of human research into a few seconds of analysis.

The software combs through massive amounts of content, highlights pertinent information such as quotes and facts, and assembles them into related lists. It distills vast topics into the essentials in seconds.

“We train the models to mimic that human behavior,” said Barry Dauber, vice president of commercial sales at Primer. “It’s really a powerful analyst platform that uses natural language processing and machine learning to surface and summarize information at scale.”

The Power of 1,000 Analysts

Using Primer’s platform running on NVIDIA GPUs is akin to giving an analyst a virtual staff that delivers near-instantaneous results. The software can analyze and report on tens of thousands of documents from financial reports, internal proprietary content, social media, 30,000-40,000 news sources and elsewhere.

“Every time an analyst wants to know something about Syria, we cluster together documents about Syria, in real time,” said Ethan Chan, engineering manager and staff machine learning engineer at Primer. “The goal is to reduce the amount of effort an analyst has to expend to process more information.”

Primer has done just that, to the relief of its customers, which include financial services firms, government agencies and an array of Fortune 500 companies.

As powerful as Primer’s natural language processing algorithms are, up until two years ago they required 20 minutes to deliver results because of the complexity of the document clustering they were asking CPUs to support.

“The clustering was the bottleneck,” said Chan. “Because we have to compare every document with every other document, we’re looking at nearly a trillion flops for a million documents.”

GPUs Slash Analysis Times

Primer’s team added GPUs to the clustering process in 2018 after joining NVIDIA Inception — an accelerator program for AI startups — and quickly slashed those analysis times to mere seconds.

Primer’s GPU work unfolds in the cloud, where it makes equally generous use of AWS, Google Cloud and Microsoft Azure. For prototyping and training of its NLP algorithms such as Named Entity Recognition and Headline Generation (on public, open-source news datasets), Primer uses instances with NVIDIA V100 Tensor Core GPUs.

Model serving and clustering happen on instances with NVIDIA T4 GPUs, which can be dialed up and down based on clustering needs. The company also uses CuPy, a NumPy-like Python library that provides CUDA-accelerated array operations on the GPU.
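
To make the all-pairs comparison concrete, here is an illustrative CuPy sketch that computes pairwise cosine similarities between document embeddings in a single GPU matrix multiply. The embedding size and document count are arbitrary, and this is a simplified stand-in for, not a copy of, Primer's actual clustering pipeline.

```python
import cupy as cp

# Hypothetical document embeddings: one 256-dimensional vector per document.
embeddings = cp.random.rand(10_000, 256).astype(cp.float32)

# Normalize rows, then compute every pairwise cosine similarity in a single
# GPU matrix multiply: the all-pairs comparison described above.
unit = embeddings / cp.linalg.norm(embeddings, axis=1, keepdims=True)
similarity = unit @ unit.T            # shape (10_000, 10_000)

print(float(similarity[0, 1]))
```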

But what Chan believes is Primer’s most innovative use of GPUs is in acceleration of its clustering algorithms.

“Grouping documents together is not something anyone else is doing,” he said, adding that Primer’s success in this area further establishes that “you can use NVIDIA for new use cases and new markets.”

Flexible Delivery Model

With the cloud-based SaaS model, customers can increase or decrease their analysis speed, depending on how much they want to spend on GPUs.

Primer’s offering can also be deployed in a customer’s data center. There, the models can be trained on a customer’s IP and clustering can be performed on premises. This is an important consideration for those working in highly regulated or sensitive markets.

Analysts in finance and national security are currently Primer’s primary users. However, the company could help anyone tasked with combing through mounds of data actually make decisions instead of just preparing to make them.

Rise and Sunshine: NASA Uses Deep Learning to Map Flows on Sun’s Surface, Predict Solar Flares

Looking directly at the sun isn’t recommended — unless you’re doing it with AI, which is what NASA is working on.

The surface of the sun, which is the layer you can see with the eye, is actually bubbly: intense heat creates a boiling reaction, similar to water at high temperature. So when NASA researchers magnify images of the sun with a telescope, they can see tiny blobs, called granules, moving on the surface.

Studying the movement and flows of the granules helps the researchers better understand what’s happening underneath that outer layer of the sun.

The computations for tracking the motion of granules require advanced imaging techniques. Using data science and GPU computing with NVIDIA Quadro RTX-powered HP Z8 workstations, NASA researchers have developed deep learning techniques to more easily track the flows on the sun’s surface.

RTX Flares Up Deep Learning Performance

When studying how storms and hurricanes form, meteorologists analyze the flows of winds in Earth’s atmosphere. For this same reason, it’s important to measure the flows of plasma in the sun’s atmosphere to learn more about the short- and long-term evolution of our nearest star.

This helps NASA understand and anticipate events like solar flares, which can affect power grids, communication systems like GPS or radios, or even put space travel at risk because of the intense radiation and charged particles associated with space weather.

“It’s like predicting earthquakes,” said Michael Kirk, research astrophysicist at NASA. “Since we can’t see very well beneath the surface of the sun, we have to take measurements from the flows on the exterior to infer what is happening subsurface.”

Granules are transported by plasma motions — hot ionized gas under the surface. To capture these motions, NASA developed customized algorithms best tailored to their solar observations, with a deep learning neural network that observes the granules using images from the Solar Dynamics Observatory, and then learns how to reconstruct their motions.

“Neural networks can generate estimates of plasma motions at resolutions beyond what traditional flow tracking methods can achieve,” said Benoit Tremblay from the National Solar Observatory. “Flow estimates are no longer limited to the surface — deep learning can look for a relationship between what we see on the surface and the plasma motions at different altitudes in the solar atmosphere.”

“We’re training neural networks using synthetic images of these granules to learn the flow fields, so it helps us understand precursor environments that surround the active magnetic regions that can become the source of solar flares,” said Raphael Attie, solar astronomer at NASA’s Goddard Space Flight Center.

NVIDIA GPUs were essential in training the neural networks because NASA needed to complete several training sessions with data preprocessed in multiple ways to develop robust deep learning models, and CPU power was not enough for these computations.

When using TensorFlow on a 72 CPU-core compute node, it took an hour to complete only one pass with the training data. Even in a CPU-based cloud environment, it would still take weeks to train all the models that the scientists needed for a single project.

With an NVIDIA Quadro RTX 8000 GPU, the researchers can complete one training in about three minutes — a 20x speedup. This allows them to start testing the trained models after a day instead of having to wait weeks.
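
NASA's actual models and data aren't reproduced here, but a minimal Keras sketch shows the pattern: the same training code runs unchanged on CPU or GPU, and TensorFlow places the work on a Quadro RTX card automatically when one is visible. The toy architecture and synthetic data below are purely illustrative.

```python
import tensorflow as tf

# TensorFlow places work on the GPU automatically when one is visible.
print("GPUs visible:", tf.config.list_physical_devices("GPU"))

# Toy stand-in for flow-field estimation: map small image patches to a
# two-component motion vector. Architecture and data are synthetic.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(32, 32, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2),
])
model.compile(optimizer="adam", loss="mse")

patches = tf.random.normal((256, 32, 32, 1))
flows = tf.random.normal((256, 2))
model.fit(patches, flows, epochs=1, batch_size=32)
```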

“This incredible speedup enables us to try out different ways to train the models and make ‘stress tests,’ like preprocessing images at different resolutions or introducing synthetic errors to better emulate imperfections in the telescopes,” said Attie. “That kind of accelerated workflow completely changed the scope of what we can afford to explore, and it allows us to be much more daring and creative.”

With NVIDIA Quadro RTX GPUs, the NASA researchers can accelerate workflows for their solar physics projects, and they have more time to conduct thorough research with simulations to gain deeper understandings of the sun’s dynamics.

Learn more about NVIDIA and HP data science workstations, and listen to the AI Podcast with NASA.

Pixel Perfect: V7 Labs Automates Image Annotation for Deep Learning Models

Cells under a microscope, grapes on a vine and species in a forest are just a few of the things that AI can identify using the image annotation platform created by startup V7 Labs.

Whether a user wants AI to detect and label images showing equipment in an operating room or livestock on a farm, the London-based company offers V7 Darwin, an AI-powered web platform with a trained model that already knows what almost any object looks like, according to Alberto Rizzoli, co-founder of V7 Labs.

It’s a boon for small businesses and other users that are new to AI or want to reduce the costs of training deep learning models with custom data. Users can load their data onto the platform, which then segments objects and annotates them. It also allows for training and deploying models.

V7 Darwin is trained on several million images and optimized on NVIDIA GPUs. The startup is also exploring the use of NVIDIA Clara Guardian, which includes NVIDIA DeepStream SDK intelligent video analytics framework on edge AI embedded systems. So far, it’s piloted laboratory perception, quality inspection, and livestock monitoring projects, using the NVIDIA Jetson AGX Xavier and Jetson TX2 modules for the edge deployment of trained models.

V7 Labs is a member of NVIDIA Inception, a program that provides AI startups with go-to-market support, expertise and technology assistance.

Pixel-Perfect Object Classification

“For AI to learn to see something, you need to give it examples,” said Rizzoli. “And to have it accurately identify an object based on an image, you need to make sure the training sample captures 100 percent of the object’s pixels.”

Annotating and labeling an object based on such a level of “pixel-perfect” granular detail takes just two-and-a-half seconds for V7 Darwin — up to 50x faster than a human, depending on the complexity of the image, said Rizzoli.

Saving time and costs around image annotation is especially important in the context of healthcare, he said. Healthcare professionals must look at hundreds of thousands of X-ray or CT scans and annotate abnormalities, Rizzoli said, but this can be automated.

For example, during the COVID-19 pandemic, V7 Labs worked with the U.K.’s National Health Service and Italy’s San Matteo Hospital to develop a model that detects the severity of pneumonia in a chest X-ray and predicts whether a patient will need to enter an intensive care unit.

The company also published an open dataset with over 6,500 X-ray images showing pneumonia, 500 cases of which were caused by COVID-19.

V7 Darwin can be used in a laboratory setting, helping to detect protocol errors and automatically log experiments.

Application Across Industries

Companies in a wide variety of industries beyond healthcare can benefit from V7’s technology.

“Our goal is to capture all of computer vision and make it remarkably easy to use,” said Rizzoli. “We believe that if we can identify a cell under a microscope, we can also identify, say, a house from a satellite. And if we can identify a doctor performing an operation or a lab technician performing an experiment, we can also identify a sculptor or a person preparing a cake.”

Global uses of the platform include assessing the damage of natural disasters, observing the growth of human and animal embryos, detecting caries in dental X-rays, creating autonomous machines to evaluate safety protocols in manufacturing, and allowing farming robots to count their harvests.

Stay up to date with the latest healthcare news from NVIDIA, and explore how AI, accelerated computing, and GPU technology contribute to the worldwide battle against the novel coronavirus on our COVID-19 research hub.

More Than a Wheeling: Boston Band of Roboticists Aim to Rock Sidewalks With Personal Bots

With Lime and Bird scooters covering just about every major U.S. city, you’d think all bets were off for walking. Think again.

Piaggio Fast Forward is staking its future on the idea that people will skip e-scooters or ride-hailing once they take a stroll with its gita robot. A Boston-based subsidiary of the iconic Vespa scooter maker, the company says the recent focus on getting fresh air and walking during the COVID-19 pandemic bodes well for its new robotics concept.

The fashionable gita robot — looking like a curvaceous vintage scooter — can carry up to 40 pounds and automatically keeps stride with its owner, so you don’t have to lug groceries, picnic goodies or other items on walks. Another mark in gita’s favor: you can exercise in the fashion of those in Milan and Paris, walking sidewalks to meals and stores. “Gita” means “short trip” in Italian.

The robot may turn some heads on the street. That’s because Piaggio Fast Forward parent Piaggio Group, which also makes Moto Guzzi motorcycles, expects sleek, flashy designs under its brand.

The first idea from Piaggio Fast Forward was to automate something like a scooter to autonomously deliver pizzas. “The investors and leadership came from Italy, and we pitched this idea, and they were just horrified,” quipped CEO and founder Greg Lynn.

If the company gets it right, walking could even become fashionable in the U.S. Early adopters have been picking up gita robots since the November debut. The stylish personal gita robot, enabled by the NVIDIA Jetson TX2 supercomputer on a module, comes in signal red, twilight blue or thunder gray.

Gita as Companion

The robot was designed to follow a person. That means the company didn’t have to create a completely autonomous robot that uses simultaneous localization and mapping, or SLAM, to get around fully on its own, said Lynn. And it doesn’t use GPS.

Instead, a gita user taps a button and the robot’s cameras and sensors immediately capture images that pair it with its leader to follow the person.

Using neural networks and the Jetson’s GPU to perform complex image processing tasks, the gita can avoid collisions with people by understanding how people move  in sidewalk traffic, according to the company. “We have a pretty deep library of what we call ‘pedestrian etiquette,’ which we use to make decisions about how we navigate,” said Lynn.

Pose-estimation networks with 3D point cloud processing allow it to see the gestures of people to anticipate movements, for example. The company recorded thousands of hours of walking data to study human behavior and tune gita’s networks. It used simulation training much the way the auto industry does, using virtual environments. Piaggio Fast Forward also created environments in its labs for training with actual gitas.

“So we know that if a person’s shoulders rotate at a certain degree relative to their pelvis, they are going to make a turn,” Lynn said. “We also know how close to get to people and how close to follow.”
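
As an illustration of that heuristic (not Piaggio Fast Forward's actual code), the sketch below estimates turn intent from the rotation of the shoulder line relative to the pelvis line, using (x, y) keypoints that would come from an upstream pose estimator. The threshold is a made-up value.

```python
import numpy as np

def turn_expected(left_shoulder, right_shoulder, left_hip, right_hip,
                  threshold_deg=20.0):
    """Flag a likely turn when the shoulder segment rotates past a
    threshold relative to the pelvis segment (illustrative only)."""
    shoulder_vec = np.asarray(right_shoulder, float) - np.asarray(left_shoulder, float)
    hip_vec = np.asarray(right_hip, float) - np.asarray(left_hip, float)
    # Signed angle between the two segments, wrapped to [-180, 180] degrees.
    angle = np.degrees(np.arctan2(shoulder_vec[1], shoulder_vec[0])
                       - np.arctan2(hip_vec[1], hip_vec[0]))
    angle = (angle + 180.0) % 360.0 - 180.0
    return abs(angle) > threshold_deg

# Hypothetical keypoints in image coordinates: the shoulders have rotated
# well past the hips, so a turn is anticipated.
print(turn_expected((120, 80), (185, 110), (130, 160), (172, 158)))
```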

‘Impossible’ Without Jetson 

The robot has a stereo depth camera to understand the speed and distance of moving people, and it has three other cameras for seeing pedestrians for help in path planning. The ability to do split-second inference to make sidewalk navigation decisions was important.

“We switched over and started to take advantage of CUDA for all the parallel processing we could do on the Jetson TX2,” said Lynn.

Piaggio Fast Forward used lidar on its early design prototype robots, which were tethered to a bulky desktop computer, in all costing tens of thousands of dollars. It needed to find a compact, energy-efficient and affordable embedded AI processor to sell its robot at a reasonable price.

“We have hundreds of machines out in the world, and nobody is joy-sticking them out of trouble. It would have been impossible to produce a robot for $3,250 if we didn’t rely on the Jetson platform,” he said.

Enterprise Gita Rollouts

Gita robots have been off to a good start in U.S. sales with early technology adopters, according to the company, which declined to disclose unit sales. They have also begun to roll out in enterprise customer pilot tests, said Lynn.   

Cincinnati-Northern Kentucky International Airport is running gita pilots for delivery of merchandise purchased in airports as well as food and beverage orders from mobile devices at the gates.

Piaggio Fast Forward is also working with some retailers who are experimenting with the gita robots for handling curbside deliveries, which have grown in popularity for avoiding the insides of stores.

The company is also in discussions with residential communities exploring usage of gita robots for the replacement of golf carts to encourage walking in new developments.

Piaggio Fast Forward plans to launch several variations in the gita line of robots by next year.

“Rather than do autonomous vehicles to move people around, we started to think about a way to unlock the walkability of people’s neighborhoods and of businesses,” said Lynn.

 

Piaggio Fast Forward is a member of NVIDIA Inception, a virtual accelerator program that helps startups in AI and data science get to market faster.
