Train Spotting: Startup Gets on Track With AI and NVIDIA Jetson to Ensure Safety, Cost Savings for Railways

Preventable train accidents like the 1985 disaster outside Tel Aviv in which a train collided with a school bus, killing 19 students and several adults, motivated Shahar Hania and Elen Katz to help save lives with technology.

They founded Rail Vision, an Israeli startup that creates obstacle-detection and classification systems for the global railway industry.

The systems use advanced electro-optic sensors to alert train drivers and railway control centers when a train approaches potential obstacles — like humans, vehicles, animals or other objects — in real time, and in all weather and lighting conditions.

Rail Vision is a member of NVIDIA Inception — a program designed to nurture cutting-edge startups — and an NVIDIA Metropolis partner. The company uses the NVIDIA Jetson AGX Xavier edge AI platform, which provides GPU-accelerated computing in a compact and energy-efficient module, and the NVIDIA TensorRT software development kit for high-performance deep learning inference.

Pulling the Brakes in Real Time

A train’s braking distance — or the distance a train travels between when its brakes are pulled and when it comes to a complete stop — is usually so long that by the time a driver spots a railway obstacle, it could be too late to do anything about it.

For example, the braking distance for a train traveling 100 miles per hour is 800 meters, or about a half-mile, according to Hania. Rail Vision systems can detect objects on and along tracks from up to two kilometers, or 1.25 miles, away.

By sending alerts, both visual and acoustic, of potential obstacles in real time, Rail Vision systems give drivers over 20 seconds to respond and make decisions on braking.
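
As a quick sanity check on those figures, the extra detection range translates into reaction time once the braking distance is subtracted and the remainder is divided by the train's speed. The short Python sketch below uses only the numbers quoted above, with a standard miles-per-hour-to-meters-per-second conversion; the result lines up with the "over 20 seconds" claim.

```python
# Back-of-the-envelope reaction-time check using the figures quoted above.
MPH_TO_MS = 0.44704           # 1 mile per hour in meters per second

speed_ms = 100 * MPH_TO_MS    # ~44.7 m/s for a train traveling 100 mph
detection_range_m = 2_000     # Rail Vision's stated detection range
braking_distance_m = 800      # stated braking distance at 100 mph

# Distance available for a decision before braking must already be underway.
decision_margin_m = detection_range_m - braking_distance_m

# Time the train takes to cover that margin at constant speed.
reaction_window_s = decision_margin_m / speed_ms
print(f"Reaction window: {reaction_window_s:.0f} seconds")  # ~27 s, i.e. over 20 seconds
```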

The systems can also be integrated with a train’s infrastructure to automatically apply brakes when an obstacle is detected, even without a driver’s cue.

“Tons of deep learning inference possibilities are made possible with NVIDIA GPU technology,” Hania said. “The main advantage of using the NVIDIA Jetson platform is that there are lots of goodies inside — compressors, modules for optical flow — that all speed up the embedding process and make our systems more accurate.”

Boosting Maintenance, in Addition to Safety

In addition to preventing accidents, Rail Vision systems help save operational time and costs spent on railway maintenance — which can be as high as $50 billion annually, according to Hania.

If a railroad accident occurs, four to eight hours are typically spent handling the situation — which prevents other trains from using the track, said Hania.

Rail Vision systems use AI to monitor the tracks and prevent such workflow slowdowns, or quickly alert operators when they do occur — giving them time to find alternate routes or plans of action.

The systems are scalable and deployable for different use cases — with some focused solely on these maintenance aspects of railway operations.

Watch a Rail Vision system at work.


How Smart Hospital Technology Can Help Cut Down on Medical Errors

Despite the feats of modern medicine, as many as 250,000 Americans die from medical errors each year — more than six times the number killed in car accidents.

Smart hospital AI can help avoid some of these fatalities in healthcare, just as computer vision-based driver assistance systems can improve road safety, according to AI leader Fei-Fei Li.

Whether through surgical instrument omission, a wrong drug prescription or a patient safety issue when clinicians aren’t present, “there’s just all kinds of errors that could be introduced, unintended, despite protocols that have been put together to avoid them,” said Li, computer science professor and co-director of the Stanford Institute for Human-Centered Artificial Intelligence, in a talk at the recent NVIDIA GTC. “Humans are still humans.”

By endowing healthcare spaces with smart sensors and machine learning algorithms, Li said, clinicians can help cut down medical errors and provide better patient care.

“We have to make sense of what we sense” with sensor data, said Li. “This brings in machine learning and deep learning algorithms that can turn sensed data into medical insights that are really important to keep our patients safe.”

To hear from other experts in deep learning and medicine, register free for the next GTC, running online March 21-24. GTC features talks from dozens of healthcare researchers and innovators harnessing AI for smart hospitals, drug discovery, genomics and more.

Sensor Solutions Bring Ambient Intelligence to Clinicians

Li’s interest in AI for healthcare delivery was sparked a decade ago when she was caring for a sick parent.

“The more I spent my time in ICUs and hospital rooms and even at home caring for my family, the more I saw the analogy between self-driving technology and healthcare delivery,” she said.

Her vision of sensor-driven “ambient intelligence,” outlined in a Nature paper, covers both the hospital and the home. It offers insights in operating rooms as well as the daily living spaces of individuals with chronic disease.

For example, ICU patients need a certain amount of movement to help their recovery. To ensure that patients are getting the right amount of mobility, researchers are developing smart sensor systems to automatically tag patient movements and understand their mobility levels while in critical care.

Another project used depth sensors and convolutional neural networks to assess whether clinicians were properly using hand sanitizer when entering and exiting patient rooms.

Outside of the hospital, as the global population continues to age, wearable sensors can help ensure seniors are aging healthily by monitoring mobility, sleep and medicine compliance.

The next challenge, Li said, is advancing computer vision to classify more complex human movement.

“We’re not content with these coarse activities like walking and sleeping,” she said. “What’s more important clinically are fine-grained activities.”

Protecting Patient, Caregiver Privacy 

When designing smart hospital technology, Li said, it’s important that developers prioritize privacy and security of patients, clinicians and caretakers.

“From a computer vision point of view, blurring and masking has become more and more important when it comes to human signals,” she said. “These are really important ways to mitigate private information and personal identity from being inadvertently leaked.”
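
Li is describing a general computer vision practice rather than any specific product. As a minimal, illustrative sketch of the idea (detect faces, then blur them before a frame is stored or transmitted), the following uses OpenCV's bundled Haar cascade; the file names and parameters are hypothetical choices, not anything from the talk.

```python
# Illustrative face-blurring sketch (not from the talk): detect faces with
# OpenCV's bundled Haar cascade and blur them before the frame leaves the device.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def anonymize_frame(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi = frame[y:y + h, x:x + w]
        # A heavy Gaussian blur hides identity while keeping the scene context.
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame

# Example usage on a single (hypothetical) image file.
img = cv2.imread("ward_camera_frame.jpg")
if img is not None:
    cv2.imwrite("ward_camera_frame_blurred.jpg", anonymize_frame(img))
```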

In the field of data privacy, Li said, federated learning is another promising solution to protect confidential information.

Throughout the process of developing AI for healthcare, she said, developers must take a multi-stakeholder approach, involving patients, clinicians, bioethicists and government agencies in a collaborative environment.

“At the end of the day, healthcare is about humans caring for humans,” said Li. “This technology should not replace our caretakers, replace our families or replace our nurses and doctors. It’s here to augment and enhance humanity and give more dignity back to our patients.”

Watch the full talk on NVIDIA On-Demand, and sign up for GTC to learn about the latest in AI and healthcare.


Nearly 80 Percent of Financial Firms Use AI to Improve Services, Reduce Fraud

From the largest firms trading on Wall Street to banks providing customers with fraud protection to fintechs recommending best-fit products to consumers, AI is driving innovation across the financial services industry.

New research from NVIDIA found that 78 percent of financial services professionals state that their company uses accelerated computing to deliver AI-enabled applications through machine learning, deep learning or high performance computing.

The survey results, detailed in NVIDIA’s “State of AI in Financial Services” report, are based on responses from over 500 C-suite executives, developers, data scientists, engineers and IT teams working in financial services.

AI Prevents Fraud, Boosts Investments

With more than 70 billion real-time payment transactions processed globally in 2020, financial institutions need robust systems to prevent fraud and reduce costs. Accordingly, fraud detection involving payments and transactions was the top AI use case across all respondents at 31 percent, followed by conversational AI at 28 percent and algorithmic trading at 27 percent.

There was a dramatic increase in the percentage of financial institutions investing in AI use cases year-over-year. AI for underwriting increased fourfold, from 3 percent penetration in 2021 to 12 percent this year. Conversational AI jumped from 8 to 28 percent year-over-year, a 3.5x rise.

Meanwhile, AI-enabled applications for fraud detection, know your customer (KYC) and anti-money laundering (AML) all experienced growth of at least 300 percent in the latest survey. Nine of 13 use cases are now utilized by over 15 percent of financial services firms, whereas none of the use cases exceeded that penetration mark in last year’s report.
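
The growth multiples quoted above follow directly from the survey percentages; a quick arithmetic check:

```python
# Quick check of the year-over-year growth multiples cited in the survey.
adoption = {
    # use case: (2021 percent, 2022 percent)
    "underwriting": (3, 12),
    "conversational AI": (8, 28),
}

for use_case, (last_year, this_year) in adoption.items():
    print(f"{use_case}: {this_year / last_year:.1f}x")
# underwriting: 4.0x      -> the "fourfold" increase
# conversational AI: 3.5x -> the "3.5x rise"
```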

Future investment plans remain steady for top AI use cases, with enterprise investment priorities for the next six to 12 months marked in green.

Top Current AI Use Cases in Financial Services (Ranked by Industry Sector)
Green highlighted text signifies top AI use cases for investment in next six to 12 months.

Overcoming AI Challenges

Financial services professionals highlighted the main benefits of AI in yielding more accurate models, creating a competitive advantage and improving customer experience. Overall, 47 percent said that AI enables more accurate models for applications such as fraud detection, risk calculation and product recommendations.

However, there are challenges in achieving a company’s AI goals. Only 16 percent of survey respondents agreed that their company is spending the right amount of money on AI, and 37 percent believed “lack of budget” is the primary challenge in achieving their AI goals. Additional obstacles included too few data scientists, lack of data, and explainability, with a third of respondents listing each option.

Financial institutions such as Munich Re, Scotiabank and Wells Fargo have developed explainable AI models to explain lending decisions and construct diversified portfolios.

Biggest Challenges in Achieving Your Company’s AI Goals (by Role)

Cybersecurity, data sovereignty, data gravity and the option to deploy on-prem, in the cloud or in a hybrid cloud are areas of focus for financial services companies as they consider where to host their AI infrastructure. These preferences are extrapolated from responses about where companies are running most of their AI projects, with over three-quarters of the market operating on either on-prem or hybrid instances.

Where Financial Services Companies Run Their AI Workloads

Executives Believe AI Is Key to Business Success

Over half of C-suite respondents agreed that AI is important to their company’s future success. The top total responses to the question “How does your company plan to invest in AI technologies in the future?” were:

  1. Hiring more AI experts (43 percent)
  2. Identifying additional AI use cases (36 percent)
  3. Engaging third-party partners to accelerate AI adoption (36 percent)
  4. Spending more on infrastructure (36 percent)
  5. Providing AI training to staff (32 percent)

However, only 23 percent of those surveyed believed their company has the capability and knowledge to move an AI project from research to production. This indicates the need for an end-to-end platform to develop, deploy and manage AI in enterprise applications.

Read the full “State of AI in Financial Services 2022” report to learn more.

Explore NVIDIA’s AI solutions and enterprise-level AI platforms driving the future of financial services.


Let Me Upgrade You: GeForce NOW Adds Resolution Upscaling and More This GFN Thursday

GeForce NOW is taking cloud gaming to new heights.

This GFN Thursday delivers an upgraded streaming experience as part of an update that is now available to all members. It includes new resolution upscaling options to make members’ gaming experiences sharper, plus the ability to customize streaming settings in session.

The GeForce NOW app is fully releasing on select LG TVs, following a successful beta. To celebrate the launch, for a limited time, those who purchase a qualifying LG TV will also receive a six-month Priority membership to kickstart their cloud gaming experience.

Additionally, this week brings five games to the GeForce NOW library.

Upscale Your Gaming Experience

The newest GeForce NOW update delivers new resolution upscaling options — including an AI-powered option for members with select NVIDIA GPUs.

Resolution upscaling update on GeForce NOW
New year, new options for January’s GeForce NOW update.

This feature, now available to all members with the 2.0.37 update, gives gamers with network bandwidth limitations or higher-resolution displays sharper graphics that match the native resolution of their monitor or laptop.

Resolution upscaling works by applying sharpening effects that reduce visible blurriness while streaming. It can be applied to any game and enabled via the GeForce NOW settings in native PC and Mac apps.

Three upscaling modes are now available. Standard is enabled by default and has minimal impact on system performance. Enhanced provides a higher quality upscale, but may cause some latency depending on your system specifications. AI Enhanced, available to members playing on PC with select NVIDIA GPUs and SHIELD TVs, leverages a trained neural network model along with image sharpening for a more natural look. These new options can be adjusted mid-session.
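
GeForce NOW's upscaling runs inside the client, and its exact implementation isn't published. As a rough illustration of the general technique the Standard and Enhanced modes describe (scale the streamed frame to the display resolution, then sharpen to counteract the blur the resize introduces), an unsharp-mask sketch in OpenCV might look like the following. Resolutions, file names and the sharpening amount are assumptions for the example, and the AI Enhanced mode uses a trained neural network instead.

```python
# Illustrative sharpening-based upscaling (not GeForce NOW's actual pipeline):
# resize a streamed frame to the display resolution, then apply an unsharp mask.
import cv2

def upscale_and_sharpen(frame, target_size, amount=0.6):
    # Upscale to the native resolution of the display (width, height).
    up = cv2.resize(frame, target_size, interpolation=cv2.INTER_LINEAR)
    # Unsharp mask: blend the image against a blurred copy to boost edges.
    blurred = cv2.GaussianBlur(up, (0, 0), sigmaX=2.0)
    return cv2.addWeighted(up, 1 + amount, blurred, -amount, 0)

# Example: prepare a 720p frame for a 1080p display.
frame = cv2.imread("streamed_frame_720p.png")   # hypothetical input
if frame is not None:
    cv2.imwrite("frame_1080p_sharpened.png", upscale_and_sharpen(frame, (1920, 1080)))
```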

Learn more about the new resolution upscaling options.

Stream Your Way With Custom Settings

The upgrade brings some additional benefits to members.

Custom streaming quality settings on the PC and Mac apps have been a popular way for members to take control of their streams — including bit rate, VSync and now the new upscaling modes. The update now enables members to adjust some of these streaming quality settings in session using the GeForce NOW in-game overlay. Bring up the overlay by pressing Ctrl+G, then go to Settings > Gameplay to access these settings while streaming.

The update also comes with an improved web-streaming experience on play.geforcenow.com by automatically assigning the ideal streaming resolution for devices that are unable to decode at high streaming bitrates. Finally, there’s also a fix for launching directly into games from desktop shortcuts.

LG Has Got Game With the GeForce NOW App

LG Electronics, the first TV manufacturer to release the GeForce NOW app in beta, is now bringing cloud gaming to several LG TVs at full force.

Owners of LG 2021 4K TV models including OLED, QNED, NanoCell and UHD TVs can now download the fully launched GeForce NOW app in the LG Content Store. The experience requires a gamepad and gives gamers instant access to nearly 35 free-to-play games, like Apex Legends and Destiny 2, as well as more than 800 PC titles from popular digital stores like Steam, Epic Games Store, Ubisoft Connect and Origin.

The GeForce NOW app on LG OLED TVs delivers responsive gameplay and gorgeous, high-quality graphics at 1080p and 60 frames per second. On these select LG TVs, with nothing more than a gamepad, you can enjoy stunning ray-traced graphics and AI technologies with NVIDIA RTX ON. Learn more about support for the app for LG TVs on the system requirements page under LG TV.

GeForce NOW app for LG
Get your game on directly through an LG TV with a six-month GeForce NOW Priority membership.

In celebration of the app’s full launch and the expansion of devices supported by GeForce NOW, qualifying LG purchases from Feb. 1 to March 27 in the United States come bundled with a sweet six-month Priority membership to the service.

Priority members experience legendary GeForce PC gaming across all of their devices, as well as benefits including priority access to gaming servers, extended session lengths and RTX ON for cinematic-quality in-game graphics.

To collect a free six-month Priority membership, purchase a qualifying LG TV and submit a claim. Upon claim approval, you’ll receive a GeForce NOW promo code via email. Create an NVIDIA account for free or sign in to your existing GeForce NOW account to redeem the gifted membership.

This offer is available to those who purchase applicable 2021 model LG 4K TVs in select markets during the promotional period. Current GeForce NOW promotional members are not eligible for this offer. Availability and deadline to claim free membership varies by market. Consult LG’s official country website, starting Feb. 1, for full details. Terms and conditions apply.

It’s Playtime

Mortal Online 2 on GeForce NOW
Explore a massive open world and choose your own path in Mortal Online 2.

Start your weekend with the following five titles coming to the cloud this week:

While you kick off your weekend with gaming fun, we’ve got a question for you this week:


Hatch Me If You Can: Startup’s Sorting Machines Use AI to Protect Healthy Fish Eggs

Fisheries collect millions upon millions of fish eggs, protecting them from predators to increase fish yield and support the propagation of endangered species — but an issue with gathering so many eggs at once is that those infected with parasites can put healthy ones at risk.

Jensorter, an Oregon-based startup, has created AI-powered fish egg sorters that can rapidly identify healthy versus unhealthy eggs. The machines, built on the NVIDIA Jetson Nano module, can also detect egg characteristics such as size and fertility status.

The devices then automatically sort the eggs based on these characteristics, allowing Jensorter’s customers in Alaska, the Pacific Northwest and Russia to quickly separate viable eggs from unhealthy ones — and protect them accordingly.

Jensorter is a member of NVIDIA Inception, a program that nurtures cutting-edge startups revolutionizing industries with advancements in AI, data science, high performance computing and more.

Picking Out the Good Eggs

According to Curt Edmondson, patent counsel and CTO of Jensorter, many fisheries aim to quickly dispose of unhealthy eggs to lower the risk of infecting healthy ones.

Using AI, Jensorter machines look at characteristics like color to discern an egg’s health status and determine whether it’s fertilized — at a speed of about 30 milliseconds per egg.

“Our fish egg sorters are achieving a much higher accuracy with the addition of AI powered by NVIDIA Jetson, which is allowing us to create advanced capabilities,” Edmondson said.

The startup offers several machines, each tailored to varying volumes of eggs to be sorted. The Model JH device, optimal for volumes of 3 million to 10 million eggs, can sort nearly 200,000 eggs per hour, eliminating the slow and laborious process of hand-picking.
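
For a sense of scale, the per-egg inference time and the Model JH's hourly rate can be related with a quick calculation. This uses only the figures quoted above and ignores mechanical handling time, so it is a rough bound rather than a description of the machine's internals.

```python
# Rough throughput check based only on the figures quoted above.
classification_time_s = 0.030     # ~30 ms of AI inference per egg
single_stream_per_hour = 3600 / classification_time_s
print(f"Single-stream inference ceiling: {single_stream_per_hour:,.0f} eggs/hour")
# -> 120,000 eggs/hour from inference alone; the Model JH's ~200,000 eggs/hour
#    suggests more than one egg is processed in flight at a time.
```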

“Using AI to capture and process images of eggs in real time could have great value over the long term,” Edmondson said. “If hatcheries come together and centralize their images in a database, we could identify patterns of egg characteristics that lead to healthy eggs.”

This could help propagate salmon and trout, species that play important roles in their ecosystems and are common food sources for humans, and which are on the decline in many areas, he added.

The Oregon Hatchery Research Center recently used Jensorter devices to conduct an alpha test examining whether smaller eggs lead to healthier fish. In the spring, the center will use the machines to proceed with beta testing in hatcheries, before publishing study results.

Jensorter also plans to create next-generation sorters that are faster still and can detect, count and separate eggs based on their sex, number of zygotes and other metrics that would be useful to fisheries.

Watch a tutorial on how Jensorter equipment works and learn more about NVIDIA Inception.


UK Biobank Advances Genomics Research with NVIDIA Clara Parabricks

UK Biobank is broadening scientists’ access to high-quality genomic data and analysis by making its massive dataset available in the cloud alongside NVIDIA GPU-accelerated analysis tools.

Used by more than 25,000 registered researchers around the world, UK Biobank is a large-scale biomedical database and research resource with deidentified genetic datasets, along with medical imaging and health record data, from more than 500,000 participants across the U.K.

Regeneron Genetics Center, the high-throughput sequencing center of biotech leader Regeneron, recently teamed up with UK Biobank to sequence and analyze the exomes — all protein-coding portions of the genome — of all the biobank participants.

The Regeneron team used NVIDIA Clara Parabricks, a software suite for secondary genomic analysis of next-generation sequencing data, during the exome sequencing process.

UK Biobank has released 450,000 of these exomes for access by approved researchers, and is now providing scientists six months of free access to Clara Parabricks through its cloud-based Research Analysis Platform. The Research Analysis Platform was developed by bioinformatics platform DNAnexus and lets scientists use Clara Parabricks running on NVIDIA GPUs in the AWS cloud.

“As demonstrated by Regeneron, GPU acceleration with Clara Parabricks achieves the throughputs, speed and reproducibility needed when processing genomic datasets at scale,” said Dr. Mark Effingham, deputy CEO of UK Biobank. “There are a number of research groups in the U.K. who were pushing for these accelerated tools to be available in our platform for use with our extensive dataset.”

Regeneron Exome Research Accelerated by Clara Parabricks

Regeneron’s researchers used the DeepVariant Germline Pipeline from NVIDIA Clara Parabricks to run their analysis with a model specific to the genetics center’s workflow.

Its researchers identified 12 million coding variants and hundreds of genes associated with health-related traits — certain genes were associated with increased risk for liver disease and eye disease, and others were linked to lower risk of diabetes and asthma.

The unique set of tools the researchers used for high-quality variant detection is available to UK Biobank registered users through the Research Analysis Platform. This capability will allow scientists to harmonize their own exome data with sequenced exome data from UK Biobank by running the same bioinformatics pipeline used to generate the initial reference dataset.
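
The article doesn't show the exact commands used on the Research Analysis Platform, but a hedged sketch of kicking off a Parabricks germline run from Python looks roughly like the following. The subcommand and flag names are assumptions based on typical Parabricks usage and may differ between versions, so check the Clara Parabricks documentation for your release before relying on them.

```python
# Hedged sketch of invoking a Clara Parabricks germline pipeline from Python.
# Subcommand and flag names are assumptions; verify against your Parabricks version.
import subprocess

cmd = [
    "pbrun", "deepvariant_germline",                         # assumed pipeline name
    "--ref", "GRCh38.fa",                                    # reference genome
    "--in-fq", "sample_R1.fastq.gz", "sample_R2.fastq.gz",   # paired-end exome reads
    "--out-bam", "sample.bam",                               # aligned, sorted reads
    "--out-variants", "sample.vcf",                          # called variants
]
subprocess.run(cmd, check=True)
```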

Cloud-Based Platform Improves Equity of Access

Researchers deciphering the genetic codes of humans — and of the viruses and bacteria that infect humans — can often be limited by the computational resources available to them.

UK Biobank is democratizing access by making its dataset open to scientists around the world, with a focus on further extending use by early-career researchers and those in low- and middle-income countries. Instead of researchers needing to download this huge dataset to use on their own compute resources, they can instead tap into UK Biobank’s cloud platform through a web browser.

“We were being contacted by researchers and clinicians who wanted to access UK Biobank data, but were struggling with access to the basic compute needed to work with even relatively small-scale data,” said Effingham. “The cloud-based platform provides access to the world-class technology needed for large-scale exome sequencing and whole genome sequencing analysis.”

Researchers using the platform pay only for the computational cost of their analyses and for storage of new data they generate from the biobank’s petabyte-scale dataset, Effingham said.

Using Clara Parabricks on DNAnexus helps reduce both the time and cost of this genomic analysis: a whole-exome analysis that would take nearly an hour of computation on a 32-vCPU machine completes in less than five minutes, while costs drop by approximately 40 percent.

Exome Sequencing Provides Insights for Precision Medicine

For researchers studying links between genetics and disease, exome sequencing is a critical tool — and the UK Biobank dataset includes nearly half a million participant exomes to work with.

The exome is approximately 1.5 percent of the human genome, and consists of all the known genes and their regulatory elements. By studying genetic variation in exomes across a large, diverse population, scientists can better understand the population’s structure, helping researchers address evolutionary questions and describe how the genome works.

With a dataset as large as UK Biobank’s, it is also possible to identify the specific genetic variants associated with inherited diseases, including cardiovascular disease, neurodegenerative conditions and some kinds of cancer.

Exome sequencing can even shed light on potential genetic drivers that might increase or decrease an individual’s risk of severe disease from COVID-19 infection, Effingham said. As the pandemic continues, UK Biobank is adding COVID case data, vaccination status, imaging data and patient outcomes for thousands of participants to its database.

Get started with NVIDIA Clara Parabricks on the DNAnexus-developed UK Biobank Research Analysis Platform. Learn more about the exome sequencing project by registering for this webinar, which takes place Feb. 17 at 8am Pacific.

Subscribe to NVIDIA healthcare news here.

Main image shows the freezer facility at UK Biobank where participant samples are stored. Image courtesy of UK Biobank. 


Animator Lets 3D Characters Get Their Groove on With NVIDIA Omniverse and Reallusion

Editor’s note: This post is a part of our Meet the Omnivore series, which features individual creators and developers who use NVIDIA Omniverse to boost their artistic or engineering processes.


Benjamin Sokomba Dazhi, aka Benny Dee, has learned the ins and outs of the entertainment industry from many angles — first as a rapper, then as a music video director and now as a full-time animator.

After eight years of self-teaching, Dazhi has mastered the art of animation — landing roles as head animator for the film The Legend of Oronpoto, and as creator and director of the Cartoon Network Africa Dance Challenge, a series of dance-along animations that teaches children African-inspired choreography.

Based in north-central Nigeria, Dazhi is building a team for his indie animation studio, JUST ART, which creates animation films focused on action, sci-fi, horror and humor.

Dazhi uses NVIDIA Omniverse — a physically accurate 3D design collaboration platform available with RTX-powered GPUs and part of the NVIDIA Studio suite of tools for creators — with Reallusion’s iClone and Character Creator to supercharge his artistic workflow.

He uses Omniverse Connectors for Reallusion apps for character and prop creation and animation, set dressing and cinematics.

Music, Movies and Masterful Rendering

From animated music videos to clips for action films, Dazhi has a multitude of projects — and accompanying deadlines.

“The main challenges I faced when trying to meet deadlines were long render times and difficulties with software compatibility, but using an Omniverse Connector for Reallusion’s iClone app has been game-changing for my workflow,” he said.

Using Omniverse, Dazhi accomplishes lighting and materials setup, rendering, simulation and post-production processes.

With these tools, it took Dazhi just four minutes to render this clip of a flying car — a task, he said, that would have otherwise taken hours.

“The rendering speed and photorealistic output quality of Omniverse is a breakthrough — and Omniverse apps like Create and Machinima are very user-friendly,” he said.

Such 3D graphics tools are especially important for the development of indie artists, Dazhi added.

“In Nigeria, there are very few animation studios, but we are beginning to grow in number thanks to easy-to-use tools like Reallusion’s iClone, which is the main animation software I use,” he said.

Dazhi plans to soon expand his studio, working with other indie artists via Omniverse’s real-time collaboration feature. Through his films, he hopes to show viewers “that it’s more than possible to make high-end content as an indie artist or small company.”

See Dazhi’s work in the NVIDIA Omniverse Gallery, and hear more about his creative workflow live during a Twitch stream on Jan. 26 at 11 a.m. Pacific.

Creators can download NVIDIA Omniverse for free and get started with step-by-step tutorials on the Omniverse YouTube channel. For additional resources and inspiration, follow Omniverse on Instagram, Twitter and Medium. To chat with the community, check out the Omniverse forums and join our Discord Server.


Vulkan Fan? Six Reasons to Run It on NVIDIA

Many different platforms, same great performance. That’s why Vulkan is a very big deal.

With the release Tuesday of Vulkan 1.3, NVIDIA continues its unparalleled record of day one driver support for this cross-platform GPU application programming interface for 3D graphics and computing.

Vulkan has been created by experts from across the industry working together at the Khronos Group, an open standards consortium. From the start, NVIDIA has worked to advance this effort. NVIDIA’s Neil Trevett has been Khronos president since its earliest days.

“NVIDIA has consistently been at the forefront of computer graphics with new, enhanced tools and technologies for developers to create rich game experiences,” said Jon Peddie, president of Jon Peddie Research.

“Their guidance and support for Vulkan 1.3 development, and release of a new compatible driver on day one across NVIDIA GPUs contributes to the successful cross-platform functionality and performance for games and apps this new API will bring,” he said.

With a simpler, thinner driver and efficient CPU multi-threading capabilities, Vulkan has less latency and overhead than alternatives, such as OpenGL or older versions of Direct3D.

If you use Vulkan, NVIDIA GPUs are a no-brainer. Here’s why:

  1. NVIDIA consistently provides industry leadership to evolve new Vulkan functionality and is often the first to make leading-edge computer graphics techniques available to developers. This ensures cutting-edge titles are supported on Vulkan and, by extension, made available to more gamers.
  2. NVIDIA designs hardware to provide the fastest Vulkan performance for your games and applications. For example, NVIDIA GPUs perform over 30 percent faster than the nearest competition on games such as Doom Eternal with advanced rendering techniques such as ray tracing.
  3. NVIDIA provides the broadest range of Vulkan functionality to ensure you can run the games and apps that you want and need. NVIDIA’s production drivers support advanced features such as ray-tracing and DLSS AI rendering across multiple platforms, including Windows and popular Linux distributions like Ubuntu, Kylin and RHEL.
  4. NVIDIA works hard to be the platform of choice for Vulkan development with tools that are often the first to support the latest Vulkan functionality, encouraging apps and games to be optimized first for NVIDIA. NVIDIA Nsight, our suite of development tools, has integrated support for Vulkan, including debugging and optimizing of applications using full ray-tracing functionality. NVIDIA also provides extensive Vulkan code samples, tutorials and best practice guidance so developers can get the very best performance from their code.
  5. NVIDIA makes Vulkan available across a wider range of platforms and hardware than anyone else for easier cross-platform portability. NVIDIA ships Vulkan on PCs, embedded platforms, automotive and the data center. And gamers enjoy ongoing support of the latest Vulkan API changes with older GPUs.
  6. NVIDIA aims to bulletproof your games with highly reliable game-ready drivers. NVIDIA treats Vulkan as a first-class citizen API with focused development and support. In fact, developers can download our zero-day Vulkan 1.3 drivers right now at https://developer.nvidia.com/vulkan-driver.

Look for more details about our commitment and leadership in Vulkan on NVIDIA’s Vulkan web page. And if you’re not already a member of NVIDIA’s Developer Program, sign up. Developers can download new tools and drivers from NVIDIA for Vulkan 1.3 today. 


Meta Works with NVIDIA to Build Massive AI Research Supercomputer

Meta Platforms gave a big thumbs up to NVIDIA, choosing our technologies for what it believes will be its most powerful research system to date.

The AI Research SuperCluster (RSC), announced today, is already training new models to advance AI.

Once fully deployed, Meta’s RSC is expected to be the largest customer installation of NVIDIA DGX A100 systems.

“We hope RSC will help us build entirely new AI systems that can, for example, power real-time voice translations to large groups of people, each speaking a different language, so they could seamlessly collaborate on a research project or play an AR game together,” the company said in a blog.

Training AI’s Largest Models

When RSC is fully built out, later this year, Meta aims to use it to train AI models with more than a trillion parameters. That could advance fields such as natural-language processing for jobs like identifying harmful content in real time.

In addition to performance at scale, Meta cited extreme reliability, security, privacy and the flexibility to handle “a wide range of AI models” as its key criteria for RSC.

Meta RSC system
Meta’s AI Research SuperCluster features hundreds of NVIDIA DGX systems linked on an NVIDIA Quantum InfiniBand network to accelerate the work of its AI research teams.

Under the Hood

The new AI supercomputer currently uses 760 NVIDIA DGX A100 systems as its compute nodes. They pack a total of 6,080 NVIDIA A100 GPUs linked on an NVIDIA Quantum 200Gb/s InfiniBand network to deliver 1,895 petaflops of TF32 performance.
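
Those headline numbers hang together: 760 systems at eight A100 GPUs per DGX A100 gives 6,080 GPUs, and multiplying by the commonly published ~312 teraflops of TF32 throughput per A100 (with sparsity) lands on roughly the quoted aggregate. The per-GPU figure below is an assumption taken from NVIDIA's A100 specifications, not from the announcement.

```python
# Sanity check of the RSC figures; the per-GPU TF32 number (with sparsity) is an
# assumption from published NVIDIA A100 specs, not from the article itself.
DGX_SYSTEMS = 760
GPUS_PER_DGX = 8
TF32_TFLOPS_PER_A100 = 312          # TF32 peak with sparsity

gpus = DGX_SYSTEMS * GPUS_PER_DGX                       # 6,080 GPUs
aggregate_pflops = gpus * TF32_TFLOPS_PER_A100 / 1_000  # teraflops -> petaflops
print(f"{gpus:,} GPUs, ~{aggregate_pflops:,.0f} petaflops of TF32")  # ~1,897 PF vs. 1,895 quoted
```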

Despite challenges from COVID-19, RSC took just 18 months to go from an idea on paper to a working AI supercomputer (shown in the video below) thanks in part to the NVIDIA DGX A100 technology at the foundation of Meta RSC.



20x Performance Gains

It’s the second time Meta has picked NVIDIA technologies as the base for its research infrastructure. In 2017, Meta built the first generation of this infrastructure for AI research with 22,000 NVIDIA V100 Tensor Core GPUs that handle 35,000 AI training jobs a day.

Meta’s early benchmarks showed RSC can train large NLP models 3x faster and run computer vision jobs 20x faster than the prior system.

In a second phase later this year, RSC will expand to 16,000 GPUs that Meta believes will deliver a whopping 5 exaflops of mixed precision AI performance. And Meta aims to expand RSC’s storage system to deliver up to an exabyte of data at 16 terabytes per second.

A Scalable Architecture

NVIDIA AI technologies are available to enterprises of any size.

NVIDIA DGX, which includes a full stack of NVIDIA AI software, scales easily from a single system to a DGX SuperPOD running on-premises or at a colocation provider. Customers can also rent DGX systems through NVIDIA DGX Foundry.


How the Intelligent Supply Chain Broke and AI Is Fixing It

Let’s face it, the global supply chain may not be the most scintillating subject matter. Yet in homes and businesses around the world, it’s quickly become the topic du jour: empty shelves; record price increases; clogged ports and sick truckers leading to disruptions near and far.

The business of organizing resources to supply a product or service to its final user feels like it’s never been more challenged by so many variables. Shortages of raw materials, everything from resin and aluminum to paint and semiconductors, are nearing historic levels. Products that do get manufactured sit on cargo ships or in warehouses due to shortages of the containers, workers and truck drivers that help deliver them to their final destinations. And consumer pocketbooks and paychecks are getting squeezed by rising prices.

The $9 trillion logistics industry is responding by investing in automation and using AI and big data to gain more insights throughout the supply chain. Big money is being poured into supply-chain technology startups, which raised $24.3 billion in venture funding in the first three quarters of 2021, 58 percent more than the full-year total for 2020, according to analytics firm PitchBook Data Inc.

Investing in AI

Behind these investments, businesses see technology and accelerated computing as key to finding firmer ground. At Manifest 2022, a logistics and supply chain conference taking place in Las Vegas, the industry is discussing how to refine supply chains and create cost efficiencies using AI and machine learning. Among their goals: address labor shortages, improve throughput in distribution centers, and route deliveries more efficiently.

Take a box of cereal. Getting it from the warehouse to a home has never been more expensive. Employee turnover rates of 30 percent to 46 percent in warehouses and distribution centers are just part of the problem.

To mitigate the challenge, Dematic, a global materials-handling company, is evaluating software from companies like Kinetic Vision, which has developed computer vision applications on the NVIDIA AI platform that add intelligence to automated warehouse systems.

Companies like Kinetic Vision and SF Technology use video data from cameras to optimize every step of the package lifecycle, accelerating throughput by up to 20 percent and reducing conveyor downtime, which can cost retailers $3,000 to $5,000 a minute.

Autonomous robot companies such as Gideon, 6 River Systems and Symbotic also use the NVIDIA AI platform to improve distribution center throughput with their autonomous guided vehicles that transport material efficiently within the warehouse or distribution centers.

And with NVIDIA Fleet Command, which securely deploys, manages and scales AI applications via the cloud across distributed edge infrastructure, these solutions can be remotely deployed and managed securely and at scale across hundreds of distribution centers.

Digital Twins and Simulation

Improving layouts of stores and distribution centers also has become key to achieving cost efficiencies. NVIDIA Omniverse, a virtual world simulation and 3D design collaboration platform, makes it possible to virtually design and simulate distribution centers at full fidelity. Users can improve workflows and throughput with photorealistic, physically accurate virtual environments.

Retailers could, for example, develop a solution on the Omniverse platform to design, test and simulate the flow of material and employee processes in digital twins of their distribution centers and then bring those optimizations into the real world.

Digital human simulations could test new workflows for employee ergonomics and productivity. And robots are trained and operated with the NVIDIA Isaac robotics platform, creating the most efficient layout and workflows.

Kinetic Vision is using NVIDIA Omniverse to deliver digital twins technology and simulation to optimize factories and retail and consumer packaged goods distribution centers.

Leaning In

While manufacturers, supply chain operators and retailers each will have their own approaches to solving challenges, they’re leaning in on AI as a key differentiator.

Successfully implementing AI-enabled supply-chain management has enabled early adopters to improve logistics costs by 15 percent, inventory levels by 35 percent and service levels by 65 percent, compared with slower-moving competitors, according to McKinsey.

With some experts predicting the global supply chain won’t return to a new normal until at least 2023, companies are moving to take measures that matter most to the bottom line.

For more on how NVIDIA AI is powering the most innovative AI solutions for the supply chain and logistics industry, attend the following talks at Manifest:

  • A fireside chat, “Bringing Agility and Flexibility to Distribution Centers with AI,” on Wednesday, Jan. 26, at 2 p.m. Pacific, in Champagne 4 with Azita Martin, vice president and general manager of AI for retail at NVIDIA, and Michael Larsson, CEO of North America region at Dematic.
  • A presentation, “The Next Frontier in Warehouse Intelligence,” on the same date, at 11:30 a.m. Pacific, in Champagne 4 with Azita Martin and Omer Rashid, vice president of Solutions Designs at DHL Supply Chain, and Renato Bottiglieri, chief logistics officer at Eggo Kitchen & House.
