Artem Cherkasov and Olexandr Isayev on Democratizing Drug Discovery With NVIDIA GPUs

It may seem intuitive that AI and deep learning can speed up workflows — including novel drug discovery, a typically years-long and several-billion-dollar endeavor.

But professors Artem Cherkasov and Olexandr Isayev were surprised to find that no recent academic papers provided a comprehensive, global research review of how deep learning and GPU-accelerated computing impact drug discovery.

In March, they published a paper in Nature to fill this gap, presenting an up-to-date review of the state of the art for GPU-accelerated drug discovery techniques.

Cherkasov, a professor in the department of urologic sciences at the University of British Columbia, and Isayev, an assistant professor of chemistry at Carnegie Mellon University, join NVIDIA AI Podcast host Noah Kravitz this week to discuss how GPUs can help democratize drug discovery.

In addition, the guests cover their inspiration and process for writing the paper, talk about NVIDIA technologies that are transforming the role of AI in drug discovery, and give tips for adopting new approaches to research.

You Might Also Like

Lending a Helping Hand: Jules Anh Tuan Nguyen on Building a Neuroprosthetic

Is it possible to manipulate things with your mind? Possibly. University of Minnesota postdoctoral researcher Jules Anh Tuan Nguyen discusses allowing amputees to control their prosthetic limbs with their thoughts, using neural decoders and deep learning.

AI of the Tiger: Conservation Biologist Jeremy Dertien on Real-Time Poaching Prevention

Fewer than 4,000 tigers remain in the wild due to a combination of poaching, habitat loss and environmental pressures. Clemson University’s Jeremy Dertien discusses using AI-equipped cameras to monitor poaching to protect a majority of the world’s remaining tiger populations.

Wild Things: 3D Reconstructions of Endangered Species with NVIDIA’s Sifei Liu

Studying endangered species can be difficult, as they’re elusive, and the act of observing them can disrupt their lives. Sifei Liu, a senior research scientist at NVIDIA, discusses how scientists can avoid these pitfalls by studying AI-generated 3D representations of these endangered species.

Subscribe to the AI Podcast: Now Available on Amazon Music

You can now listen to the AI Podcast through Amazon Music.

Also get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better: Have a few minutes to spare? Fill out our listener survey.


AI in the Big Easy: NVIDIA Research Lets Content Creators Improvise With 3D Objects

Jazz is all about improvisation — and NVIDIA is paying tribute to the genre with AI research that could one day enable graphics creators to improvise with 3D objects created in the time it takes to hold a jam session.

The method, NVIDIA 3D MoMa, could empower architects, designers, concept artists and game developers to quickly import an object into a graphics engine to start working with it, modifying scale, changing the material or experimenting with different lighting effects.

NVIDIA Research showcased this technology in a video celebrating jazz and its birthplace, New Orleans, where the paper behind 3D MoMa will be presented this week at the Conference on Computer Vision and Pattern Recognition.

Extracting 3D Objects From 2D Images

Inverse rendering, a technique to reconstruct a series of still photos into a 3D model of an object or scene, “has long been a holy grail unifying computer vision and computer graphics,” said David Luebke, vice president of graphics research at NVIDIA.

“By formulating every piece of the inverse rendering problem as a GPU-accelerated differentiable component, the NVIDIA 3D MoMa rendering pipeline uses the machinery of modern AI and the raw computational horsepower of NVIDIA GPUs to quickly produce 3D objects that creators can import, edit and extend without limitation in existing tools,” he said.

To be most useful for an artist or engineer, a 3D object should be in a form that can be dropped into widely used tools such as game engines, 3D modelers and film renderers. That form is a triangle mesh with textured materials, the common language used by such 3D tools.

Triangle meshes are the underlying frames used to define shapes in 3D graphics and modeling.

Game studios and other creators would traditionally create 3D objects like these with complex photogrammetry techniques that require significant time and manual effort. Recent work in neural radiance fields can rapidly generate a 3D representation of an object or scene, but not in a triangle mesh format that can be easily edited.

NVIDIA 3D MoMa generates triangle mesh models within an hour on a single NVIDIA Tensor Core GPU. The pipeline’s output is directly compatible with the 3D graphics engines and modeling tools that creators already use.

The pipeline’s reconstruction includes three features: a 3D mesh model, materials and lighting. The mesh is like a papier-mâché model of a 3D shape built from triangles. With it, developers can modify an object to fit their creative vision. Materials are 2D textures overlaid on the 3D meshes like a skin. And NVIDIA 3D MoMa’s estimate of how the scene is lit allows creators to later modify the lighting on the objects.
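
To make that pipeline concrete, here is a minimal sketch of the core idea behind differentiable inverse rendering, assuming PyTorch. The `render` function is a toy stand-in for a real differentiable rasterizer, and the tensor shapes are illustrative; NVIDIA 3D MoMa's actual pipeline is far more sophisticated.

```python
import torch

def render(vertices, material, lighting, camera):
    # Toy stand-in for a differentiable rasterizer: any differentiable
    # function of the scene parameters is enough to show the mechanics.
    shade = torch.tanh(vertices @ camera).mean()
    return shade + material.mean() + lighting.mean() * torch.ones(64, 64, 3)

# The three quantities the pipeline recovers, initialized randomly and learnable.
vertices = torch.randn(1000, 3, requires_grad=True)      # mesh geometry
material = torch.rand(256, 256, 3, requires_grad=True)   # 2D texture "skin"
lighting = torch.rand(16, 16, 3, requires_grad=True)     # lighting estimate

# Stand-in dataset of posed 2D photos (random tensors here).
dataset = [(torch.rand(64, 64, 3), torch.randn(3, 3)) for _ in range(8)]

optimizer = torch.optim.Adam([vertices, material, lighting], lr=1e-2)
for epoch in range(100):
    for photo, camera in dataset:
        optimizer.zero_grad()
        rendered = render(vertices, material, lighting, camera)
        # The photometric loss back-propagates through every rendering
        # stage, updating shape, material and lighting simultaneously.
        loss = torch.nn.functional.mse_loss(rendered, photo)
        loss.backward()
        optimizer.step()
```

Because every stage is differentiable, the same kind of loop that trains a neural network can recover an editable scene, which is the insight Luebke describes above.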

Tuning Instruments for Virtual Jazz Band

To showcase the capabilities of NVIDIA 3D MoMa, NVIDIA’s research and creative teams started by collecting around 100 images each of five jazz band instruments — a trumpet, trombone, saxophone, drum set and clarinet — from different angles.

NVIDIA 3D MoMa reconstructed these 2D images into 3D representations of each instrument, represented as meshes. The NVIDIA team then took the instruments out of their original scenes and imported them into the NVIDIA Omniverse 3D simulation platform to edit.

Editing the 3D trumpet in NVIDIA Omniverse.

In any traditional graphics engine, creators can easily swap out the material of a shape generated by NVIDIA 3D MoMa, as if dressing the mesh in different outfits. The team did this with the trumpet model, for example, instantly converting its original plastic to gold, marble, wood or cork.

Creators can then place the newly edited objects into any virtual scene. The NVIDIA team dropped the instruments into a Cornell box, a classic graphics test for rendering quality. They demonstrated that the virtual instruments react to light just as they would in the physical world, with the shiny brass instruments reflecting brightly, and the matte drum skins absorbing light.

These new objects, generated through inverse rendering, can be used as building blocks for a complex animated scene — showcased in the video’s finale as a virtual jazz band.

The paper behind NVIDIA 3D MoMa will be presented in a session at CVPR on June 22 at 1:30 p.m. Central time. It’s one of 38 papers with NVIDIA authors at the conference. Learn more about NVIDIA Research at CVPR.


NVIDIA Joins Forum to Help Lay the Foundation of the Metaverse

The metaverse is the next big step in the evolution of the internet — the 3D web — which presents a major opportunity for every industry from entertainment to automotive to manufacturing, robotics and beyond.

That’s why NVIDIA is joining our partners in the Metaverse Standards Forum, an open venue for all interested parties to discuss and debate how best to build the foundations of the metaverse.

From a 2D to a 3D Internet 

The early internet of the ’70s and ’80s was accessed purely through text-based interfaces, UNIX shells and consoles. The ’90s introduced the World Wide Web, which made the internet accessible to millions by providing a more natural and intuitive interface with images and text combined into 2D worlds in the form of web pages.

The metaverse that is coming into existence is a 3D spatial overlay of the internet. It continues the trend of making the internet more accessible and more natural for humans by making the interface to the internet indistinguishable from our interface to the real world.

The 3D computer graphics and simulation technologies developed over the past three decades in CAD/CAM, visual effects and video games, combined with the computing power now available, have converged to a point where we can now start building such an interface.

A Place for Both Work and Play

For most people, the term metaverse primarily evokes thoughts of gaming or socializing. They’ll definitely be big, important use cases of the metaverse, but just like with the internet, it won’t be limited to them.

We use the internet for far more than play. Companies and industries run on the internet; it’s part of their essential infrastructure. We believe the same will be true for the emerging metaverse.

For example, retailers are opening virtual shops to sell real and virtual goods. Researchers are using digital twins to design and simulate fusion power plants.

BMW Group is developing a digital twin of an entire factory to more rapidly design and operate efficient and safe factories. NVIDIA is building an AI supercomputer to power a digital twin of the Earth to help researchers study and solve climate change.

A Lesson From the Web

The key to the success of the web from the very start in 1993 was the introduction of a standard and open way of describing a web page — HyperText Markup Language, or HTML. Without HTML’s adoption, we would’ve had disconnected islands on the web, each only linking within themselves.

Fortunately, the creators of the early web and internet understood that open standards — particularly for data formats — were accelerators of growth and a network effect.

The metaverse needs an equivalent to HTML to describe interlinked 3D worlds in glorious detail. Moving between 3D worlds using various tools, viewers and browsers must be seamless and consistent.

The solution is Pixar’s Universal Scene Description (USD) — an open and extensible format, library and composition engine.
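
To give a flavor of what an HTML-like description of a 3D world looks like, here is a minimal sketch using USD's open-source Python bindings; the file name and prim paths are illustrative.

```python
from pxr import Usd, UsdGeom

# Create a new USD stage, the root container for a composed 3D scene.
stage = Usd.Stage.CreateNew("hello_world.usda")

# Define a transform with a single triangle mesh beneath it.
UsdGeom.Xform.Define(stage, "/World")
mesh = UsdGeom.Mesh.Define(stage, "/World/Triangle")
mesh.CreatePointsAttr([(0, 0, 0), (1, 0, 0), (0, 1, 0)])
mesh.CreateFaceVertexCountsAttr([3])          # one face with three vertices
mesh.CreateFaceVertexIndicesAttr([0, 1, 2])   # indices into the points array

# Save the human-readable .usda file, which any USD-aware tool can open.
stage.GetRootLayer().Save()
```

Like an HTML page, the resulting file is plain text that any conforming viewer, browser or tool can load, compose and extend.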

USD is one of many of the building blocks we’ll need to build the metaverse. Another is glTF, a 3D transmission format developed within Khronos Group. We see USD and glTF as compatible technologies and hope to see them coevolve as such.

A Constellation of Standards

Neil Trevett, vice president of developer ecosystems at NVIDIA and the president of The Khronos Group, the forum’s host, says the metaverse will require a constellation of standards.

The forum won’t set standards itself, but it will be a place where designers and users can learn about and try the ones they want to use, and identify any that are missing or need to be expanded.

We’re thrilled to see the formation of the Metaverse Standards Forum — a free and open venue where people from every domain can gather to contribute to the exciting new era of the internet: the metaverse!


3D Artist Jae Solina Goes Cyberpunk This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology accelerates creative workflows. 

3D artist Jae Solina, who goes by the stage name JSFILMZ, steps In the NVIDIA Studio this week to share his unique 3D creative workflow in the making of Cyberpunk Short Film — a story shrouded in mystery with a tense exchange between two secretive contacts.

As an avid movie buff, JSFILMZ takes inspiration from innovative movie directors Christopher Nolan, David Fincher and George Lucas. He admires their abilities to combine technical skill with storytelling heightened by exciting plot twists.

The Cyberpunk Short Film setting displays stunning realism with ray-traced lighting, shadows and reflections — complemented by rich, vibrant colors.

Astonishingly, JSFILMZ created the film in just one day with the NVIDIA Omniverse platform for 3D design collaboration and world simulation, using the Omniverse Machinima app and the Reallusion iClone Connector. He alternated between systems that use an NVIDIA RTX A6000 GPU and a GeForce RTX 3070 Laptop GPU.

The #MadeinMachinima contest ends soon. Omniverse users can build and animate cinematic short stories with Omniverse Machinima for a chance to win RTX-accelerated NVIDIA Studio laptops. Entries are being accepted until Monday, June 27. 

An Omniverse Odyssey With Machinima 

JSFILMZ’s creative journey starts with scene building in Omniverse Machinima, placing and moving background objects to create the futuristic cyberpunk diner. His RTX GPUs power Omniverse’s built-in RTX renderer to achieve fast, interactive movement within the viewport while preserving photorealistic detail. Less distracting denoising lets JSFILMZ focus on creating without having to wait for his scenes to render.

Ray-traced light reflects off the rim of the character’s glasses, achieving impressive photorealism.

Pulling assets from the NVIDIA MDL material library, JSFILMZ achieved peak realism with every surface, material and texture.


The artist then populated the scene with human character models downloaded from the Reallusion content store.

Automated facial animation in Reallusion iClone.

Vocal animation was generated in the Reallusion iClone Connector using the AccuLips feature. It simulates human speech behavior, with each mouth shape naturally taking on the qualities of those that precede or follow it. JSFILMZ simply uploads voiceover files from his actors, and the animations are generated automatically.


To capture animations while sitting, JSFILMZ turned to an Xsens Awinda starter body-motion-capture suit, acting out movements for both characters. Using the Xsens software, he processed, cleaned up and exported the visual effects data.


JSFILMZ integrated unique walking animations for each character by searching and selecting the perfect animation sequences in the Reallusion ActorCore store. He returned to the iClone Connector to import and apply separate motion captures to the characters, completing animations for the scene.

The last 3D step was to adjust lighting. For tips on how to light in Omniverse, check out JSFILMZ’s live-streamed tutorial, which offers Omniverse know-how and his lighting technique.

“Cyberpunk Short Film” by 3D artist JSFILMZ.

According to JSFILMZ, adding and manipulating lights revealed another advantage of using Machinima: the ability to conveniently switch between real-time ray-traced mode for more fluid movement in the viewport and the interactive path-traced mode for the most accurate, detailed view.

He then exported final renders with ray tracing using the Omniverse RTX Renderer, which is powered by NVIDIA RTX or GeForce RTX GPUs.

Working with multiple 3D applications connected by Omniverse saved JSFILMZ countless hours of rendering, downloading files, converting file types, reuploading and more. “It’s so crazy that I can do all this, all at home,” he said.

Completing Cyberpunk Short Film required editing and color correction in DaVinci Resolve.

The NVIDIA hardware encoder enables speedy exports.

Color grading, video editing and color scope features deployed by JSFILMZ are all accelerated by his GPU, allowing for quick edits. And the NVIDIA hardware encoder and decoder make GPU-accelerated exports very fast.

And with that, Cyberpunk Short Film was ready for viewing.

3D artists can benefit from JSFILMZ’s NVIDIA Omniverse tutorial YouTube playlist. It’s an extensive overview of the Omniverse platform for creators, covering the basics from installation and setup to in-app features such as lighting, rendering and animating.

3D artist and YouTube content creator Jae Solina, aka JSFILMZ.

JSFILMZ teaches 3D creative workflows specializing in NVIDIA Omniverse and Unreal Engine 5 on his YouTube channel and via Udemy courses.

Learn more about NVIDIA Omniverse, including tips, tricks and more on the Omniverse YouTube channel. For additional support, explore the Omniverse forums or join the Discord server to chat with the community. Check out the Omniverse Twitter, Instagram and Medium page to stay up to date.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the NVIDIA Studio newsletter.


NVIDIA Accelerates Open Data Center Innovation

NVIDIA today became a founding member of the Linux Foundation’s Open Programmable Infrastructure (OPI) project, while making its NVIDIA DOCA networking software APIs widely available to foster innovation in the data center.

Businesses are embracing open data centers, which require applications and services that are easily integrated with other solutions for simplified, lower-cost and sustainable management. Moving to open NVIDIA DOCA will help develop and nurture broad and vibrant DPU ecosystems and power unprecedented data center transformation.

The OPI project aims to create a community-driven, standards-based, open ecosystem for accelerating networking and other data center infrastructure tasks using DPUs.

DOCA includes drivers, libraries, services, documentation, sample applications and management tools to speed up and simplify application development and improve performance. It provides flexibility and portability for BlueField applications written using accelerated drivers or low-level libraries, such as DPDK, SPDK, Open vSwitch or OpenSSL. We plan to continue this support. As part of OPI, developers will be able to create a common programming layer to support many of these open drivers and libraries with DPU acceleration.

DOCA library APIs are already publicly available and documented for developers. Open licensing of these APIs will ensure that applications developed using DOCA will support BlueField DPUs as well as those from other providers.

DOCA has always been built on an open foundation. Now NVIDIA is opening the APIs to the DOCA libraries and plans to add OPI support.

Expanding Use of DPUs

AI, containers and composable infrastructure are increasingly important for enterprise and cloud data centers. This is driving the use of DPUs in servers to support software-defined, hardware-accelerated networking, east-west traffic and zero-trust security.

Only the widespread deployment of DPUs such as NVIDIA BlueField can offload, accelerate and isolate data center workloads, including networking, storage, security and DevOps management.

NVIDIA’s history of open innovation over the decades includes engaging with leading consortiums, participating in standards committees and contributing to a range of open source software and communities.

We contribute frequently to open source and open-license projects and software such as the Linux kernel, DPDK, SPDK, NVMe over Fabrics, FreeBSD, Apache Spark, Free Range Routing, SONiC, Open Compute Project and other areas covering networking, virtualization, containers, AI, data science and data encryption.

NVIDIA is often among the top three code contributors to many releases of Linux and DPDK. And we’ve historically included an open source version of our networking drivers in the Linux kernel.

With OPI, customers, ISVs, infrastructure appliance vendors and systems integrators will be able to create applications for BlueField DPUs using DOCA to gain the best possible performance and easiest developer experience for accelerated data center infrastructure.


The King’s Swedish: AI Rewrites the Book in Scandinavia

If the King of Sweden wants help drafting his annual Christmas speech this year, he could ask the same AI model that’s available to his 10 million subjects.

As a test, researchers prompted the model, called GPT-SW3, to draft one of the royal messages, and it did a pretty good job, according to Magnus Sahlgren, who heads research in natural language understanding at AI Sweden, a consortium kickstarting the country’s journey into the machine learning era.

“Later, our minister of digitalization visited us and asked the model to generate arguments for political positions and it came up with some really clever ones — and he intuitively understood how to prompt the model to generate good text,” Sahlgren said.

Early successes inspired work on an even larger and more powerful version of the language model they hope will serve any citizen, company or government agency in Scandinavia.

A Multilingual Model

The current version packs 3.6 billion parameters and is smart enough to do a few cool things in Swedish. Sahlgren’s team aims to train a state-of-the-art model with a whopping 175 billion parameters that can handle all sorts of language tasks in the Nordic languages of Swedish, Danish, Norwegian and, it hopes, Icelandic, too.

For example, a startup can use it to automatically generate product descriptions for an e-commerce website given only the products’ names. Government agencies can use it to quickly classify and route questions from citizens.

Companies can ask it to rapidly summarize reports so they can react fast. Hospitals can run distilled versions of the model privately on their own systems to improve patient care.
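
As a hypothetical illustration of the product-description use case above, a sketch using the Hugging Face pipeline interface might look like this; the model id is a placeholder, since GPT-SW3 itself is distributed through AI Sweden's early access program.

```python
from transformers import pipeline

# Placeholder model id, not GPT-SW3's actual distribution channel.
generator = pipeline("text-generation", model="your-org/swedish-gpt")

# Prompt with only the product name ("Fjäll Pro hiking boots"),
# asking the model to continue with a product description.
prompt = "Produktnamn: Vandringskängor Fjäll Pro\nProduktbeskrivning:"
print(generator(prompt, max_new_tokens=60)[0]["generated_text"])
```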

“It’s a foundational model we will provide as a service for whatever tasks people want to solve,” said Sahlgren, who’s been working at the intersection of language and machine learning since he earned his Ph.D. in computational linguistics in 2006.

Permission to Speak Freely

It’s a capability increasingly seen as a strategic asset, a keystone of digital sovereignty in a world that speaks thousands of languages across nearly 200 countries.

Most language services today focus on Chinese or English, the world’s two most-spoken tongues. They’re typically created in China or the U.S., and they aren’t free.

“It’s important for us to have models built in Sweden for Sweden,” Sahlgren said.

Small Team, Super System

“We’re a small country and a core team of about six people, yet we can build a state-of-the-art resource like this for people to use,” he added.

That’s because Sweden has a powerful engine in Berzelius, a 300-petaflops AI supercomputer at Linköping University. It trained the initial GPT-SW3 model using just 16 of the 60 nodes in the NVIDIA DGX SuperPOD.

The next model may exercise all the system’s nodes. Such super-sized jobs require super software like the NVIDIA NeMo Megatron framework.

“It lets us scale our training up to the full supercomputer, and we’ve been lucky enough to have access to experts in the NeMo development team — without NVIDIA it would have been so much more complicated to come this far,” he said.

A Workflow for Any Language

NVIDIA’s engineers created a recipe based on NeMo and an emerging process called p-tuning that optimizes massive models fast, and it’s geared to work with any language.

In one early test, a model nearly doubled its accuracy after NVIDIA engineers applied the techniques.

Magnus Sahlgren, AI Sweden

What’s more, it requires one-tenth the data, slashing the need for tens of thousands of hand-labeled records. That opens the door for users to fine-tune a model with the relatively small, industry-specific datasets they have at hand.
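
For a rough sense of the prompt-learning family that p-tuning belongs to, the sketch below freezes a base language model and trains only a handful of "virtual token" embeddings prepended to each input. Classic p-tuning additionally passes these tokens through a small prompt encoder, omitted here for brevity, and NeMo Megatron wraps the whole process at supercomputer scale.

```python
import torch
import torch.nn as nn

class SoftPromptModel(nn.Module):
    def __init__(self, base_lm, embed_dim, num_virtual_tokens=20):
        super().__init__()
        self.base_lm = base_lm
        # Freeze the (potentially billions of) base model weights...
        for p in self.base_lm.parameters():
            p.requires_grad = False
        # ...and train only these few virtual-token embeddings.
        self.soft_prompt = nn.Parameter(
            torch.randn(num_virtual_tokens, embed_dim) * 0.02)

    def forward(self, input_embeds):
        # Prepend the learned virtual tokens to every sequence in the batch.
        batch_size = input_embeds.size(0)
        prompt = self.soft_prompt.unsqueeze(0).expand(batch_size, -1, -1)
        return self.base_lm(torch.cat([prompt, input_embeds], dim=1))
```

Because gradients flow only into the prompt embeddings, far less labeled data and compute are needed than for full fine-tuning, consistent with the data savings described above.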

“We hope to inspire a lot of entrepreneurship in industry, startups and the public using our technology to develop their own apps and services,” said Sahlgren.

Writing the Next Chapter

Meanwhile, NVIDIA’s developers are already working on ways to make the enabling software better.

One test shows great promise for transferring capabilities trained on widely available English datasets into models designed for any language. In another effort, they’re using the p-tuning techniques in inference jobs so models can learn on the fly.

Zenodia Charpy, a senior solutions architect at NVIDIA based in Gothenburg, shares the enthusiasm of the AI Sweden team she supports. “We’ve only just begun trying new and better methods to tackle these large language challenges — there’s much more to come,” she said.

The GPT-SW3 model will be made available by the end of the year via an early access program. To apply, contact francisca.hoyer@ai.se.


Smart Utility Vehicle: NIO ES7 Redefines Category with Intelligent, Versatile EV Powered by NVIDIA DRIVE Orin

Accounting for nearly half of global vehicle sales in 2021, SUVs have grown in popularity given their versatility. Now, NIO aims to amp up the volume further.

This week, the electric automaker unveiled the ES7 SUV, purpose-built for the intelligent vehicle era. Its sporty yet elegant body houses an array of cutting-edge technology, including the Adam autonomous driving supercomputer, powered by NVIDIA DRIVE Orin.

SUVs gained a foothold among consumers in the late 1990s as useful haulers for people and cargo. As powertrain and design technology developed, the category has flourished, with some automakers converting their fleets to mostly SUVs and trucks.

With the ES7, NIO is adding even more to the SUV category, packing it with plenty of features to please any driver.

The intelligent EV sports 10 driving modes, in addition to autonomous capabilities that will gradually cover expressways, urban areas, parking and battery swapping. It also includes a camping mode that maintains a comfortable cabin temperature with lower power consumption and immersive audio and lighting.

Utility Meets Technology

The technology inside the ES7 is the core of what makes it a category-transforming vehicle.

The SUV is the first to incorporate NIO’s watchtower sensor design, combining 33 high-performance lidars, radars, cameras and ultrasonic sensors arranged around the vehicle. Data from these sensors is fused and processed by the centralized Adam supercomputer for robust surround perception.

With more than 1,000 trillion operations per second (TOPS) of performance provided by four DRIVE Orin systems-on-a-chip (SoCs), Adam can power a wide range of intelligent features in addition to perception, with enough headroom to add new capabilities over the air.

Using multiple SoCs, Adam integrates the redundancy and diversity necessary for safe autonomous operation. The first two SoCs process the 8 gigabytes of data produced every second by the vehicle’s sensor set.

The third Orin serves as a backup to ensure the system can operate safely in any situation. And the fourth enables local training, improving the vehicle with fleet learning and personalizing the driving experience based on individual user preferences.

With high-performance compute at its center, the ES7 delivers everything an SUV customer could need, and more.

A Growing Lineup

The ES7 joins the ET7 and ET5 as the third NIO vehicle built on the DRIVE Orin-powered Adam supercomputer, adding even greater selection for customers seeking a more intelligent driving experience.

NIO intends to have vehicle offerings in more than two dozen countries and regions by 2025 to bring one of the most advanced AI platforms to more customers.

Preorders for the ES7 SUV are now open on the NIO app, with deliveries slated to begin in August.


AI for Personalized Health: Startup Advances Precision Medicine for COVID-19, Chronic Diseases

At a time when much about COVID-19 remained a mystery, U.K.-based PrecisionLife used AI and combinatorial analytics to discover new genes associated with severe symptoms and hospitalizations for patients.

The techbio company’s study, published in June 2020, pinpoints 68 novel genes associated with individuals who experienced severe disease from the virus. Over 70 percent of these targets have since been independently validated in global scientific literature as genetic risk factors for severe COVID-19 symptoms.

The startup was able to perform this early and accurate analysis on the first small COVID-19 patient dataset reported in the UK Biobank, with the help of AI trained on NVIDIA A40 GPUs and backed by CUDA software libraries. PrecisionLife’s combinatorial analytics approach identifies interactions between genetic variants and other clinical or epidemiological factors in patients.

Results are shown in the featured image above, which depicts the disease architecture stratification of a severe COVID-19 patient population at the pandemic’s outset. Colors represent patient subgroups. Circles represent disease-associated genetic variants. And lines represent co-associated variants.

PrecisionLife technology helps researchers better understand complex disease biology at a population and personal level. Beyond COVID-19, the PrecisionLife analytics platform has been used to identify targets for precision medicine for more than 30 chronic diseases, including type 2 diabetes and ALS.

The company is a member of NVIDIA Inception, a free program that supports startups revolutionizing industries with cutting-edge technology.

Unique Disease Findings

Precision medicine considers an individual’s genetics, environment and lifestyle when selecting the treatment that could work best for them. PrecisionLife focuses on identifying how combinations of such factors impact chronic diseases.

The PrecisionLife platform enables a deeper understanding of the biology that leads to chronic disease across subgroups of patients. It uses combinatorial analytics to draw insights from the genomics and clinical history of patients — pulled from datasets provided by national biobanks, research consortia, patient charities and more.

Due to the inherent heterogeneity of chronic diseases, patients with the same diagnosis don’t necessarily experience the same causes, trajectories or treatments of disease.

The PrecisionLife platform identifies subgroups — within large patient populations — that have matching disease drivers, disease progression and treatment response. This can help researchers select the right targets for drug development, the right treatments for individuals and the right patients for clinical trials.

“Chronic disease is a complex space — a multi-genetic, multi-environmental problem with multiple patient subgroups,” said Mark Strivens, chief technology officer at PrecisionLife. “We work on technology to tackle problems that previous techniques couldn’t solve, and our unique disease findings will lead to a different set of therapeutic opportunities to best treat individuals.”

PrecisionLife technology is different from traditional analytical methods, like genome-wide association studies, which work best when single genetic variants are responsible for most of the disease risk. Instead, PrecisionLife offers combinatorial analytics, discovering significant combinations of multiple genetic and environmental factors.
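
In spirit, combinatorial analysis tests groups of factors jointly rather than one variant at a time. Below is a toy sketch on synthetic data, assuming binary variant calls and a plain Fisher's exact test; PrecisionLife's GPU-accelerated platform searches vastly larger combination spaces with more sophisticated statistics.

```python
from itertools import combinations

import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(0)
genotypes = rng.integers(0, 2, size=(500, 40))       # 500 patients x 40 variants
is_case = rng.integers(0, 2, size=500).astype(bool)  # severe-disease label

significant = []
for combo in combinations(range(genotypes.shape[1]), 3):
    carriers = genotypes[:, combo].all(axis=1)       # has all three variants
    table = [[int((carriers & is_case).sum()), int((carriers & ~is_case).sum())],
             [int((~carriers & is_case).sum()), int((~carriers & ~is_case).sum())]]
    _, p_value = fisher_exact(table)
    if p_value < 1e-4:  # naive threshold; real pipelines correct for multiple testing
        significant.append((combo, p_value))
```

Even this toy version makes the scaling problem clear: testing all three-variant combinations of just 40 variants already means nearly 10,000 tests, which is why GPU acceleration matters at biobank scale.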

The PrecisionLife platform can analyze data from 100,000 patients in just hours using NVIDIA A40 GPUs, a previously impossible feat, according to Strivens.

Plus, being a member of NVIDIA Inception gives the PrecisionLife team access to technical resources, hardware discounts and go-to-market support.

“Inception gives us access to technical expertise and connects us with other data-driven organizations that are a part of NVIDIA’s biotechnology AI ecosystem,” Strivens said. “Training from the NVIDIA Deep Learning Institute reduces the time it takes for our team members to ramp up learning a specific branch of programming.”

As a part of the groundbreaking U.K. life sciences community, PrecisionLife has access to a hub of healthcare innovation and specialist talent, Strivens said. Looking forward, the company plans to deliver new disease insights based on combinatorial analytics all across the globe.

Learn more about PrecisionLife and apply to join NVIDIA Inception.

Subscribe to NVIDIA healthcare news.


Get Your Wish: Genshin Impact Coming to GeForce NOW

Greetings, Traveler.

Prepare for adventure. Genshin Impact, the popular open-world action role-playing game, is leaving limited beta and launching for all GeForce NOW members next week.

Gamers can get their game on today with the six games joining the GeForce NOW library this week.

As announced last week, Warhammer 40,000: Darktide is coming to the cloud at launch — with GeForce technology. This September, members will be able to leap thousands of years into the future to the time of the Space Marines, streaming on GeForce NOW with NVIDIA DLSS and more.

Plus, the 2.0.41 GeForce NOW app update brings a highly requested feature: in-stream copy-and-paste support from the clipboard while streaming from the PC and Mac apps — so there’s no need to enter a long, complex password for the digital store. Get to your games even faster with this new capability.

GeForce NOW is also giving mobile gamers more options by bringing the perks of RTX 3080 memberships and PC gaming at 120 frames per second to all devices with support for 120Hz phones. The capability is rolling out in the coming weeks.

Take a Trip to Teyvat

After the success of a limited beta and receiving great feedback from members, Genshin Impact is coming next week to everyone streaming on GeForce NOW.

Embark on a journey as a traveler from another world, stranded in the fantastic land of Teyvat. Search for your missing sibling in a vast continent made up of seven nations. Master the art of elemental combat and build a dream team of over 40 uniquely skilled playable characters, including the newest additions, Yelan and Kuki Shinobu, each with their own rich stories, personalities and combat styles.

Experience the immersive campaign, dive deep into rich quests alongside iconic characters and complete daily challenges. Charge head-on into battles solo or invite friends to join the adventures. The world is constantly expanding, so take it wherever you go, streaming soon to underpowered PCs, Macs and Chromebooks on GeForce NOW.

RTX 3080 members can level up their gaming for the best experience by streaming in 4K resolution and 60 frames per second on the PC and Mac apps.

Let the Gaming Commence

All of the action this GFN Thursday kicks off with six new games arriving on the cloud. Members can also gear up for Rainbow Six Siege Year 7 Season 2.

Get ready for a new Operator, Team Deathmatch map and more in “Rainbow Six Siege” Year 7 Season 2.

Members can look for the following streaming this week:

Finally, members still have a chance to stream the PC Building Simulator 2 open beta before it ends on Monday, June 20. Experience deeper simulation, an upgraded career mode and powerful new customization features to bring your ultimate PC to life.

To start your weekend gaming adventures, we’ve got a question. Let us know your thoughts on Twitter or in the comments below.


All-In-One Financial Services? Vietnam’s MoMo Has a Super-App for That

For younger generations, paper bills, loan forms and even cash might as well be in a museum. Smartphones in hand, their financial services largely take place online.

The financial-technology companies that serve them are in a race to develop AI that can make sense of the vast amount of data the companies collect — both to provide better customer service and to improve their own backend operations.

Vietnam-based fintech company MoMo has developed a super-app that includes payment and financial transaction processing in one self-contained online commerce platform. The convenience of this all-in-one mobile platform has already attracted over 30 million users in Vietnam.

To improve the efficiency of the platform’s chatbots, electronic know-your-customer (eKYC) systems and recommendation engines, MoMo uses NVIDIA GPUs running in Google Cloud. It uses NVIDIA DGX systems for training and batch processing.

In just a few months, MoMo has achieved impressive results in speeding the development of solutions that are more robust and easier to scale. Using NVIDIA GPUs for eKYC inference tasks has resulted in a 10x speedup compared with CPUs, the company says. For the MoMo Face Payment service, using TensorRT has reduced training and inference time by 10x.

AI Offers a Different Perspective

Tuan Trinh, director of data science at MoMo, describes his company’s use of AI as a way to get a different perspective on its business. One such project processes vast amounts of data and turns it into computerized visuals or graphs that can then be analyzed to improve connectivity between users in the app.

MoMo developed its own AI algorithm that uses over a billion data points to direct recommendations of additional services and products to its customers. These offerings maintain a line of communication with the company’s user base, boosting engagement and conversion.

The company also deploys a recommendation box on the home screen of its super-app. This has dramatically improved its click-through rate, as the AI prompts customers with useful recommendations and keeps them engaged.

With AI, MoMo says it can process the habits of 10 million active users over the last 30-60 days to train its predictive models. In addition, NVIDIA Triton Inference Server helps unify the serving flows for recommendation engines, which significantly reduces the effort to deploy AI applications in production environments. And TensorRT has contributed to a 3x performance improvement in MoMo’s payment services AI model inference, boosting the customer experience.
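
As an illustration of what a unified serving flow can look like from the client side, here is a hedged sketch using Triton's standard Python HTTP client. The model name, tensor names and shapes are hypothetical, not MoMo's actual deployment.

```python
import numpy as np
import tritonclient.http as httpclient

# Connect to a Triton server hosting a hypothetical recommendation model.
client = httpclient.InferenceServerClient(url="localhost:8000")

# One user's feature vector, matching the model's declared input shape.
user_features = np.random.rand(1, 128).astype(np.float32)
infer_input = httpclient.InferInput("USER_FEATURES", list(user_features.shape), "FP32")
infer_input.set_data_from_numpy(user_features)

requested = httpclient.InferRequestedOutput("ITEM_SCORES")
result = client.infer("recommender", inputs=[infer_input], outputs=[requested])

scores = result.as_numpy("ITEM_SCORES")  # item scores to rank for this user
```

Because every model behind the server is called the same way, new engines can be deployed without changing client code, which is the effort reduction described above.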

Chatbots Advance the Conversation

MoMo will use AI-powered chatbots to scale up faster when accommodating and engaging with users. Chatbot services are especially effective on mobile device apps, which tend to be popular with younger users, who often prefer them over making phone calls to customer service.

Chatbot users can inquire about a product and get the support they need to evaluate it before purchasing — all from one interface — which is essential for a super-app like MoMo’s that functions as a one-stop-shop.

The chatbots are also an effective vehicle for upselling or suggesting additional services, MoMo says. When combined with machine learning, it’s possible to categorize target audiences for different products or services to customize their experience with the app.

AI chatbots have the additional benefit of freeing up MoMo’s customer service team to handle other important tasks.

Better Credit Scoring

Using AI algorithms, credit history data from all of MoMo’s 30 million-plus users can feed the models that handle risk control for its financial services. MoMo has applied credit scoring to the lending services within its super-app. Because the company doesn’t depend solely on traditional deep learning for less complex tasks, its development team has been able to obtain higher accuracy with shorter processing times.

The MoMo app takes less than 2 seconds to make a lending decision, yet its more accurate AI predictions reduce the risky lending it takes on. This helps keep customers from taking on too much debt, and keeps MoMo from missing out on potential revenue.

Since AI is capable of processing both structured and unstructured data, it’s able to incorporate information beyond traditional credit scores, like whether customers spend their money on necessities or luxuries, to assess a borrower’s risk more accurately.
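
As a toy sketch of that idea, assuming invented behavioral features and synthetic labels, a model can score default risk from spending patterns rather than from a bureau score:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(7)

# Invented behavioral features per applicant: share of spending on
# necessities, share on luxuries and monthly wallet top-up count.
X = np.column_stack([
    rng.random(1000),
    rng.random(1000),
    rng.integers(0, 20, size=1000).astype(float),
])

# Synthetic label: defaults skew toward heavy luxury spenders, plus noise.
y = ((X[:, 1] > 0.7) ^ (rng.random(1000) < 0.1)).astype(int)

model = GradientBoostingClassifier().fit(X[:800], y[:800])
risk_scores = model.predict_proba(X[800:])[:, 1]  # default probability per applicant
```

A real system would train on actual repayment outcomes and fold in the unstructured signals the article mentions; this sketch only shows the structural idea of scoring without a credit bureau input.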

Future of AI in Fintech

With fintechs increasingly applying AI to their massive data stores, MoMo’s team predicts the industry will need to evaluate how to do so in a way that keeps user data safe — or risk losing customer loyalty. MoMo already plans to expand its use of graph neural networks and related models, given their proven ability to dramatically improve its operations.

The MoMo team also believes that AI could one day make credit scores obsolete. Since AI is able to make decisions based on broader unstructured data, it’s possible to determine loan approval by considering other risks besides a credit score. This would help open up the pool of potential users on fintech apps like MoMo’s to people in underserved and underbanked communities, who may not have credit scores, let alone “good” ones.

Around one in four American adults is “underbanked,” which makes it more difficult to get a loan or credit card, and more than half of Africa’s population is completely “credit invisible,” meaning they have no bank account or credit score. MoMo believes AI could bring banking access to communities like these and open up a new user base for fintech apps at the same time.

Explore NVIDIA’s AI solutions and enterprise-level AI platforms driving innovation in financial services. 
