Banks require more than cash in the vault these days; they also need accelerated computing in the back room.
“The boost we’re getting with GPUs not only significantly improved our performance at the same cost, it helped us redefine our business and sharpen our focus on customers,” said Marco Airoldi, who’s been head of financial engineering for more than 20 years at Mediobanca, a Milan-based banking group that provides lending and investment services in Europe.
High-performance computing is especially important for investment banks, whose services involve computationally intensive transactions on baskets of securities and derivative products.
Thanks, in part, to its GPU-powered systems, Mediobanca is thriving amid the current market downturn.
“We can’t disclose numbers, but I can tell you with a good degree of confidence I don’t think we’ve had more than a dozen negative days in the last 250 trading days,” said Stefano Dova, head of markets at Mediobanca, who holds a Ph.D. in finance.
That’s, in part, because Airoldi’s team enabled real-time risk management on GPUs early in the year.
“It’s a fundamental step forward,” said Dova, who plays his electric piano or clarinet to unwind at the end of a stressful day. “You can lose money on a daily basis in the current market volatility, but we’ve been very happy with the results we’ve had in the last six months.”
Sharing the Wealth
Now, Mediobanca is preparing to offer its customers the same computing capabilities it enjoys.
“Because the GPUs are so fast, we can offer clients the ability to build their own products and see their risk profiles in real time, so they can decide where and when to invest — you can only do this if you have the computational power for live pricing,” Dova said.
The service, now in final testing, puts customers at the center of the bank’s business. It uses automation made possible by the parallel computing capabilities of the bank’s infrastructure, Airoldi notes.
Next Stop: Machine Learning
Looking further ahead, Airoldi’s group is mapping the investment bank’s journey into AI.
It starts with sentiment analysis, powered by natural language processing. That will help the bank understand market trends more deeply, so it can make even better investment decisions.
“AI will give us useful ways to map customer and investor behaviors, and we will invest in the technology to develop more AI apps for finance,” said Dova.
Mediobanca’s headquarters is in central Milan, around the corner from the famed Teatro alla Scala.
Their work comes as banks of all sorts are starting to apply AI to dozens of use cases.
“AI is one of the most promising technologies in finance,” said Airoldi, who foresees using it for classical quantitative problems, too.
It’s All About the Math
In the last few years, the bank has added dozens of GPUs to its infrastructure. Each offers up to 100x the performance of a CPU, he said.
That means Mediobanca can do more with less: the bank reduces its total cost of ownership while accelerating workloads that create competitive advantages, such as the Monte Carlo simulations used to create and price advanced investment products.
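For a flavor of the math involved (though not Mediobanca’s actual models), here’s a minimal Monte Carlo sketch that prices a European call option under standard Black-Scholes assumptions. It uses NumPy; one common route to the GPU speedups described above is swapping in a drop-in GPU array library such as CuPy, since every step below is a large, parallel array operation.

```python
import numpy as np

# Illustrative parameters (not from the article).
spot, strike = 100.0, 105.0    # current underlying price, option strike
rate, vol, t = 0.03, 0.2, 1.0  # risk-free rate, volatility, years to expiry
n_paths = 1_000_000

rng = np.random.default_rng(42)
z = rng.standard_normal(n_paths)

# Simulate terminal prices under geometric Brownian motion.
terminal = spot * np.exp((rate - 0.5 * vol**2) * t + vol * np.sqrt(t) * z)

# The discounted average payoff estimates the option's fair price.
payoff = np.maximum(terminal - strike, 0.0)
price = np.exp(-rate * t) * payoff.mean()
print(f"Monte Carlo call price: {price:.4f}")
```

Real pricing engines simulate full paths for baskets and exotic payoffs, but the structure stays the same: millions of independent scenarios reduced to a single statistic, which is exactly the shape of work that maps well onto GPUs.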
Under the hood, great financial performance is based on excellence in math, said Airoldi, who earned his Ph.D. in theoretical condensed matter physics.
“The mathematical models and numeric methods of finance are closely related to those found in theoretical physics, so investment banking is a great job for a physicist,” he said.
When Airoldi needs a break from work, you might find him playing chess in the Piazza della Scala, across from the famed opera house, just around the corner from the bank’s headquarters.
Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.
The short film I Am Not Afraid! by creative studio Fabian&Fred embodies childlike wonder, curiosity and imagination this week In the NVIDIA Studio.
Plus, the NVIDIA Studio #WinterArtChallenge shows no signs of letting up, so learn more and check out featured artwork at the end of this blog. Keep those posts coming.
For inspiration, watch NVIDIA artist Jeremy Lightcap and Adobe Substance 3D expert Pierre Maheut create a winter-themed scene in NVIDIA Omniverse, a platform for creating and operating metaverse applications that enables artists to connect their favorite 3D design tools for a more seamless workflow.
Invoke Emotion
For almost a decade, Fabian&Fred co-founders Fabian Driehorst and Frederic Schuld have focused on relatable narratives — stories understood by audiences of all ages — while not shying away from complex emotional and social topics.
The short film’s hero Vanja faces her fears.
One of their latest works, I Am Not Afraid!, features a little girl named Vanja who discovers that being brave means facing your own fears, and that everyone, even the bigger personalities in this world, is scared now and again.
“Everybody knows how it feels to be afraid of the dark,” said Fabian&Fred.
The concept for the film started when director and Norwegian native Marita Mayer shared her childhood experiences with the team. These emotional moments profoundly shaped the work’s flat, minimal, layer-based visual style and its color system.
“We combined structures from nature, brush strokes used for texture, and a kid’s voice — all designed to ensure the feeling of fear was authentic,” said the team.
With the script in hand, pre-production work included various sketches, moodboards and photographs of urban neighborhoods, people, animals and plants to match the narrative tone.
Moodboards and sketches assist in tone.
Work began in the Adobe Creative Cloud suite of creative apps, starting with the creation of multiple characters in Adobe Photoshop. These characters were then prepared and rigged in Adobe Animate.
Animated characters were used in Premiere Pro to create an animatic to test out voices and sounds. With the new GeForce RTX 40 Series GPUs, studios like Fabian&Fred can deploy NVIDIA’s dual encoders to cut export times nearly in half, speeding up review cycles for teams.
3D assets were modeled in Blender, with Cycles’ RTX-accelerated OptiX ray tracing in the viewport ensuring interactive modeling with sharp graphical output.
Preliminary sketches in Adobe Photoshop.
In parallel, large, detailed backgrounds were created in Adobe Illustrator with the GPU-accelerated canvas. Fabian&Fred were able to smoothly and interactively pan across, and zoom in and out of, their complex vector graphics, thanks to their GeForce RTX 3090 GPU.
Stunning backgrounds detailed in Adobe Illustrator.
Fabian&Fred returned to Adobe Animate to stage all assets and backgrounds with a mix of frame-by-frame and rig animation techniques. Sound production was done in the digital audio app Pro Tools, and final compositing was completed in Adobe After Effects, with more than 45 RTX GPU-accelerated features and effects at the duo’s disposal.
Finally, Fabian&Fred color-corrected I Am Not Afraid! using the RTX GPU-accelerated, AI-powered auto-color-correct feature in Blackmagic Design’s DaVinci Resolve to improve hues and contrast with ease. They then applied some final manual touches.
The new GeForce RTX 40 Series GPUs speed up AI tools in DaVinci Resolve, including Object Select Mask, which rotoscopes or highlights parts of motion footage frame by frame 70% faster than the previous generation, thanks to close collaboration with Blackmagic Design.
“We have worked closely with NVIDIA for many years, and we look forward to continuing our collaboration to produce even more groundbreaking tools and performance for creators,” said Rohit Gupta, director of software development at Blackmagic Design.
“Each project in our portfolio has benefited from reliable GeForce RTX GPU performance, whether it’s 2D animation or a photogrammetry-based, real-time 3D project.” – Fabian&Fred
Virtually every stage in Fabian&Fred’s creative workflow was made faster and easier with their GeForce RTX GPU. And while these powerful graphics cards are well known for accelerating the most difficult and complex workflows, they’re a boon for efficiency in smaller projects, as well.
Reflecting on their shared experiences, Fabian&Fred agreed that teamwork and diversity are their strengths. “In our studio, we come together from multicultural roots and make unique films as a team, with different methods, but the films have a truth in their heart that works for many people.”
Creative Studio Fabian&Fred.
View more of Fabian&Fred’s work on their Instagram page.
The Weather Outside Is Frightful, the #WinterArtChallenge Is Delightful
Enter NVIDIA Studio’s #WinterArtChallenge, running through the end of the year, by sharing winter-themed art on Instagram, Twitter or Facebook for a chance to be featured on our social media channels.
Like @mtw75 with Santa Claus and his faithful elves preparing gifts for all the good little boys and girls this holiday season.
Imagine trying to teach a toddler what a unicorn is. A good place to start might be by showing the child images of the creature and describing its unique features.
Now imagine trying to teach an artificially intelligent machine what a unicorn is. Where would one even begin?
Pretrained AI models offer a solution.
A pretrained AI model is a deep learning model — an expression of a brain-like neural algorithm that finds patterns or makes predictions based on data — that’s trained on large datasets to accomplish a specific task. It can be used as is or further fine-tuned to fit an application’s specific needs.
Why Are Pretrained AI Models Used?
Instead of building an AI model from scratch, developers can use pretrained models and customize them to meet their requirements.
To build an AI application, developers first need an AI model that can accomplish a particular task, whether that’s identifying a mythical horse, detecting a safety hazard for an autonomous vehicle or diagnosing cancer from medical imaging. That model needs a lot of representative data to learn from.
This learning process entails passing incoming data through several layers, emphasizing goal-relevant characteristics at each one.
To create a model that can recognize a unicorn, for example, one might first feed it images of unicorns, horses, cats, tigers and other animals. This is the incoming data.
Then, layers of representative data traits are constructed, beginning with the simple — like lines and colors — and advancing to complex structural features. These characteristics are assigned varying degrees of relevance by calculating probabilities.
The more a creature resembles a horse rather than, say, a cat or a tiger, the greater the likelihood that it’s a unicorn. Such probabilistic values are stored at each neural network layer in the AI model, and as layers are added, its understanding of the representation improves.
To create such a model from scratch, developers require enormous datasets, often with billions of rows of data. These can be pricey and challenging to obtain, but compromising on data can lead to poor performance of the model.
Precomputed probabilistic representations — known as weights — save time, money and effort. A pretrained model is already built and trained with these weights.
Using a high-quality pretrained model with a large number of accurate representative weights leads to higher chances of success for AI deployment. Weights can be modified, and more data can be added to the model to further customize or fine-tune it.
Developers building on pretrained models can create AI applications faster, without having to worry about handling mountains of input data or computing probabilities for dense layers.
In other words, using a pretrained AI model is like getting a dress or a shirt and then tailoring it to fit your needs, rather than starting with fabric, thread and needle.
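To make the tailoring analogy concrete, here’s a hedged PyTorch sketch (assuming torchvision 0.13 or later) of that customization step: it loads an ImageNet-pretrained ResNet-50, freezes the pretrained weights and replaces only the final layer for a new two-class task, say, unicorn or not. The batch below is random stand-in data, purely for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a model pretrained on ImageNet -- the "precomputed weights."
model = models.resnet50(weights="IMAGENET1K_V2")

# Freeze the pretrained layers so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the classifier head for a two-class task (unicorn or not).
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative fine-tuning step on a dummy batch of 224x224 images.
images = torch.randn(8, 3, 224, 224)  # stand-in for real photos
labels = torch.randint(0, 2, (8,))    # stand-in for real labels
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```

All of the expensive representation learning happened before this script ever ran; the fine-tuning step only adjusts a single small layer.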
Pretrained AI models are often used for transfer learning and can be based on several model architecture types. One popular architecture type is the transformer model, a neural network that learns context and meaning by tracking relationships in sequential data.
According to Alfredo Ramos, senior vice president of platform at AI company Clarifai — a Premier partner in the NVIDIA Inception program for startups — pretrained models can cut AI application development time by up to a year and lead to cost savings of hundreds of thousands of dollars.
How Are Pretrained Models Advancing AI?
Since pretrained models simplify and quicken AI development, many developers and companies use them to accelerate various AI use cases.
Top areas in which pretrained models are advancing AI include:
Natural language processing. Pretrained models are used for translation, chatbots and other natural language processing applications. Large language models, often based on the transformer model architecture, are an extension of pretrained models. One example of a pretrained LLM is NVIDIA NeMo Megatron, one of the world’s largest AI models.
Computer vision. Like in the unicorn example above, pretrained models can help AI quickly recognize creatures — or objects, places and people. In this way, pretrained models accelerate computer vision, giving applications human-like vision capabilities across sports, smart cities and more.
Cybersecurity. Pretrained models provide a starting point to implement AI-based cybersecurity solutions and extend the capabilities of human security analysts to detect threats faster. Examples include digital fingerprinting of humans and machines, and detection of anomalies, sensitive information and phishing.
Art and creative workflows. Bolstering the recent wave of AI art, pretrained models can help accelerate creative workflows through tools like GauGAN and NVIDIA Canvas.
Pretrained AI models can be applied across industries beyond these, as their customization and fine-tuning can lead to infinite possibilities for use cases.
Where to Find Pretrained AI Models
Companies like Google, Meta, Microsoft and NVIDIA are inventing cutting-edge model architectures and frameworks to build AI models.
These are sometimes released on model hubs or as open source, enabling developers to fine-tune pretrained AI models, improve their accuracy and expand model repositories.
NVIDIA NGC — a hub for GPU-optimized AI software, models and Jupyter Notebook examples — includes pretrained models as well as AI benchmarks and training recipes optimized for use with the NVIDIA AI platform.
NVIDIA AI Enterprise, a fully managed, secure, cloud-native suite of AI and data analytics software, includes pretrained models without encryption. This allows developers and enterprises looking to integrate NVIDIA pretrained models into their custom AI applications to view model weights and biases, improve explainability and debug easily.
Thousands of open-source models are also available on hubs like GitHub, Hugging Face and others.
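As a quick example of what “finding” a pretrained model looks like in practice, this sketch pulls a small, publicly available sentiment classifier from the Hugging Face hub with the transformers library and runs a single prediction; the model ID is just one of thousands hosted there.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Any hub model ID works here; this is a small public sentiment classifier.
model_id = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Pretrained models save months of work.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])  # e.g. "POSITIVE"
```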
It’s important that pretrained models are trained using ethical data that’s transparent and explainable, privacy compliant, and obtained with consent and without bias.
NVIDIA Pretrained AI Models
To help more developers move AI from prototype to production, NVIDIA offers several pretrained models that can be deployed out of the box, including:
NVIDIA SegFormer, a transformer model for simple, efficient, powerful semantic segmentation — available on GitHub.
NVIDIA NeMo Megatron, the world’s largest customizable language model, as part of NVIDIA NeMo, an open-source framework for building high-performance and flexible applications for conversational AI, speech AI and biology.
NVIDIA StyleGAN, a style-based generator architecture for generative adversarial networks, or GANs. It uses transfer learning to generate infinite paintings in a variety of styles.
In addition, NVIDIA Riva, a GPU-accelerated software development kit for building and deploying speech AI applications, includes pretrained models in ten languages.
And MONAI, an open-source AI framework for healthcare research developed by NVIDIA and King’s College London, includes pretrained models for medical imaging.
It’s a wild GFN Thursday — The Witcher 3: Wild Hunt next-gen update will stream on GeForce NOW day and date, starting next week. Today, members can stream new seasons of Fortnite and Genshin Impact, alongside eight new games joining the library.
In addition, the newest GeForce NOW app is rolling out this week with support for syncing members’ Ubisoft Connect library of games, which helps them get into their favorite Ubisoft games even quicker.
Plus, gamers across the U.K., Netherlands and Poland have the first chance to pick up the new 13.3-inch HP Chromebook x360, built for extreme multitasking with an adaptive 360-degree design and great for cloud gaming. Each Chromebook purchase comes with a free one-month GeForce NOW Priority membership.
Triss the Season
CD PROJEKT RED releases the next-gen update for The Witcher 3: Wild Hunt — Complete Edition on Wednesday, Dec. 14. The update is free for anyone who owns the game on Steam, the Epic Games Store or GOG.com, and GeForce NOW members can take advantage of upgraded visuals across nearly all of their devices.
The next-gen update brings vastly improved visuals, a new photo mode, and content inspired by Netflix’s The Witcher series. It also adds RTX Global Illumination, as well as ray-traced ambient occlusion, shadows and reflections that add cinematic detail to the game.
Play as Geralt of Rivia on a quest to track down his adopted daughter Ciri, the Child of Prophecy and carrier of the powerful Elder Blood, across all your devices, without waiting for the update to download and install. GeForce NOW RTX 3080 and Priority members can play with RTX ON and NVIDIA DLSS to explore the beautiful open world of The Witcher at high frame rates on nearly any device, from Macs to mobile devices and more.
Get in Sync
The GeForce NOW 2.0.47 app update begins rolling out this week with support for syncing Ubisoft Connect accounts with your GeForce NOW library.
The 2.0.47 app update brings Ubisoft Connect library syncing.
Members will be able to get to their Ubisoft games faster and easier with this new game-library sync for Ubisoft Connect. Once synced, members will be automatically logged into their Ubisoft account across all devices when streaming supported GeForce NOW games purchased directly from the Ubisoft or Epic Games Store. These include titles like Rainbow Six Siege and Far Cry 6.
The update also adds improvements to voice chat with Chromebook built-in mics, as well as bug fixes. Look for the update to hit PC, Mac and browser clients in the coming days.
‘Tis the Seasons
Fortnite Chapter 4 is available to play in the cloud.
The action never stops on GeForce NOW. This week brings updates to some of the hottest titles streaming from the cloud, and eight new games to play.
Members can jump into Fortnite Chapter 4, now available on GeForce NOW. The chapter features a new island, newly forged weapons, a new realm and new ways to get around, whether riding a dirt bike or rolling around in a snowball. A new cast of combatants is also available, including Geralt of Rivia himself.
Genshin Impact’s Version 3.3 “All Senses Clear, All Existence Void” is also available to stream on GeForce NOW, bringing a new season of events, a new card game called the Genius Invokation TCG, and two powerful allies — the Wanderer and Faruzan — for more stories, fun and challenges.
Here’s the full list of games coming to the cloud this week:
A GeForce NOW paid membership makes a great present for the gamer in your life, so give the gift of gaming with a GeForce NOW gift card. It’s the perfect stocking stuffer or last-minute treat for yourself or a buddy.
Finally, with The Witcher 3: Wild Hunt — Complete Edition on the way, we need to know: which Geralt are you today? Tell us on Twitter or in the comments below.
As the autonomous vehicle industry enters the new year, it’s navigating toward even greater technology frontiers.
Next-generation vehicles won’t just be defined by autonomous driving capabilities. Everything from the design and production process to the in-vehicle experience is entering a new era of digitization, efficiency, safety and intelligence.
Delivering up to 2,000 trillion floating-point operations per second, DRIVE Thor unifies autonomous driving and cockpit functions on a single computer for unprecedented speed and efficiency.
In the coming year, the industry will see even more wide-ranging innovations begin to take hold, as industrial metaverse and cloud technologies become more prevalent.
Simulation technology for AV development has also flourished in the past year. New tools and techniques on NVIDIA DRIVE Sim, including using AI tools for training and validation, have narrowed the gap between the virtual and real worlds.
Here’s what to expect for intelligent transportation in 2023.
Enter the Metaverse
The same NVIDIA Omniverse platform that serves as the foundation of DRIVE Sim for AV development is also revolutionizing the automotive product cycle. Automakers can leverage Omniverse to unify the 3D design and simulation pipelines for vehicles, and build persistent digital twins of their production facilities.
With Omniverse, designers can collaborate across 3D software ecosystems from anywhere in the world, in real time. And with full-fidelity RTX ray tracing delivering physically accurate lighting, reflections and material behavior, vehicle designs can be evaluated more precisely before physical prototyping ever begins.
Production is the next step in this process, and it requires thousands of parts and workers moving in harmony. With Omniverse, automakers can develop a unified view of their manufacturing processes across plants to streamline operations.
Planners can access the full-fidelity digital twin of the factory, reviewing and optimizing it as needed. Every change can be quickly evaluated and validated virtually, then implemented in the real world to ensure maximum efficiency and ergonomics for factory workers.
Customers can also benefit from enhanced product experiences. Full-fidelity, real-time car configurators, 3D simulations of vehicles, demonstrations in augmented reality and virtual test drives all help bring the vehicle to the customer.
These technologies bridge the gap between the digital and the physical, as the buying experience evolves to include both physical retail spaces and online engagement.
Cloud Migration
As remote work becomes a permanent fixture, cloud capabilities are proving vital to growing industries, including transportation.
Looking ahead, AV developers will be able to access a comprehensive suite of services using NVIDIA Omniverse Cloud to design, deploy and experience metaverse applications anywhere. These applications include simulation, in-vehicle experiences and car configurators.
With cloud-based simulation, AV engineers can generate physically based sensor data and traffic scenarios to test and validate self-driving technology. Developers can also use simulation to design intelligent vehicle interiors.
An autonomous test vehicle running in simulation.
These next-generation cabins will feature personalized entertainment, including streaming content. With the NVIDIA GeForce NOW cloud gaming service, occupants will be able to stream more than 1,000 titles from the cloud into the vehicle while it’s charging or parked waiting for a pickup.
Additionally, Omniverse Cloud enables automakers to offer a virtual showroom for an immersive experience to customize a vehicle before purchasing it from anywhere in the world.
Individualized Interiors
Autonomous driving capabilities will deliver a smoother, safer driving experience for all road users. As driving functions become more automated across the industry, vehicle interiors are taking on a bigger role for automakers to create branded experiences.
In addition to gaming, advances in AI and in-vehicle compute are enabling a range of new infotainment technologies, including digital assistants, occupant monitoring, AV visualization, video conferencing and more.
AI and cloud technologies provide personalized infotainment experiences for every passenger.
With NVIDIA DRIVE Concierge, automakers can provide these features across multiple displays in the vehicle. And with software-defined, centralized compute, they can continuously add new capabilities over the air.
This emerging cloud-first approach is transforming every segment of the AV industry, from developing vehicles and self-driving systems to operating global fleets.
An earlier version of the blog incorrectly noted that the December Studio Driver was available today. Stay tuned for updates on this month’s driver release.
Time to tackle one of the most challenging tasks for aspiring moviemakers — creating aesthetically pleasing visual effects — courtesy of visual effects artist and filmmaker Jay Lippman this week In the NVIDIA Studio.
In addition, the new NVIDIA Omniverse Unreal Engine Connector 200.2 allows Unreal users to send and live-sync assets to the Omniverse Nucleus server — unlocking the ability to open, edit and sync Unreal with other creative apps — to build more expansive virtual worlds in complete ray-traced fidelity.
Plus, the NVIDIA Studio #WinterArtChallenge is delivering un-brr-lievable entries. Check out some of the featured artwork and artists at the end of this post.
(Inter)Stellar Video-Editing Tips and Tricks
A self-taught filmmaker, Lippman grew up a big fan of science fiction and horror. Most of the short sketches on his YouTube channel are inspired by movies and shows in that genre, he said, but his main inspiration for content derives from his favorite punk bands.
“I always admired that they didn’t rely on big record deals to get their music out there,” he said. “That whole scene was centered around this culture of DIY, and I’ve tried to bring that same mentality into the world of filmmaking, figuring out how to create what you want with the tools that you have.”
That independent spirit drives Make Your VFX Shots Look REAL — a sci-fi cinematic and tutorial that displays the wonder behind top-notch graphics and the know-how for making them your own.
Lippman uses Blackmagic Design’s DaVinci Resolve software for video editing, color correction, visual effects, motion graphics and audio post-production. His new GeForce RTX 4080 GPU enables him to edit footage while applying effects freely and easily.
Lippman took advantage of the new AV1 encoder found in DaVinci Resolve, OBS Studio and Adobe Premiere Pro via the Voukoder plug-in, encoding 40% faster and unlocking higher resolutions and crisper image quality.
“The GeForce RTX 4080 GPU is a no-brainer for anyone who does graphics-intensive work, video production or high-end streaming,” Lippman said.
The majority of Make Your VFX Shots Look REAL was built on DaVinci Resolve’s Fusion page, which features a node-based workflow with hundreds of 2D and 3D tools. He uploaded footage from his Blackmagic Pocket Cinema Camera in 6K resolution, then composited the VFX.
The artist started by refining motion blur, a key element of any movement in camera footage shot at 24 frames per second or higher. Animated elements like the blue fireball must include motion blur, or they’ll look out of place. Applying a transform node with motion blur, done faster with a GeForce RTX GPU, created the necessary realism, Lippman said.
Lippman then lit the scene and enhanced elements of the composition by adding light that was absent from the original footage. He created lighting and added hues using a probe modifier in DaVinci Resolve’s popular color corrector, a GPU-accelerated task.
The artist then matched movement, which is critical for adding 2D or 3D effects to footage. In this case, Lippman replaced the straightforward blue sky with a haunting, cloudy, gloomy gray. Within Fusion, Lippman selected the merge mode, connecting the sky with the composition. He then right-clicked the center of the video and used the Merge:1 Center, Modify With and Tracker position features, with minor adjustments, to complete the movement tracking.
Lippman rounded out his creative workflow with color matching. He said it’s critical to have the proper mental approach, alongside realistic expectations, while compositing VFX.
“Our goal is not to make our VFX shots look real, it’s to make them look like they were shot on the same camera, on the same lens, at the same time as the original footage,” said Lippman. “A big part of it is matching colors, contrast and overall brightness with all of the scene elements.”
Lippman color matched the sky, clouds and UFO by adding a color-corrector node to a single cloud node, tweaking the hue and likeness to match the rest of the sky. Edits were then applied to the remaining clouds. Lippman also applied a color-correction node to the UFO, tying up the scene with matching colors.
When it came time for final exports, the exclusive NVIDIA dual encoders found in GeForce RTX 40 Series GPUs slashed Lippman’s export time by half. This can help freelancers like him meet sharp deadlines. The dual encoders can be found in Adobe Premiere Pro (via the popular Voukoder plug-in), Jianying Pro (China’s top video-editing app) and DaVinci Resolve.
“The GeForce RTX 4080 is a powerhouse and definitely gives you the ability to do more with less,” he said. “It’s definitely faster than the dual RTX 2080 GPU setup I’d been using and twice as fast as the RTX 3080 Ti, while using less power and costing around the same. Plus, it unlocks the AV1 Codec in DaVinci Resolve and streaming in AV1.”
Visual effects artist, filmmaker and gearhead Jay Lippman.
As AI plays a greater role in creative workflows, video editors can explore DaVinci Resolve’s vast suite of RTX-accelerated, AI-powered features that are an incredible boon for efficiency.
These include Face Refinement, which detects facial features for fast touch-ups such as sharpening eyes and subtle relighting; Speed Warp, which quickly creates super-slow-motion videos; and Detect Scene Cuts, which uses DaVinci Resolve’s neural engine to predict video cuts without manual edits.
The Unreal Engine Connector Arrives
NVIDIA Omniverse Unreal Engine Connector 200.2 supports enhancements to non-destructive live sessions with Omni Live 2.0, allowing for more robust real-time collaboration on Universal Scene Description (USD) assets within Unreal Engine. The connector’s content browser now supports thumbnails from Nucleus for Omniverse USD and open-source Material Definition Language (MDL) assets, creating a more intuitive user experience.
The Omniverse Unreal Engine Connector also supports updates to Unreal Engine 5.1, including:
Lumen — Unreal Engine 5’s fully dynamic global illumination and reflections system that renders diffuse interreflection with infinite bounces and indirect specular reflections in large, detailed environments at massive scale.
Nanite — the virtualized geometry system using internal mesh formats and rendering technology to render pixel-scale detail at high object counts.
Virtual Shadow Maps — to deliver consistent, high-resolution shadowing that works with film-quality assets and large, dynamically lit open worlds.
Omniverse Unreal Engine Connector supports versions 4.27, 5.0 and 5.1 of Unreal Editor. View the complete release notes.
Weather It’s Cold or Not, the #WinterArtChallenge Carries On
Enter NVIDIA Studio’s #WinterArtChallenge, running through the end of the year, by sharing winter-themed art on Instagram, Twitter or Facebook for a chance to be featured on our social media channels.
@lowpolycurls’ 3D winter scene — with its unique, 2D painted-on style textures — gives us all the warm feels during a cold winter night.
From rapidly fluctuating demand to staffing shortages and supply chain complexity, enterprises have navigated numerous challenges the past few years. Many companies seeking strong starts to 2023 are planning to use AI and accelerated computing to drive growth while saving costs.
To support these early adopters — as well as those just beginning their AI journey — NVIDIA has announced a new version of its NVIDIA AI Enterprise software suite to support businesses worldwide across a wide range of domain and industry-specific workloads.
NVIDIA AI Enterprise 3.0 will introduce workflows for contact center intelligent virtual assistants, audio transcription and digital fingerprinting for cybersecurity — some of the most common applications for enterprises adopting AI to better serve customers.
Expected to be available later this month, NVIDIA AI Enterprise 3.0 also expands support for more than 50 NVIDIA AI software frameworks and pretrained models available on the NVIDIA NGC software catalog, supercharging and simplifying AI deployments for organizations globally.
Deutsche Bank Announces Innovation Partnership With NVIDIA
The new software release arrives as Deutsche Bank today announced its plans to partner with NVIDIA to accelerate the use of AI in financial services as part of its strategy for developing AI-powered speech, vision and fraud detection applications within the industry.
“AI and machine learning will redefine banking, and we’re already working closely with NVIDIA to lead the industry in leveraging these technologies to improve customer service and mitigate risk,” said Gil Perez, chief innovation officer and head of Cloud & Innovation Network at Deutsche Bank. “Accelerated computing enables traders to manage risk and run more scenarios faster while also improving energy efficiency, and NVIDIA AI Enterprise provides the flexibility to support AI development across our hybrid infrastructure.”
NVIDIA AI Enterprise includes best-in-class development tools, frameworks and pretrained models for AI practitioners and reliable management and orchestration for IT professionals to ensure performance, high availability and security.
New NVIDIA AI Enterprise Workflows Speed Success for Businesses
This latest version of our secure, cloud-native suite of AI software enables organizations to solve business challenges while increasing operational efficiency. It accelerates the data science pipeline and streamlines the development and deployment of AI models to automate essential processes and gain rapid insights from data.
The new AI workflows for contact center intelligent virtual assistants, audio transcription and cybersecurity digital fingerprinting in NVIDIA AI Enterprise 3.0 leverage NVIDIA expertise to reduce development time and costs to speed time to deployment.
The workflows run as cloud-native microservices using NVIDIA AI frameworks and pretrained models, as well as Helm charts, Jupyter notebooks and more. Enterprises can deploy the microservices as standalone Kubernetes containers or combine them with other services to create production-ready applications with greater accuracy and performance.
The contact center intelligent virtual assistant AI solution workflow enables enterprises to respond to customers around the clock to reduce wait times and free up time for human contact center agents to support more complex inquiries — all while reducing costs. Using the workflow, enterprises can develop agents that deliver personalized and precise responses in natural-sounding voices. By leveraging AI, the agents can better understand customers even on calls with poor audio quality.
With the audio transcription AI solution workflow, enterprises can rapidly create accurate transcripts in English, Spanish, Mandarin, Hindi, Russian, Korean, German, French and Portuguese using NVIDIA automatic speech recognition technology, with Japanese, Arabic and Italian expected to be added soon. The transcription workflow leverages fully customizable GPU-optimized models to enable better understanding, contextual insights and sentiment analysis with real-time accuracy. Enterprises can use the completed transcripts to improve product development and speed training of contact center agents.
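To sketch how such a transcription service is typically invoked, the snippet below follows the documented offline-recognition flow of the open nvidia-riva-client Python package against a running Riva server. The endpoint and file name are placeholders, and the exact client API should be verified against the Riva docs for your version.

```python
import riva.client

# Connect to a running Riva server (placeholder address).
auth = riva.client.Auth(uri="localhost:50051")
asr = riva.client.ASRService(auth)

config = riva.client.RecognitionConfig(
    language_code="en-US",  # one of the supported languages
    max_alternatives=1,
    enable_automatic_punctuation=True,
)
# Helper that fills in sample rate and channel count from the file header.
riva.client.add_audio_file_specs_to_config(config, "support_call.wav")

# Offline (batch) recognition of a recorded contact center call.
with open("support_call.wav", "rb") as f:
    response = asr.offline_recognize(f.read(), config)
print(response.results[0].alternatives[0].transcript)
```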
Using unsupervised learning, the digital fingerprinting AI solution workflow employs threat detection to achieve comprehensive data visibility. It improves security by helping enterprises uniquely fingerprint every user, service, account and machine across the network to detect anomalous behavior. Once deployed, the workflow provides intelligent alerts and actionable information to reduce detection time from weeks to minutes to help security analysts quickly identify and act on threats.
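NVIDIA’s production workflow is built on Morpheus, but the core idea of fingerprinting is easy to see in a much-simplified sketch: train an unsupervised model on an account’s typical behavior, then flag events that deviate from that learned pattern. The features and numbers below are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Invented per-event features for one account:
# [logins_per_hour, megabytes_transferred, distinct_hosts_contacted]
rng = np.random.default_rng(0)
normal_activity = rng.normal(loc=[2.0, 50.0, 3.0],
                             scale=[0.5, 10.0, 1.0],
                             size=(500, 3))

# Fit an unsupervised model of this account's typical behavior.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_activity)

# Score new events: 1 = consistent with the fingerprint, -1 = anomalous.
new_events = np.array([
    [2.1, 48.0, 3.0],     # looks like business as usual
    [40.0, 900.0, 60.0],  # sudden burst of logins and data movement
])
print(detector.predict(new_events))  # e.g. [ 1 -1]
```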
Pretrained Models Support Explainability and Understanding
NVIDIA AI Enterprise 3.0 also features unencrypted pretrained models and source code from the latest release of NVIDIA TAO Toolkit, a low-code AI development solution for creating highly accurate, customized, production-ready AI models for speech and computer vision AI applications.
The unencrypted models are exclusively available with NVIDIA AI Enterprise and support a variety of imaging and vision AI tasks for healthcare, smart cities and retail, such as pathology tumor detection, people detection, vehicle detection, pose estimation and action recognition.
Using the pretrained models without encryption enables developers to view the weights and biases of the model, which can help in model explainability and understanding model bias. In addition, unencrypted models are easier to debug and easier to integrate into custom AI applications.
NVIDIA AI Enterprise 3.0 also introduces support for a broad range of NVIDIA AI frameworks and infrastructure options:
NVIDIA Clara Parabricks and MONAI improve healthcare: New support for NVIDIA Clara Parabricks enables faster, more accurate genomic analysis for sequencing centers, clinical labs, genomics researchers and genomics instrument manufacturers. NVIDIA AI Enterprise also supports MONAI, a domain-specific medical imaging AI framework that provides pretrained models and a collaborative, scalable workflow for data labeling and training robust AI models.
NVIDIA AI frameworks to boost customer service, safety, sales and more: The 50+ frameworks and pretrained models now supported in NVIDIA AI Enterprise 3.0 include NVIDIA Riva, a GPU-accelerated speech AI software development kit for building and deploying fully customizable, real-time AI pipelines that deliver world-class accuracy in all leading clouds, on premises, at the edge and on embedded devices. NVIDIA Morpheus enables cybersecurity developers to create optimized AI pipelines for filtering, processing and classifying large volumes of real-time data. SDKs in the NVIDIA Metropolis intelligent video analytics platform, such as TAO Toolkit and NVIDIA DeepStream for vision AI, are supported, as is the NVIDIA Merlin open-source framework for building high-performing recommender systems at scale.
Expanded certification for the cloud: With NVIDIA AI Enterprise 3.0, organizations with a hybrid cloud strategy now have the flexibility to run the software on GPU-accelerated instances from Oracle Cloud Infrastructure. Customers who purchase a license through one of NVIDIA’s channel partners can deploy in OCI with full certification and support from NVIDIA on designated OCI instances. This is in addition to existing NVIDIA AI Enterprise certification for accelerated instances from Amazon Web Services, Microsoft Azure and more.
Hewlett Packard Enterprise and NVIDIA extend AI support for hybrid data centers: HPE and NVIDIA will deliver a joint offering that provides support for the NVIDIA AI Enterprise 3.0 on HPE GreenLake and HPE Ezmeral. The solution allows customers to speed up AI application development, securely, by easily procuring and deploying NVIDIA AI Enterprise on a managed HPE GreenLake instance.
Broadened storage and virtualization support: NVIDIA AI Enterprise 3.0 now supports NVIDIA Magnum IO GPUDirect Storage, which provides a direct data path between local or remote storage and GPU memory to further speed AI workloads. It also delivers expanded virtualization options, including Red Hat Enterprise Linux with KVM and VMware vSphere 8.
NVIDIA AI Enterprise is available now. Customers can contact NVIDIA partners worldwide for pricing. NVIDIA AI Enterprise 3.0 is expected to be available for customers with current and new subscriptions later this month. A license for NVIDIA AI Enterprise is also included with servers from NVIDIA partners that feature NVIDIA H100 PCIe GPUs, including systems from Dell Technologies, Hewlett Packard Enterprise, Lenovo and Supermicro.
Enterprises can grow their AI expertise by trying NVIDIA AI workflows and frameworks supported in NVIDIA AI Enterprise on NVIDIA LaunchPad at no charge.
The announcement follows months of testing to explore use cases that could support the bank’s strategic ambitions to 2025 and beyond.
“Accelerated computing and AI are at a tipping point, and we’re bringing them to the world’s enterprises through the cloud,” said NVIDIA founder and CEO Jensen Huang. “Every aspect of future business will be supercharged with insight and intelligence operating at the speed of light.
“Together with Deutsche Bank, we are modernizing and reimagining the way financial services are operated and delivered,” he added.
The potential is enormous. McKinsey estimates that AI technologies could deliver up to $1 trillion of additional value yearly for global banking.
Frankfurt-based Deutsche Bank is a leading global investment bank with more than 80,000 employees in 58 countries worldwide.
Deutsche Bank’s initiatives promise to speed efforts to serve customers worldwide, develop new data-driven products and services, increase efficiency and recruit tech talent.
Together, Deutsche Bank and NVIDIA have initially focused on three potential implementations, with a multiyear ambition to expand to more than 100 use cases the companies are exploring.
With NVIDIA AI Enterprise software, Deutsche Bank’s AI developers, data scientists and IT professionals will be able to build and run AI workflows anywhere, including in its hosted on-premises data centers and on Google Cloud, the bank’s public cloud provider. (In related news, NVIDIA today announced NVIDIA AI Enterprise 3.0.)
Next-Generation Risk Management
Price discovery, risk valuation and model backtesting require computationally intensive calculations on massive traditional CPU-driven server grid farms. Accelerated compute delivers more accurate results in real time, helping provide more value to customers while lowering total costs by as much as 80%.
Many bank functions that typically run overnight, like risk valuation, can now be performed in real time on accelerated compute.
This represents a leap forward in how traders can manage risk by running more scenarios faster on a more energy-efficient grid farm.
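As a toy illustration of why this workload parallelizes so well (and emphatically not the bank’s actual models), the sketch below revalues a small linear portfolio under a million simulated market scenarios in a single vectorized pass and reads off a 99% value-at-risk. Swapping NumPy for a GPU array library such as CuPy is one common way to move this kind of grid computation onto accelerators.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy portfolio: positions (units held) and current prices for 50 instruments.
positions = rng.uniform(-100, 100, size=50)
prices = rng.uniform(10, 500, size=50)

# Simulate 1,000,000 one-day return scenarios (simplified to independent draws).
returns = rng.normal(0.0, 0.02, size=(1_000_000, 50))

# Revalue the whole portfolio under every scenario in one matrix product.
pnl = (returns * prices) @ positions

# The 99% one-day value-at-risk is the loss at the 1st percentile of P&L.
var_99 = -np.percentile(pnl, 1)
print(f"99% one-day VaR: {var_99:,.0f}")
```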
Redefining Personalized Customer Service With Interactive Avatars
Deutsche Bank is exploring how to engage employees, potential recruits and customers more interactively, improving experiences using 3D virtual avatars in real time, 24 hours a day, seven days a week.
An early potential implementation enabled Deutsche Bank to create a 3D virtual avatar to help employees navigate internal systems and respond to HR-related questions.
Future use cases will explore immersive metaverse experiences with banking clients.
Deriving Insights Out of Unstructured Data
Extracting critical information from unstructured data has long been challenging. But existing large language models don’t perform well on financial texts.
Transformers, a type of neural network introduced in 2017 that learns context, and thus meaning, from sequential data, could change this.
A single pretrained model can perform amazing feats — including text generation, translation and even software programming — and is the basis of the new generation of AI.
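As a small, concrete example of that versatility (using small public checkpoints, not the Finformers under development), the sketch below asks two pretrained transformers from the Hugging Face transformers library to generate text and translate a sentence, with no task-specific training on our part.

```python
from transformers import pipeline

# One pretrained transformer generates free text...
generator = pipeline("text-generation", model="gpt2")
print(generator("Counterparty risk rises when",
                max_new_tokens=20)[0]["generated_text"])

# ...while another pretrained checkpoint translates English to German.
translator = pipeline("translation_en_to_de", model="t5-small")
print(translator("The bank reported strong quarterly results.")[0]["translation_text"])
```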
Deutsche Bank and NVIDIA are testing a collection of large language models called Financial Transformers, or Finformers.
These systems will have the potential to provide early warning signs of counterparty risk, retrieve data faster and identify data quality issues.
Training, testing and validating autonomous vehicles requires a continuous pipeline — or data factory — to introduce new scenarios and refine deep neural networks.
A key component of this process is simulation. AV developers can test a virtually limitless number of scenarios, repeatably and at scale, with high-fidelity, physically based simulation. And like much of the technology related to AI, simulation is constantly evolving and improving, getting ever nearer to closing the gap between the real and virtual worlds.
Matt Cragun, senior product manager for AV simulation at NVIDIA, joined the AI Podcast to discuss the development of simulation for self-driving technology, detailing the origins and inner workings of DRIVE Sim.
He also provided a sneak peek into the frontiers researchers are exploring for this critical testing and validation technology.
Neural Reconstruction Engine in NVIDIA DRIVE Sim
NVIDIA researchers have developed an AI pipeline, known as the Neural Reconstruction Engine, that constructs a 3D scene from recorded sensor data in NVIDIA DRIVE Sim.
First demonstrated at GTC22, these AI tools bring the real world directly into simulation to increase realism and speed up autonomous vehicle production.
NRE uses multiple AI networks to create interactive 3D test environments where developers can modify the scene and see how the world reacts. Developers can change scenarios, add synthetic objects and apply randomizations — such as a child following a bouncing ball into the road — making the initial scenarios even more challenging.
Teaching the AI brains of autonomous vehicles to understand the world as humans do requires billions of miles of driving experience. The road to achieving this astronomical level of driving leads to the virtual world. Learn how Waabi uses powerful high-fidelity simulations to train and develop production-level autonomous vehicles.
Driving enjoyment and autonomous driving capabilities can complement one another in intelligent, sustainable vehicles. Learn about Polestar’s plans to unveil its third vehicle, the Polestar 3, the tech inside it, and what the company’s racing heritage brings to the intersection of smarts and sustainability.
Humans playing games against machines is nothing new, but now computers can develop their own games for people to play. Programming enthusiast and social media influencer Harrison Kinsley created GANTheftAuto, an AI-based neural network that generates a playable chunk of the classic video game Grand Theft Auto V.
Subscribe to the AI Podcast: Now Available on Amazon Music
For every minute that a stroke is left untreated, the average patient loses nearly 2 million neurons. This means that for each hour in which treatment fails to occur, the brain loses as many neurons as it does in more than three and a half years of normal aging.
With one of the world’s first portable brain scanners for stroke diagnosis, Australia-based healthcare technology developer EMVision is on a mission to enable quicker triage and treatment to reduce such devastating impacts.
The device from EMVision, an NVIDIA Inception member, fits like a helmet and can be used at the point of care and in ambulances for prehospital stroke diagnosis. It relies on electromagnetic imaging technology and uses NVIDIA-powered AI to distinguish between ischaemic and haemorrhagic strokes — clots and bleeds — in just minutes.
A cart-based version of the device, built using the NVIDIA Jetson edge AI platform and NVIDIA DGX systems, can also help with routine monitoring of a patient post-intervention to inform their progress and recovery.
“With EMVision, the healthcare community can access advanced, portable solutions that will assist in making critical decisions and interventions earlier, when time is of the essence,” said Ron Weinberger, CEO of EMVision. “This means we can provide faster stroke diagnosis and treatment to ensure fewer disability outcomes and an improved quality of life for patients.”
Point-of-Care Diagnosis
Traditional neuroimaging techniques, like CT scans and MRIs, produce excellent images but require large, stationary, complex machines and specialist operators, Weinberger said. This limits point-of-care accessibility.
The EMVision device is designed to scan the brain wherever the patient may be — in an ambulance or even at home if monitoring a patient who has a history of stroke.
“Whether for a new, acute stroke or a complication of an existing stroke, urgent brain imaging is required before correct triage, treatment or intervention decisions can be made,” Weinberger said.
The startup has developed and validated novel electromagnetic brain scanner hardware and AI algorithms capable of classifying and localizing a stroke, as well as creating an anatomical reconstruction of the patient’s brain.
“NVIDIA accelerated computing has played an important role in the development of EMVision’s technology, from hardware verification and algorithm development to rapid image reconstruction and AI-powered decision making,” Weinberger said. “With NVIDIA’s support, we are set to transform stroke diagnosis and care for patients around the world.”
EMVision uses NVIDIA DGX for hardware verification and optimization, as well as for prototyping and training AI models. EMVision has trained its AI models 10x faster using NVIDIA DGX compared with other systems, according to Weinberger.
Each brain scanner has an NVIDIA Jetson AGX Xavier module on board for energy-efficient AI inference at the edge. And the startup is looking to use NVIDIA Jetson Orin Nano modules for next-generation edge AI.
“The interactions between low-energy electromagnetic signals and brain tissue are incredibly complex,” Weinberger said. “Making sense of these signal interactions to identify if pathologies are present and recreate quality images wouldn’t be possible without the massive power of NVIDIA GPU-accelerated computing.”
As a member of NVIDIA Inception, a free, global program for cutting-edge startups, EMVision has shortened product development cycles and go-to-market time, Weinberger added.