Artificial intelligence is now a household term. Responsible AI is hot on its heels.
Julia Stoyanovich, associate professor of computer science and engineering at NYU and director of the university’s Center for Responsible AI, wants to make the terms “AI” and “responsible AI” synonymous.
In the latest episode of the NVIDIA AI Podcast, host Noah Kravitz spoke with Stoyanovich about responsible AI, her advocacy efforts and how people can help.
Stoyanovich started her work at the Center for Responsible AI with basic research. She soon realized that what was needed were better guardrails, not just more algorithms.
As AI’s potential has grown, along with the ethical concerns surrounding its use, Stoyanovich clarifies that the “responsibility” lies with people, not AI.
“The responsibility refers to people taking responsibility for the decisions that we make individually and collectively about whether to build an AI system and how to build, test, deploy and keep it in check,” she said.
AI ethics is a related concern, used to refer to “the embedding of moral values and principles into the design, development and use of the AI,” she added.
Lawmakers have taken notice. For example, New York recently implemented a law that makes job candidate screening more transparent.
According to Stoyanovich, “the law is not perfect,” but “we can only learn how to regulate something if we try regulating” and converse openly with the “people at the table being impacted.”
Stoyanovich wants two things: for people to recognize that AI can’t predict human choices, and for AI systems to be transparent and accountable, carrying a “nutritional label.”
That process should include considerations on who is using AI tools, how they’re used to make decisions and who is subjected to those decisions, she said.
Stoyanovich urges people to “start demanding actions and explanations to understand” how AI is used at local, state and federal levels.
“We need to teach ourselves to help others learn about what AI is and why we should care,” she said. “So please get involved in how we govern ourselves, because we live in a democracy. We have to step up.”
Editor’s note: This post is part of Into the Omniverse, a series focused on how artists and developers from startups to enterprises can transform their workflows using the latest advances in OpenUSD and NVIDIA Omniverse.
Real-time rendering, animation and texture baking are essential workflows for 3D art production. Using the Marmoset Toolbag software, 3D artists can enhance their creative workflows and build complex 3D models without disruptions to productivity.
The latest release of Marmoset Toolbag, version 4.06, brings increased support for Universal Scene Description, aka OpenUSD, enabling seamless compatibility with NVIDIA Omniverse, a development platform for connecting and building OpenUSD-based tools and applications.
3D creators and technical artists using Marmoset can now enjoy improved interoperability, accelerated rendering, real-time visualization and efficient performance — redefining the possibilities of their creative workflows.
Enhancing Cross-Platform Creativity With OpenUSD
Creators are taking their workflows to the next level with OpenUSD.
Berlin-based Armin Halač works as a principal animator at Wooga, a mobile games development studio known for projects like June’s Journey and Ghost Detective. The nature of his job means Halač is no stranger to 3D workflows — he gets hands-on with animation and character rigging.
For texturing and producing high-quality renders, Marmoset is Halač’s go-to tool, providing a user-friendly interface and powerful features to simplify his workflow. Recently, Halač used Marmoset to create the captivating cover image for his book, A Complete Guide to Character Rigging for Games Using Blender.
Using the added support for USD, Halač can seamlessly send 3D assets from Blender to Marmoset, creating new possibilities for collaboration and improved visuals.
The cover image of Halač’s book.
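For creators curious about the Blender-to-Marmoset handoff Halač describes, here is a minimal sketch of exporting a selection to OpenUSD from Blender’s own Python environment (the bpy module is only available inside Blender); the output path is a placeholder, and the exact export options depend on the asset.

```python
# Minimal sketch: export the selected Blender objects to an OpenUSD file that
# can then be opened in Marmoset Toolbag or Omniverse. Run inside Blender's
# Python console or as a script; the output path is a placeholder.
import bpy

bpy.ops.wm.usd_export(
    filepath="/tmp/character_asset.usd",  # placeholder output path
    selected_objects_only=True,           # export just the selected asset
    export_materials=True,                # keep material assignments for texturing
)
```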
Nkoro Anselem Ire, a.k.a. askNK, a popular YouTube creator and a media and visual arts professor at a couple of universities, is also seeing workflow benefits from increased USD support.
As a 3D content creator, he uses Marmoset Toolbag for the majority of his PBR workflow — from texture baking and lighting to animation and rendering. Now, with USD, askNK is enjoying newfound levels of creative flexibility as the framework allows him to “collaborate with individuals or team members a lot easier because they can now pick up and drop off processes while working on the same file.”
Halač and askNK recently joined an NVIDIA-hosted livestream where community members and the Omniverse team explored the benefits of a Marmoset- and Omniverse-boosted workflow.
Daniel Bauer is another creator experiencing the benefits of Marmoset, OpenUSD and Omniverse. A SolidWorks mechanical engineer with over 10 years of experience, Bauer works frequently in CAD software environments, where it’s typical to assign different materials to various scene components. The variance can often lead to shading errors and incorrect geometry representation, but using USD, Bauer can avoid errors by easily importing versions of his scene from Blender to Marmoset Toolbag to Omniverse USD Composer.
A Kuka Scara robot simulation with 10 parallel small grippers for sorting and handling pens.
Additionally, 3D artists Gianluca Squillace and Pasquale Scionti are harnessing the collaborative power of Omniverse, Marmoset and OpenUSD to transform their workflows from a convoluted series of exports and imports to a streamlined, real-time, interconnected process.
Squillace crafted a captivating 3D character with Pixologic ZBrush, Autodesk Maya, Adobe Substance 3D Painter and Marmoset Toolbag — aggregating the data from the various tools in Omniverse. With USD, he seamlessly integrated his animations and made real-time adjustments without the need for constant file exports.
Simultaneously, Scionti constructed a stunning glacial environment using Autodesk 3ds Max, Adobe Substance 3D Painter, Quixel and Unreal Engine, uniting the various pieces from his tools in Omniverse. His work showcased the potential of Omniverse to foster real-time collaboration as he was able to seamlessly integrate Squillace’s character into his snowy world.
Advancing Interoperability and Real-Time Rendering
Marmoset Toolbag 4.06 provides significant improvements to interoperability and image fidelity for artists working across platforms and applications. This is achieved through updates to Marmoset’s OpenUSD support, allowing for seamless compatibility and connection with the Omniverse ecosystem.
The improved USD import and export capabilities enhance interoperability with popular content creation apps and creative toolkits like Autodesk Maya and Autodesk 3ds Max, SideFX Houdini and Unreal Engine.
Other highlights of the 4.06 release include:
RTX-accelerated rendering and baking: Toolbag’s ray-traced renderer and texture baker are accelerated by NVIDIA RTX GPUs, providing up to a 2x improvement in render times and a 4x improvement in bake times.
Real-time denoising with OptiX: With NVIDIA RTX devices, creators can enjoy a smooth and interactive ray-tracing experience, enabling real-time navigation of the active viewport without visual artifacts or performance disruptions.
High DPI performance with DLSS image upscaling: The viewport now renders at a reduced resolution and uses AI-based technology to upscale images, improving performance while minimizing image-quality reductions.
Download Toolbag 4.06 directly from Marmoset to explore USD support and RTX-accelerated production tools. New users are eligible for a full-featured, 30-day free trial license.
Get Plugged Into the Omniverse
Learn from industry experts on how OpenUSD is enabling custom 3D pipelines, easing 3D tool development and delivering interoperability between 3D applications in sessions from SIGGRAPH 2023, now available on demand.
Anyone can build their own Omniverse extension or Connector to enhance their 3D workflows and tools. Explore the Omniverse ecosystem’s growing catalog of connections, extensions, foundation applications and third-party tools.
Share your Marmoset Toolbag and Omniverse work as part of the latest community challenge, #SeasonalArtChallenge. Use the hashtag to submit a spooky or festive scene for a chance to be featured on the @NVIDIAStudio and @NVIDIAOmniverse social channels.
NVIDIA founder and CEO Jensen Huang joined Hon Hai (Foxconn) Chairman and CEO Young Liu to unveil the latest in their ongoing partnership to develop the next wave of intelligent electric vehicle (EV) platforms for the global automotive market.
This latest move, announced today at the fourth annual Hon Hai Tech Day in Taiwan, will help Foxconn realize its EV vision with a range of NVIDIA DRIVE solutions — including NVIDIA DRIVE Orin today and its successor, DRIVE Thor, down the road.
In addition, Foxconn will be a contract manufacturer of highly automated and autonomous, AI-rich EVs featuring the upcoming NVIDIA DRIVE Hyperion 9 platform, which includes DRIVE Thor and a state-of-the-art sensor architecture.
Next-Gen EVs With Extraordinary Performance
The computational requirements for highly automated and fully self-driving vehicles are enormous. NVIDIA offers the most advanced, highest-performing AI car computers for the transportation industry, with DRIVE Orin selected for use by more than 25 global automakers.
Already a tier-one manufacturer of DRIVE Orin-powered electronic control units (ECUs), Foxconn will also manufacture ECUs featuring DRIVE Thor, once available.
The upcoming DRIVE Thor superchip harnesses advanced AI capabilities first deployed in NVIDIA Grace CPUs and Hopper and Ada Lovelace architecture-based GPUs — and is expected to deliver a staggering 2,000 teraflops of high-performance compute to enable functionally safe and secure intelligent driving.
Next-generation NVIDIA DRIVE Thor.
Heightened Senses
Unveiled at GTC last year, DRIVE Hyperion 9 is the latest evolution of NVIDIA’s modular development platform and reference architecture for automated and autonomous vehicles. Set to be powered by DRIVE Thor, it will integrate a qualified sensor architecture for level 3 urban and level 4 highway driving scenarios.
With a diverse and redundant array of high-resolution camera, radar, lidar and ultrasonic sensors, DRIVE Hyperion can process an extraordinary amount of safety-critical data to enable vehicles to deftly navigate their surroundings.
Another advantage of DRIVE Hyperion is its compatibility across generations, as it retains the same compute form factor and NVIDIA DriveWorks application programming interfaces, enabling a seamless transition from DRIVE Orin to DRIVE Thor and beyond.
Plus, DRIVE Hyperion can help speed development times and lower costs for electronics manufacturers like Foxconn, since the sensors available on the platform have cleared NVIDIA’s rigorous qualification processes.
The shift to software-defined vehicles with a centralized electronic architecture will drive the need for high-performance, energy-efficient computing solutions such as DRIVE Thor. By coupling it with the DRIVE Hyperion sensor architecture, Foxconn and its automotive customers will be better equipped to realize a new era of safe and intelligent EVs.
Since its inception, Hon Hai Tech Day has served as a launch pad for Foxconn to showcase its latest endeavors in contract design and manufacturing services and new technologies. These accomplishments span the EV sector and extend to the broader consumer electronics industry.
Generative AI is one of the most important trends in the history of personal computing, bringing advancements to gaming, creativity, video, productivity, development and more.
And GeForce RTX and NVIDIA RTX GPUs, which are packed with dedicated AI processors called Tensor Cores, are bringing the power of generative AI natively to more than 100 million Windows PCs and workstations.
Today, generative AI on PC is getting up to 4x faster via TensorRT-LLM for Windows, an open-source library that accelerates inference performance for the latest AI large language models, like Llama 2 and Code Llama. This follows the announcement of TensorRT-LLM for data centers last month.
NVIDIA has also released tools to help developers accelerate their LLMs, including scripts that optimize custom models with TensorRT-LLM, TensorRT-optimized open-source models and a developer reference project that showcases both the speed and quality of LLM responses.
TensorRT acceleration is now available for Stable Diffusion in the popular Automatic1111 Web UI distribution. It speeds up the generative AI diffusion model by up to 2x over the previous fastest implementation.
LLMs are fueling productivity — engaging in chat, summarizing documents and web content, drafting emails and blogs — and are at the core of new pipelines of AI and other software that can automatically analyze data and generate a vast array of content.
TensorRT-LLM, a library for accelerating LLM inference, gives developers and end users the benefit of LLMs that can now operate up to 4x faster on RTX-powered Windows PCs.
At higher batch sizes, this acceleration significantly improves the experience for more sophisticated LLM use — like writing and coding assistants that output multiple, unique auto-complete results at once. The result is accelerated performance and improved quality that lets users select the best of the bunch.
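To make the batching point concrete, here is a hedged sketch that requests several candidate completions in a single batched call, the kind of workload that benefits most from this acceleration. It uses the Hugging Face Transformers API rather than TensorRT-LLM itself, and the Llama 2 model name is an example that requires approved access to the weights.

```python
# Sketch: generate several unique completions in one batched call, the kind of
# workload that benefits from higher batch sizes. Uses Hugging Face Transformers
# for illustration (not TensorRT-LLM); the model name is an example and
# requires approved access to the Llama 2 weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # example model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Write a one-line docstring for a function that merges two sorted lists:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# One generate() call returns four candidate completions for the user to pick from.
outputs = model.generate(
    **inputs,
    max_new_tokens=48,
    do_sample=True,
    temperature=0.8,
    num_return_sequences=4,
)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```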
TensorRT-LLM acceleration is also beneficial when integrating LLM capabilities with other technology, such as in retrieval-augmented generation (RAG), where an LLM is paired with a vector library or vector database. RAG enables the LLM to deliver responses based on a specific dataset, like user emails or articles on a website, to provide more targeted answers.
To show this in practical terms, when the question “How does NVIDIA ACE generate emotional responses?” was asked of the Llama 2 base model, it returned an unhelpful response.
Better responses, faster.
Conversely, using RAG with recent GeForce news articles loaded into a vector library and connected to the same Llama 2 model not only returned the correct answer — using NeMo SteerLM — but did so much more quickly with TensorRT-LLM acceleration. This combination of speed and proficiency gives users smarter solutions.
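For developers who want to see the shape of that pipeline, below is a minimal RAG sketch under stated assumptions: embed() and generate() are hypothetical stand-ins for whatever embedding model and LLM inference endpoint (TensorRT-LLM or otherwise) is actually used, and the documents are placeholders.

```python
# Minimal RAG sketch (illustrative only, not NVIDIA's demo code). embed() and
# generate() are hypothetical stand-ins for a real embedding model and a real
# LLM inference endpoint such as TensorRT-LLM.
import numpy as np

documents = [
    "NVIDIA ACE is a suite of technologies for bringing game characters to life.",
    "Placeholder text from another GeForce news article ...",
]

def embed(text: str) -> np.ndarray:
    """Hypothetical embedding function; swap in a real embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

def generate(prompt: str) -> str:
    """Hypothetical LLM call; swap in a real inference endpoint."""
    return "<model response>"

# 1. Build a tiny in-memory vector index over the documents.
index = np.stack([embed(d) for d in documents])

# 2. Retrieve the document most similar to the user's question.
question = "How does NVIDIA ACE generate emotional responses?"
q = embed(question)
scores = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q) + 1e-9)
context = documents[int(scores.argmax())]

# 3. Ground the LLM's answer in the retrieved context.
print(generate(f"Answer using only this context:\n{context}\n\nQuestion: {question}"))
```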
TensorRT-LLM will soon be available to download from the NVIDIA Developer website. TensorRT-optimized open source models and the RAG demo with GeForce news as a sample project are available at ngc.nvidia.com and GitHub.com/NVIDIA.
Automatic Acceleration
Diffusion models, like Stable Diffusion, are used to imagine and create stunning, novel works of art. Image generation is an iterative process that can take hundreds of cycles to achieve the perfect output. When done on an underpowered computer, this iteration can add up to hours of wait time.
TensorRT is designed to accelerate AI models through layer fusion, precision calibration, kernel auto-tuning and other capabilities that significantly boost inference efficiency and speed. This makes it indispensable for real-time applications and resource-intensive tasks.
And now, TensorRT doubles the speed of Stable Diffusion.
Compatible with the most popular distribution, WebUI from Automatic1111, Stable Diffusion with TensorRT acceleration helps users iterate faster and spend less time waiting on the computer, delivering a final image sooner. On a GeForce RTX 4090, it runs 7x faster than the top implementation on Macs with an Apple M2 Ultra. The extension is available for download today.
The TensorRT demo of a Stable Diffusion pipeline provides developers with a reference implementation on how to prepare diffusion models and accelerate them using TensorRT. This is the starting point for developers interested in turbocharging a diffusion pipeline and bringing lightning-fast inferencing to applications.
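For developers curious what that preparation step can look like in code, here is a minimal sketch that uses the TensorRT Python API to compile an ONNX model into an FP16 engine. The file names are placeholders, real diffusion models typically also need an optimization profile for dynamic input shapes, and the official demo above remains the reference implementation.

```python
# Minimal sketch: compile an ONNX model (for example, a UNet exported from a
# diffusion pipeline) into an optimized TensorRT engine. File names are
# placeholders; dynamic input shapes also require an optimization profile.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

with open("unet.onnx", "rb") as f:  # placeholder ONNX model
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("Failed to parse ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # allow FP16 kernels where precision permits

serialized_engine = builder.build_serialized_network(network, config)
with open("unet.engine", "wb") as f:
    f.write(serialized_engine)
```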
Video That’s Super
AI is improving everyday PC experiences for all users. Streaming video — from nearly any source, like YouTube, Twitch, Prime Video, Disney+ and countless others — is among the most popular activities on a PC. Thanks to AI and RTX, it’s getting another update in image quality.
RTX VSR is a breakthrough in AI pixel processing that improves the quality of streamed video content by reducing or eliminating artifacts caused by video compression. It also sharpens edges and details.
Available now, RTX VSR version 1.5 further improves visual quality with updated models, de-artifacts content played in its native resolution and adds support for RTX GPUs based on the NVIDIA Turing architecture — both professional RTX and GeForce RTX 20 Series GPUs.
Retraining the VSR AI model helped it learn to accurately identify the difference between subtle details and compression artifacts. As a result, AI-enhanced images more accurately preserve details during the upscaling process. Finer details are more visible, and the overall image looks sharper and crisper.
RTX Video Super Resolution v1.5 improves detail and sharpness.
New with version 1.5 is the ability to de-artifact video played at the display’s native resolution. The original release only enhanced video when it was being upscaled. Now, for example, 1080p video streamed to a 1080p resolution display will look smoother as heavy artifacts are reduced.
RTX VSR now de-artifacts video played at its native resolution.
RTX VSR 1.5 is available today for all RTX users in the latest Game Ready Driver. It will be available in the upcoming NVIDIA Studio Driver, scheduled for early next month.
RTX VSR is among the NVIDIA software, tools, libraries and SDKs — like those mentioned above, plus DLSS, Omniverse, AI Workbench and others — that have helped bring over 400 AI-enabled apps and games to consumers.
The AI era is upon us. And RTX is supercharging it at every step of its evolution.
NVIDIA today announced an update to RTX Video Super Resolution (VSR) that delivers greater overall graphical fidelity with preserved details, upscaling for native videos and support for GeForce RTX 20 Series desktop and laptop GPUs.
For AI assists from RTX VSR and more — from enhanced creativity and productivity to blisteringly fast gaming — check out the RTX for AI page.
Plus, this week In the NVIDIA Studio, Twitch personality Runebee shares her inspiration, streaming tips and how she uses AI and RTX GPU acceleration.
And don’t forget to join the #SeasonalArtChallenge by submitting spooky Halloween-themed art in October and harvest- and fall-themed pieces in November. For inspiration, check out the hauntingly adorable work of artists like iryna.blender3d on Twitter.
The #SeasonalArtChallenge continues on with an incredible render from iryna.blender3d (IG).
RTX VSR’s AI model has been retrained to more accurately identify the difference between subtle details and compression artifacts to better preserve image details during the upscaling process. Finer details are more visible, and the overall image looks sharper and crisper than before.
RTX VSR v1.5 improves detail and sharpness.
RTX VSR version 1.5 will also de-artifact videos played at their native resolution — previously, only upscaled video could be enhanced. This provides a leap in graphical fidelity for laptop owners with 1080p screens: the updated RTX VSR makes 1080p video, a resolution popular for both content and displays, look smoother at its native resolution, even with heavy artifacts.
RTX VSR now de-artifacts video played at native resolution.
And with expanded RTX VSR support, owners of GeForce RTX 20 Series GPUs can benefit from the same AI-enhanced video as those using RTX 30 and 40 Series GPUs.
RTX VSR 1.5 is available as part of the latest Game Ready Driver, available for download today. Content creators using NVIDIA Studio Drivers — designed to enhance features, reduce repetitiveness and dramatically accelerate creative workflows — can get RTX VSR in the Studio Driver releasing in early November.
Runebee-lievable Streaming
Runebee has been livestreaming for over 10 years, providing a space for viewers to hang out and talk about games, movies or whatever else is going on in life. Over the years, she’s realized how common a desire for escapism is.
“Things aren’t always sunshine and rainbows, so it’s nice to have some company that can help take your mind off things,” said Runebee.
Runebee has amassed over 100K followers on Twitch, YouTube and Instagram, crediting her success to thorough preparation of her setup. Her technology-forward approach ensures efficiency and reliability — allowing her focus to be on performance.
“There’s a lot of planning involved in streaming, but at the end of the day, hitting the ‘start streaming’ button is the most important step, and NVIDIA GPU-acceleration is a massive factor in allowing it to go as smoothly as it does,” said Runebee.
“I never thought I’d have this smooth of a stream just by upgrading to a GeForce RTX 40 Series GPU.” – Runebee
OBS is Runebee’s preferred open-source software for video recording and livestreaming on Twitch. For maximum efficiency, Runebee relies on her GeForce RTX 4080 GPU, taking advantage of the eighth-generation NVIDIA encoder, NVENC, to handle video encoding on dedicated hardware, freeing up the rest of her system to focus on the game and the stream.
“Streaming games and running OBS used to kill my CPU, and NVENC has taken so much stress off,” said Runebee. “I was hardly even able to stream PC games until I switched to NVENC.”
For livestreamers, RTX 40 Series GPUs can offer support for real-time AV1 hardware encoding, providing a 40% efficiency boost compared to H.264 and delivering higher quality than competing GPUs.
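As a rough illustration of hardware AV1 encoding outside of OBS, the sketch below calls FFmpeg’s av1_nvenc encoder from Python. It assumes an FFmpeg build compiled with NVENC support and an RTX 40 Series GPU; the file names and bitrate are placeholders rather than recommended streaming settings.

```python
# Sketch: re-encode a recording with the AV1 hardware encoder (av1_nvenc) on
# RTX 40 Series GPUs. Assumes FFmpeg was built with NVENC support; file names
# and bitrate are placeholders.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "stream_recording.mp4",  # placeholder input file
        "-c:v", "av1_nvenc",           # NVENC AV1 hardware encoder
        "-b:v", "6M",                  # placeholder target bitrate
        "-c:a", "copy",                # leave the audio track untouched
        "output_av1.mp4",
    ],
    check=True,
)
```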
“As I started building more PCs with NVIDIA GPUs, I never had a reason to switch!” – Runebee
Runebee can export recordings of her livestreams with Adobe Premiere Pro in half the normally required time thanks to GeForce RTX 40 Series dual encoders working together, dividing the work evenly to double output.
The dual encoders are capable of recording up to 8K, 60 frames per second content in real time via GeForce Experience and OBS Studio.
Always looking to improve her livestreaming process, Runebee plans on experimenting with the NVIDIA Broadcast app, which transforms any room into a home studio by upgrading standard webcams, microphones and speakers into premium smart devices using the power of AI.
Runebee encourages those interested in livestreaming to at least give their potential passion project a shot. “It’s a great way to meet tons of new friends, become more articulate at describing the things you love — be it games or movies — and cultivate a community to share your passions with,” she said.
At SHoP Architects, a New York City-based architectural firm, Mengyi Fan and her team aim to inspire industry professionals to create visual masterpieces by incorporating emerging technologies.
Fan, the director of visualization at SHoP, has expertise that spans the fields of architectural visualization and design. She takes a definitive, novel and enduring approach to designing and planning architecture for city skylines and streetscapes.
Fan and her team work on various architecture visualization projects, from still renderings to real-time walkthroughs. They use multiple creative applications throughout the course of their projects, including Adobe Photoshop, Autodesk 3ds Max, Autodesk Revit and Epic Games’ Unreal Engine. SHoP also collaborates directly with architects at project kickoff, providing images and animations that facilitate quicker decision-making during the design process.
The team consistently integrates new technologies that allow them to explore untapped innovation opportunities, as well as boost research and development. Fan often incorporates real-time and traditional rendering, extended reality and AI into her creative workflows.
To capture all the details that bring the designs together, SHoP uses NVIDIA RTX A5500. Fan is also part of the NVIDIA RTX Ambassador Program, which is designed to amplify the work of professionals from diverse industries who are using RTX technology. Equipped with the latest capabilities of RTX, Fan hopes to continue pushing boundaries in real-time visualization, AI and digital twin applications.
All images courtesy of SHoP Architects.
Redefining Creative Experiences
3D models play a critical role as the single source of truth, which is why SHoP designers need advanced technology to help them create detailed models and visualizations without creativity or productivity slowdowns.
Previously, the team used CPU-based offerings, which limited the scope of work and research and development they could take on. But with RTX, designers can create and communicate complex designs while continuously collaborating with others.
By tapping into RTX A5500, Fan can prioritize efficiency and high rendering quality without worrying about compute power limitations.
“NVIDIA’s professional RTX GPUs are currently known as the industry standard for graphics cards solutions,” said Fan. “RTX provides us with the performance and power needed to do all the above without worrying about hardware constraints.”
The advanced features of the RTX GPUs allow SHoP designers to explore new ways of representation.
SHoP Architects’ projects have grown in scale, location and diversity, and Fan and her team are constantly learning and adapting from each project, drawing inspiration from diverse areas such as automotive, aviation, film and gaming.
Fan views RTX-powered tools as a means of opening up diverse approaches and solutions to be more widely adopted within the industry. And as an NVIDIA RTX Ambassador, she aims to push past technological boundaries by connecting with like-minded designers and creatives.
At one of the U.K.’s largest technology festivals, top enterprises and startups are this week highlighting their latest innovations, hosting workshops and celebrating the growing tech ecosystem based in the country’s southwest.
The Bristol Technology Festival today showcased the work of nine startups that recently participated in a challenge hosted by Digital Catapult — the U.K. authority on advanced digital technology — in collaboration with NVIDIA.
The challenge, which ran for four months, supported companies in developing a prototype or extending an innovation that could transform experiences using reality capture, real-time collaboration and creation, or cross-platform content delivery.
It’s part of MyWorld, an initiative for pioneering creative technology focused on the western U.K.
Each selected startup was given £50,000 to help develop projects that foster the advancement of generative AI, digital twins and other groundbreaking technologies for use in creative industries.
Lux Aeterna Explores Generative AI for Visual Effects
Emmy Award-winning independent visual effects studio Lux Aeterna — which is using gen AI and neural networks for VFX production — deployed its funds to develop a generative AI-powered text-to-image toolkit for creating maps, or 2D images used to represent aspects of a scene, object or effect.
At the Bristol Technology Festival, Lux Aeterna demonstrated this technology, powered by NVIDIA RTX 40 Series GPUs, with a focus on its ability to generate parallax occlusion maps, a method of creating the effect of depth for 3D textured surfaces.
“Our goal is to tackle the unique VFX challenges with bespoke AI-assisted solutions, and to put these tools of the future into the hands of our talented artists,” said James Pollock, creative technologist at Lux Aeterna. “NVIDIA’s insightful feedback on our work as a part of the MyWorld challenge has been invaluable in informing our strategy toward innovation in this rapidly changing space.”
Meaning Machine Brings AI to Game Characters, Dialogue
Meaning Machine, a studio pioneering gameplay that uses natural language AI, used its funds from the challenge to develop a generative AI system for in-game characters and dialogue. Its Game Consciousness technology enables in-game characters to accurately talk about their world, in real time, so that every line of dialogue reflects the game developer’s creative vision.
Meaning Machine’s demo at today’s showcase invited attendees to experience its interrogation game, “Dead Meat,” in which players must chat with an in-game character — a murder suspect — with the aim of manipulating them into giving a confession.
A member of the NVIDIA Inception program for cutting-edge startups, Meaning Machine powers its generative AI technology for game development using the NVIDIA NeMo framework for building, customizing and deploying large language models.
“NVIDIA NeMo enables us to deliver scalable model tuning and inference,” said Ben Ackland, cofounder and chief technology officer at Meaning Machine. “We see potential for Game Consciousness to transform blockbuster games — delivering next-gen characters that feel at home in bigger, deeper, more complex virtual worlds — and our collaboration with NVIDIA will help us make this a reality sooner.”
More Startups Showcase AI for Creative Industries
Additional challenge participants that hosted demos today at the Bristol Technology Festival include:
Black Laboratory, an NVIDIA Inception member demonstrating a live puppet-performance capture system, puppix, that can seamlessly transfer the physicality of puppets to digital characters.
IMPRESS, which is developing an AI-powered launchpad for self-publishing indie video games. It offers data-driven market research for game development, marketing campaign support, press engagement tools and more.
Larkhall, which is expanding Otto, its AI system that generates live, reactive visuals based on musical performances, as well as automatic, expressive captioning for speech-based performances.
Motion Impossible, which is building a software platform for centralized control of its AGITO systems — free-roaming, modular, camera dolly systems for filmmaking.
“NVIDIA’s involvement in the MyWorld challenge, led by Digital Catapult, has created extraordinary value for the participating teams,” said Sarah Addezio, senior innovation partner and MyWorld program lead at Digital Catapult. “We’ve seen the benefit of our cohort having access to industry-leading technical and business-development expertise, elevating their projects in ways that would not have been possible otherwise.”
Put the pedal to the metal this GFN Thursday as Forza Motorsport leads 23 new games in the cloud.
Plus, Acer’s Predator Connect W6 wireless router is the newest addition to the GeForce NOW Recommended program, with easy cloud gaming quality-of-service (QoS) settings built in to give Ultimate members the best streaming experience.
No Brakes, No Limits, No Downloads
Take the pole position thanks to the cloud. Turn 10 Studios’ Forza Motorsport joins the GeForce NOW library this week.
The realistic racing sim features over 500 realistically rendered cars across 20 world-famous tracks, each with dynamic time-of-day, weather and driving conditions, so no two laps will ever be the same. Unlock more than 800 performance upgrades and outbuild the competition, either online or against new, highly competitive AI racers in the single-player Builders Cup Career Mode.
Stream every turn at GeForce quality on nearly any device and max out image quality thanks to the cloud. Ultimate members can get in gear at up to 4K resolution and up to 120 frames per second for the most realistic driving experience.
Need for Speed
Better together.
Say hello to the newest addition to the GeForce NOW Recommended program.
GeForce NOW members have access to the best cloud streaming experience, and Acer’s newly released Predator Connect W6 wireless router is built to support it, providing the ultrafast, stable gaming environment needed for 4K cloud streaming.
NVIDIA and Acer have collaborated to create a best-in-class streaming experience, creating a special QoS option in the Predator Connect that prioritizes cloud gaming network traffic for maximized speed. The software underwent six months of rigorous testing, ensuring it can consistently deliver the high-performance offerings of a GeForce NOW Ultimate membership, including 4K 120 fps gaming with ultra-low latency.
The Predator Connect W6 also includes tri-band network support with the latest wireless technologies, like WiFi 6E. Pair it with a GeForce NOW Ultimate membership for an unrivaled cloud gaming experience.
Play On
Live long and prosper in the cloud.
Get the weekend started with the new weekly games list:
Forza Motorsport (New release on Steam, Xbox and available on PC Game Pass, Oct. 12)
From Space (New release on Xbox, available on PC Game Pass, Oct. 12)
Hotel: A Resort Simulator (New release on Steam, Oct. 12)
Saltsea Chronicles (New release on Steam, Oct. 12)
Star Trek: Infinite (New release on Steam, Oct. 12)
Tribe: Primitive Builder (New release on Steam, Oct. 12)
Developers have a new AI-powered steering wheel to help them hug the road while they drive powerful large language models (LLMs) to their desired locations.
NVIDIA NeMo SteerLM lets companies define knobs to dial in a model’s responses as it’s running in production, a process called inference. Unlike current methods for customizing an LLM, it lets a single training run create one model that can serve dozens or even hundreds of use cases, saving time and money.
NVIDIA researchers created SteerLM to teach AI models what users care about, like road signs to follow in their particular use cases or markets. These user-defined attributes can gauge nearly anything — for example, the degree of helpfulness or humor in the model’s responses.
One Model, Many Uses
The result is a new level of flexibility.
With SteerLM, users define all the attributes they want and embed them in a single model. Then they can choose the combination they need for a given use case while the model is running.
For example, a custom model can now be tuned during inference to the unique needs of, say, an accounting, sales or engineering department or a vertical market.
The method also enables a continuous improvement cycle. Responses from a custom model can serve as data for a future training run that dials the model into new levels of usefulness.
Saving Time and Money
To date, fitting a generative AI model to the needs of a specific application has been the equivalent of rebuilding an engine’s transmission. Developers had to painstakingly label datasets, write lots of new code, adjust the hyperparameters under the hood of the neural network and retrain the model several times.
SteerLM replaces those complex, time-consuming processes with three simple steps:
Using a basic set of prompts, responses and desired attributes, customize an AI model that predicts how those attributes will perform.
Automatically generate a dataset using this model.
Train the model with that dataset using standard supervised fine-tuning techniques.
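As a rough illustration of the attribute-conditioned fine-tuning idea (not the actual NeMo SteerLM code), the sketch below formats training samples with attribute tags and shows how an inference-time prompt can dial those attributes up or down. The attribute names, scale and tag format are hypothetical.

```python
# Rough illustration of attribute-conditioned fine-tuning data (not the actual
# NeMo SteerLM implementation). Attribute names, 0-9 scale and tag format are
# hypothetical; in the real method these annotations are predicted automatically
# before supervised fine-tuning.

def format_sample(prompt: str, response: str, attributes: dict) -> str:
    """Prepend attribute tags so the model learns to condition on them."""
    tags = ",".join(f"{name}:{value}" for name, value in attributes.items())
    return f"<attributes>{tags}</attributes>\nUser: {prompt}\nAssistant: {response}"

# Training time: each labeled response carries its attribute scores.
train_text = format_sample(
    "Explain what a GPU does.",
    "A GPU is a processor built for massively parallel work such as graphics and AI.",
    {"helpfulness": 9, "humor": 1, "verbosity": 3},
)

# Inference time: the same knobs steer the deployed model toward a different
# style, e.g. a more playful answer, without retraining.
steered_prompt = format_sample(
    "Explain what a GPU does.", "", {"helpfulness": 9, "humor": 8, "verbosity": 5}
)

print(train_text)
print(steered_prompt)
```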
Many Enterprise Use Cases
Developers can adapt SteerLM to nearly any enterprise use case that requires generating text.
With SteerLM, a company might produce a single chatbot it can tailor in real time to customers’ changing attitudes, demographics or circumstances in the many vertical markets or geographies it serves.
SteerLM also enables a single LLM to act as a flexible writing co-pilot for an entire corporation.
For example, lawyers can modify their model during inference to adopt a formal style for their legal communications. Or marketing staff can dial in a more conversational style for their audience.
Game On With SteerLM
To show the potential of SteerLM, NVIDIA demonstrated it on one of its classic applications — gaming (see the video below).
Today, some games pack dozens of non-playable characters — characters that the player can’t control — which mechanically repeat prerecorded text, regardless of the user or situation.
SteerLM makes these characters come alive, responding with more personality and emotion to players’ prompts. It’s a tool game developers can use to unlock unique new experiences for every player.
The Genesis of SteerLM
The concept behind the new method arrived unexpectedly.
“I woke up early one morning with this idea, so I jumped up and wrote it down,” recalled Yi Dong, an applied research scientist at NVIDIA who initiated the work on SteerLM.
While building a prototype, he realized a popular model-conditioning technique could also be part of the method. Once all the pieces came together and his experiment worked, the team helped articulate the method in four simple steps.
It’s the latest advance in model customization, a hot area in AI research.
“It’s a challenging field, a kind of holy grail for making AI more closely reflect a human perspective — and I love a new challenge,” said the researcher, who earned a Ph.D. in computational neuroscience at Johns Hopkins University, then worked on machine learning algorithms in finance before joining NVIDIA.
Get Hands on the Wheel
SteerLM is available as open-source software for developers to try out today. They can also get details on how to experiment with a Llama-2-13b model customized using the SteerLM method.
For users who want full enterprise security and support, SteerLM will be integrated into NVIDIA NeMo, a rich framework for building, customizing and deploying large generative AI models.
The SteerLM method works on all models supported on NeMo, including popular community-built pretrained LLMs such as Llama-2 and BLOOM.
Generative AI is helping creatives across many industries bring ideas to life at unprecedented speed.
This technology will be on display at Adobe MAX, running through Thursday, Oct. 12, in person and virtually.
Adobe is putting the power of generative AI into the hands of creators with the release of Adobe Firefly. Using NVIDIA GPUs, Adobe is bringing new opportunities to artists and others looking to accelerate generative AI — unleashing generative AI enhancements for millions of users. Firefly is now available as a standalone app and integrated with other Adobe apps.
Recent updates to Adobe’s most popular apps — including for Adobe Premiere Pro, Lightroom, After Effects and Substance 3D Stager, Modeler and Sampler — bring new AI features to creators. And GeForce RTX and NVIDIA RTX GPUs help accelerate these apps and AI effects, providing massive time savings.
Video editors can use AI to improve dialogue quality with the Enhance Speech (beta) feature and work faster with GPU-accelerated decoding of ARRIRAW camera-original digital film clips, which runs up to 60% faster on RTX GPUs than on an Apple MacBook Pro 16 with M2 Max in Premiere Pro. Plus, they can take advantage of improved rotoscoping quality with the Next-Gen Roto Brush (version 3.0) feature, now available in After Effects.
Photographers and 2D artists now have new Lens Blur effects in Lightroom, complementing ongoing optimizations that improve performance in its Select Object, Select People and Select Sky features.
These advanced features are further enhanced by NVIDIA Studio Drivers, free for RTX GPU owners, which add performance and reliability. The October Studio Driver is available for download now.
Finally, 3D artist SouthernShotty returns to In the NVIDIA Studio to share his 3D montage of beautifully hand-crafted worlds — built with Adobe apps and Blender and featuring AI-powered workflows accelerated by his GeForce RTX 4090 Laptop GPU.
MAXimizing Creativity
Adobe Creative Cloud and Substance 3D apps run fastest on NVIDIA RTX GPUs — and recent updates show continued time-saving performance gains.
Tested on NVIDIA Studio laptops with GeForce RTX 4050 and 4090 Laptop GPUs with Intel Core i9 13th Gen; MacBook Pro 14″ with M2 Pro; and MacBook Pro 16″ with M2 Max. Performance measures total time to apply Enhanced Speech effect to video clip within Adobe Premiere Pro.
Premiere Pro’s Enhance Speech feature, currently in beta, uses AI to remove noise and improve the quality of dialogue clips so that they sound professionally recorded. Tasks are completed 8x faster with a GeForce RTX 4090 Laptop GPU compared with a MacBook Pro 16 with M2 Max.
Tested on NVIDIA Studio laptops with GeForce RTX 4050 and 4090 Laptop GPUs with Intel Core i9 13th Gen; MacBook Pro 14″ with M2 Pro; and MacBook Pro 16″ with M2 Max. Performance measures total time to export ARRIRAW footage within Adobe Premiere Pro.
Premiere Pro professionals use ARRIRAW footage — the only format that fully retains a camera’s natural color response and great exposure latitude. ARRIRAW video exports can be done 1.6x faster on GeForce RTX 4090 Laptop GPUs than on the MacBook Pro 16 with M2 Max.
Additionally, After Effects users can access the Next-Gen Roto Brush feature in beta, powered by a brand-new AI model. It’s ideal for isolating subjects such as overlapping limbs, hair and other transparencies more easily, saving time.
RTX GPUs shine in 3D workloads. Substance 3D Stager’s new AI-powered, GPU-accelerated denoiser allows almost instantaneous photorealistic rendering.
Substance 3D Modeler’s recent Hardware Ray Tracing in Capture Mode capability uses NVIDIA technology to export high-quality screenshots 2.4x faster than before.
Meanwhile, Substance 3D Sampler’s AI UpScale feature increases detail for low-quality textures and its Image to Material feature makes it easier to create high-quality materials from a single photograph.
Lens Blur in Adobe Lightroom.
Photographers have long used the popular Super Resolution feature in Adobe Camera Raw, which is supported in Photoshop and runs 3x faster on a GeForce RTX 4090 Laptop GPU than on a MacBook Pro 16 with M2 Max. Now, Lightroom users have AI-driven capabilities with the Lens Blur feature for applying realistic lens blur effects, Point Color for precise color adjustments to speed up color correction, and High Dynamic Range Output for edits and renders in an HDR color space.
Adobe Firefly Glows #76B900
Adobe Firefly provides users with generative AI features, utilizing NVIDIA GPUs in the cloud.
Firefly features such as Generative Fill — to add, remove and expand content in Photoshop, and Generative Expand to expand scenes with generative content — help complete tasks instantly in Adobe Photoshop.
Adobe Firefly-powered feature Generative Fill in Adobe Photoshop.
Adobe Illustrator offers the Generative Recolor feature, which enables graphic designers to explore a wide variety of colors, palettes and themes in their work without having to do tedious manual recoloring. Discovering the perfect combination of colors now takes just a few seconds.
Adobe Firefly-powered feature Generative Recolor in Adobe Illustrator.
Adobe Express offers the Text to Image feature to create incredible imagery from standard prompts, and the Text Effects feature helps stylize standard text for use in creating flyers, resumes, social media reels and more.
These powerful AI capabilities were developed with the creative community in mind — guided by AI ethics principles of content and data transparency — to ensure ethically and morally responsible output.
NVIDIA technology will continue to support new Adobe Firefly-powered features from the cloud as they become available to photographers, illustrators, designers, video editors, 3D artists and more.
MAXed Out AI Fun
Independent filmmaker and artist SouthernShotty knows the challenges of producing content alone and how daunting the process can be.
SouthernShotty’s artwork invokes childlike emotions with impressive visuals.
“I’m a big fan of the NVIDIA Studio Driver support, because it adds stability and reliability.” – SouthernShotty
As such, SouthernShotty is always looking for tools and techniques to ease the creative process. To accelerate his workflow, he combined new Adobe AI capabilities accelerated by his GeForce RTX 4090 GPU to achieve incredible efficiency.
The artist kept his 3D models fairly simple, focusing on textures to ensure that the world would match his vision. He deployed one of his favorite features, the AI-powered Image to Material in Adobe Substance 3D Sampler, to convert images to physically based rendering textures.
Applying textures in Blender.
“It’s so fast that I can pretty much preview my entire scene in real time and see the final result before I ever hit the render button.” – SouthernShotty
RTX-accelerated light and ambient occlusion baking allowed SouthernShotty to realize the desired visual effect in seconds.
His RTX GPU continued to play an essential role as he used Blender Cycles’ RTX-accelerated OptiX ray tracing in the viewport for interactive, photorealistic rendering.
As the 3D montage progresses, the main character appears and reappears in several new environments. Each new location is featured for only a second or two, but SouthernShotty still needed to create a fully fleshed out environment for each.
Normally this would take a substantial amount of time, but an AI assist from Adobe Firefly helped speed the process.
Adobe is committed to developing generative AI responsibly, with creators at the center.
SouthernShotty opened the app, entered “fantasy mushroom forest” as the text prompt, then made minor adjustments: the digital art style, golden hour for lighting and wide-angle settings for composition. When satisfied with the result, he downloaded the image for further editing in Photoshop.
An entirely new image is generated in minutes with Adobe Firefly, powered by GeForce RTX GPUs.
SouthernShotty then used the AI-powered Generative Fill feature to remove unwanted background elements. He used the Neural Filters optimization to color match a castle element added in the background, then used Generative Fill again to effortlessly blend the castle in with the trees.
Finally, SouthernShotty used the Neural Filters optimization in the new Lens Blur feature to add depth to the scene — first exporting depth as a separate layer and then editing in Blender to complete the scene.
Editing the depth map in Blender.
“My entire process was sprinkled with GPU-acceleration and AI-enabled features,” said SouthernShotty. “In Blender, the GeForce RTX 4090 GPU accelerated everything — but especially the live render view in my viewport, which was crucial to visualizing my scenes.”
Check out SouthernShotty’s YouTube channel for Blender tutorials on characters, animation, rigging and more.