On Razer’s Edge: VFX Star Surfaced Studio Creates Stunning Sci-Fi World This Week ‘In The NVIDIA Studio’

Visual effects artist Surfaced Studio returns to In the NVIDIA Studio to share his real-world VFX project, created on a brand new Razer Blade 16 Mercury Edition laptop powered by GeForce RTX 4080 graphics.

Surfaced Studio creates photorealistic, digitally generated imagery that seamlessly integrates visual effects into short films, television and console gaming.

He found inspiration for a recent sci-fi project by experimenting with 3D transitions: using a laptop screen as a gateway between worlds, like the portals from Dr. Strange or the transitions from The Matrix.

Break the Rules and Become a Hero

Surfaced Studio aimed to create an immersive experience with his latest project.

“I wanted to get my audience to feel surprised getting ‘sucked into’ the 3D world,” he explained.

Surfaced Studio began with a simple script, alongside sketches of brainstormed ideas and played out shots. “This usually helps me think through how I’d pull each effect off and whether they’re actually possible,” he said.

From there, he shot video and imported the footage into Adobe Premiere Pro for a rough test edit. Then, Surfaced Studio selected the most suitable clips for use.

He cleaned up the footage in Adobe After Effects, stabilizing shots with the Warp Stabilizer tool and removing distracting background elements with the Mocha Pro tool. Both effects were accelerated by his GeForce RTX 4080 Laptop GPU.

After, he created a high-contrast version of the shot for 3D motion tracking in Blender.

3D motion tracking in Blender.

Motion tracking is used to apply tracking data to 3D objects. “This was pretty tricky, as it’s a 16-second gimbal shot with fast moving sections and a decent camera blur,” said Surfaced Studio. “It took me a good few days to get a decent track and fix issues with manual keyframes and ‘patches’ between different sections.”

A gimbal shot uses sensors and motors to stabilize and support the camera.

Surfaced Studio exported footage of the animated camera into a 3D FBX file to use in Unreal Engine and set it up in the Cyberpunk High City pack, which contains a modular constructor for creating highly detailed sci-fi city streets, alleys and blocks.
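
For those curious what that hand-off looks like in practice, below is a minimal Blender Python sketch of exporting a solved tracking camera as an FBX for Unreal Engine. The object name "Camera", the output path and the export options are illustrative assumptions, not Surfaced Studio's actual settings.

```python
# Hedged sketch: export a solved tracking camera from Blender as an FBX for Unreal
# Engine. The object name "Camera" and the output path are illustrative assumptions.
import bpy

cam = bpy.data.objects["Camera"]                # camera carrying the tracking solve

# Select only the camera so the exporter writes a single animated object.
for obj in bpy.context.scene.objects:
    obj.select_set(obj is cam)
bpy.context.view_layer.objects.active = cam

bpy.ops.export_scene.fbx(
    filepath="/tmp/tracked_camera.fbx",         # assumed output location
    use_selection=True,                         # export only the selected camera
    object_types={"CAMERA"},
    bake_anim=True,                             # bake the keyframed camera motion
)
```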

“I’m not much of a 3D artist so using [the Cyberpunk High City pack] was the best option to complete the project on this side of the century,” the artist said. He then made modifications to the cityscape, reducing flickering lights and adding buildings, custom fog and Razer and NVIDIA Studio banners. He even added a billboard with an ad encouraging kindness to cats. “It’s so off to the side of most shots I doubt anyone actually noticed,” noted a satisfied Surfaced Studio.

A PSA from Surfaced Studio: be nice to cats.

Learning 3D effects can seem overwhelming due to the breadth of knowledge needed across multiple apps and distinct workflows. But Surfaced Studio stresses the importance of first understanding workflow hierarchies — and how one feeds into another — as an approachable entry point to choosing a specialty suited to a creator’s unique passion and natural talent.

Surfaced Studio was able to seamlessly run his scene in Unreal Engine at full 4K resolution — with all textures and materials loading at maximum graphical fidelity — thanks to the GeForce RTX 4080 Laptop GPU in his Razer Blade 16. The GPU also supports NVIDIA DLSS, which increases viewport interactivity by using AI to upscale frames rendered at a lower resolution while retaining high-fidelity detail.

Moving virtual objects in Unreal Engine.

Surfaced Studio then took the FBX file with the exported camera tracking data into Unreal Engine, matching his ‘3D camera’ to the real-world camera used to film the laptop. “This was the crucial step in creating the ‘look-through’ effect I wanted,” he said.

Once satisfied with the look, Surfaced Studio exported all sequences from Unreal Engine as multilayer EXR files — including a Z-depth pass, a grayscale map of each pixel’s distance from the camera used to create a depth-of-field effect — to separate visual elements from the 3D footage.
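
To make the Z-depth idea concrete, here is a small Python sketch that reads a depth channel from a multilayer EXR and turns it into a blur-weight matte. This is not the artist's actual compositing, which happened in After Effects; the file name, the channel name "Z" and the focus values are illustrative assumptions.

```python
# Hedged sketch: read a Z-depth pass from a multilayer EXR and build a 0..1 matte
# that weights a depth-of-field blur. File name, channel name and focus values are
# assumptions for illustration only.
import OpenEXR
import Imath
import numpy as np

exr = OpenEXR.InputFile("cyberpunk_city.0001.exr")
window = exr.header()["dataWindow"]
width = window.max.x - window.min.x + 1
height = window.max.y - window.min.y + 1

raw = exr.channel("Z", Imath.PixelType(Imath.PixelType.FLOAT))
depth = np.frombuffer(raw, dtype=np.float32).reshape(height, width)

# Pixels far from the assumed focus distance get a stronger blur weight.
focus, falloff = 12.0, 25.0
blur_weight = np.clip(np.abs(depth - focus) / falloff, 0.0, 1.0)
```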

Composite work in Adobe After Effects.

Surfaced Studio went back to After Effects for the final composites. He added distortion effects and some glow for the transition from the physical screen to the 3D world.

Cleaning up screen tracking in Adobe After Effects.

Then, Surfaced Studio again used the Z-depth pass to extract the 3D cars and overlay them onto the real footage.

Composite work in Adobe After Effects.

He exported the final project into Premiere Pro and added sound effects, music and a few color correction edits.

Final edits in Adobe Premiere Pro.

With the GeForce RTX 4080’s dual encoders, Surfaced Studio nearly halved his video encoding and export times in Adobe Premiere Pro. He has been using NVIDIA GPUs for over a decade, citing their widespread integration with commonly used tools.

“NVIDIA has simply done a better job than its competitors to reach out to and integrate with other companies that create creative apps,” said Surfaced Studio. “CUDA and RTX are widespread technologies that you find in most popular creative apps to accelerate workflows.”

When he’s not working on VFX projects, Surfaced Studio also uses his laptop to game. The Razer Blade 16 has the first dual-mode mini-LED display with two native resolutions: UHD+ at 120Hz — suited for VFX workflows — and FHD at 240Hz — ideal for gamers (or creators who like gaming).

Powerful, elegant, beautiful: the Razer Blade 16 Mercury Edition.

For a limited time, gamers and creators can get the critically acclaimed game Alan Wake 2 with the purchase of the Razer Blade 16 powered by GeForce RTX 40 Series graphics cards.

Surfaced Studio’s VFX tutorials are available on YouTube, where he covers filmmaking, VFX and 3D techniques using Adobe After Effects, Blender, Photoshop, Premiere Pro and other apps.

VFX artist Surfaced Studio.

Join the #SeasonalArtChallenge

Don’t forget to join the #SeasonalArtChallenge by submitting spooky Halloween-inspired art in October and harvest- and fall-themed pieces in November.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.

Read More

Street View to the Rescue: Deep Learning Paves the Way to Safer Buildings

Images such as those in Google Street View are taking on a new purpose in the hands of University of Florida Assistant Professor of Artificial Intelligence Chaofeng Wang.

He’s using them, along with deep learning, in a research project to automate the evaluation of urban buildings. The project aims to help governments mitigate natural disaster damage by providing the information needed for decision-makers to bolster building structures or perform post-disaster recovery.

After a natural disaster such as an earthquake, local governments send teams to check and evaluate building conditions. Done manually, going through a city’s full building stock can take months.

Wang’s project uses AI to accelerate the evaluation process — cutting the time needed to a few hours. The AI model is trained using images sourced from Google Street View and local governments to assign scores to buildings based on Federal Emergency Management Agency (FEMA) P-154 standards, which provide assessment guidelines based on factors like wall material, structure type, building age and more. Wang also collaborated with the World Bank Global Program for Resilient Housing to collect images and perform annotations, which were used to improve the model.

The collected images are placed in a data repository. The AI model reads the repository and performs inference on the images — a process accelerated by NVIDIA DGX A100 systems.
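
As a rough illustration of what GPU-accelerated batch inference over such a repository might look like, here is a hedged PyTorch sketch. The model architecture, the five-way score head, the folder layout and all file paths are assumptions, not the project's actual code.

```python
# Hedged sketch of batched GPU inference over a street-view image repository.
# Architecture, score bins and paths are illustrative assumptions.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

device = "cuda" if torch.cuda.is_available() else "cpu"

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# ImageFolder assumes the repository is organized as subdirectories of images.
repo = datasets.ImageFolder("/data/street_view_repo", transform=preprocess)
loader = DataLoader(repo, batch_size=64, num_workers=8)

model = models.resnet50(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 5)   # e.g. five FEMA-style score bins
model.load_state_dict(torch.load("building_scores.pt", map_location=device))
model.to(device).eval()

scores = []
with torch.no_grad():
    for images, _ in loader:
        logits = model(images.to(device, non_blocking=True))
        scores.append(logits.argmax(dim=1).cpu())
```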

“Without NVIDIA GPUs, we wouldn’t have been able to do this,” Wang said. “They significantly accelerate the process, ensuring timely results.”

Wang used the DGX A100 nodes in the University of Florida’s supercomputer, HiPerGator. HiPerGator is one of the world’s fastest AI supercomputers in academia, delivering 700 petaflops of AI performance, and was built with the support of NVIDIA founder and UF alumnus Chris Malachowsky and hardware, software, training and services from NVIDIA.

The AI model’s output is compiled into a database that feeds into a web portal, which shows information — including the safety assessment score, building type and even roof or wall material — in a map-based format.

Wang’s work was funded by the NVIDIA Applied Research Accelerator Program, which supports research projects that have the potential to make a real-world impact through the deployment of NVIDIA-accelerated applications adopted by commercial and government organizations.

A Helping Eye

Wang says that the portal can serve different needs depending on the use case. To prepare for a natural disaster, a government can use predictions solely from street view images.

“Those are static images — one example is Google Street View images, which get updated every several years,” he said. “But that’s good enough for collecting information and getting a general understanding about certain statistics.”

But for rural areas or developing regions, where such images aren’t available or frequently updated, governments can collect the images themselves. With NVIDIA GPUs accelerating the inference, building assessments can be delivered in a timely way, helping speed up analyses.

Wang also suggests that, with enough refinement, his research could create ripples in the urban planning and insurance industries.

The project is currently being tested by a few local governments in Mexico and is garnering interest in some African, Asian and South American countries. In its current state, it can achieve over 85% accuracy in its assessment scores, per FEMA P-154 standards.

Survey of the Land

One challenge Wang cites is the variation in urban landscapes across countries. Different regions have their own cultural and architectural styles. If the AI model isn’t trained on a large or diverse enough pool of images, it can be thrown off by factors like paint color when performing wall-material analysis. Another challenge is variation in urban density.

“It is a very general limitation of current AI technology,” Wang said. “In order to be useful, it requires enough training data to represent the distribution of the real world, so we’re putting efforts into the data collection process to solve the generalization issue.”

To overcome this challenge, Wang aims to train and test the model for more cities. So far, he’s tested about eight cities in different countries.

“We need to generate more detailed and high-quality annotations to train the model with,” he said. “That is the way we can improve the model in the future so that it can be used more widely.”

Wang’s goal is to get the project to a point where it can be deployed as a service for more general industry use.

“We are creating application programming interfaces that can estimate and analyze buildings and households to allow seamless integration with other products,” he said. “We are also building a user-friendly application that all government agencies and organizations can use.”

Read More

For the World to See: Nonprofit Deploys GPU-Powered Simulators to Train Providers in Sight-Saving Surgery

GPU-powered surgical-simulation devices are helping train more than 2,000 doctors a year in lower-income countries to treat cataract blindness, the world’s leading cause of blindness, thanks to the nonprofit HelpMeSee.

While cataract surgery has a success rate of around 99%, many patients in low- and middle-income countries lack access to the common procedure due to a severe shortage of ophthalmologists. An estimated 90% of the 100 million people affected by cataract-related visual impairment or blindness are in these locations.

By training more healthcare providers — including those without a specialty in ophthalmology — to treat cataracts, HelpMeSee improves the quality of life for patients such as a mother of two young children in Bhiwandi, near Mumbai, India, who was blinded by cataracts in both eyes.

“After the surgery, her vision improved dramatically and she was able to take up a job, changing the course of her entire family,” said Dr. Chetan Ahiwalay, chief instructor and subject-matter expert for HelpMeSee in India. “She and her husband are now happily raising their kids and leading a healthy life. These are the things that keep us going as doctors.”

HelpMeSee’s simulator devices use NVIDIA RTX GPUs to render high-quality visuals, providing a more realistic training environment for doctors to hone their surgical skills. To further improve the trainee experience, NVIDIA experts are working with the HelpMeSee team to improve rendering performance, increase visual realism and augment the simulator with next-generation technologies such as real-time ray tracing and AI.

Tackling Treatable Blindness With Accessible Training

High-income countries have 18x more ophthalmologists per million residents than low-income countries. That coverage gap, which is far wider still in certain countries, makes it harder for those in thinly resourced areas to receive treatment for avoidable blindness.

HelpMeSee’s devices can train doctors on multiple eye procedures using immersive tools inspired by flight simulators used in aviation. The team trains doctors in countries including India, China, Madagascar, Mexico and the U.S., and rolls out multilingual training each year for new procedures.

The eye surgery simulator offers realistic 3D visuals, haptic feedback, performance scores and the opportunity to attempt a step of the procedure multiple times until the trainee achieves proficiency. Qualified instructors like Dr. Ahiwalay travel to rural and urban areas to deliver the training through structured courses — and help surgeons transition from the simulators to live surgeries.

Doctors training to perform cataract surgery
During a training session, doctors learn to perform manual small-incision cataract surgery.

“We’re lowering the barrier for healthcare practitioners to learn these specific skills that can have a profound impact on patients,” said Dr. Bonnie An Henderson, CEO of HelpMeSee, which is based in New York. “Simulation-based training will improve surgical skills while keeping patients safe.”

Looking Ahead to AI, Advanced Rendering 

HelpMeSee works with Surgical Science, a supplier of medical virtual-reality simulators, based in Gothenburg, Sweden, to develop the 3D models and real-time rendering for its devices. Other collaborators — Strasbourg, France-based InSimo and Pune, India-based Harman Connected Services — develop the physics-based simulations and user interface, respectively. 

“Since there are many crucial visual cues during eye surgery, the simulation requires high fidelity,” said Sebastian Ullrich, senior manager of software development at Surgical Science, who has worked with HelpMeSee for years. “To render a realistic 3D representation of the human eye, we use custom shader materials with high-resolution textures to represent various anatomical components, mimic optical properties such as refraction, use order-independent transparency sorting and employ volume rendering.”

NVIDIA RTX GPUs support 3D volume rendering, stereoscopic rendering and depth sorting algorithms that provide a realistic visual experience for HelpMeSee’s trainees. Working with NVIDIA, the team is investigating AI models that could provide trainees with a real-time analysis of the practice procedure and offer recommendations for improvement.

Watch a demo of HelpMeSee’s cataract surgery training simulation.

Subscribe to NVIDIA healthcare news.

Read More

Eureka! NVIDIA Research Breakthrough Puts New Spin on Robot Learning

A new AI agent developed by NVIDIA Research that can teach robots complex skills has trained a robotic hand to perform rapid pen-spinning tricks — for the first time as well as a human can.

The stunning prestidigitation, showcased in the video above, is one of nearly 30 tasks that robots have learned to expertly accomplish thanks to Eureka, which autonomously writes reward algorithms to train bots.

Eureka has also taught robots to open drawers and cabinets, toss and catch balls, and manipulate scissors, among other tasks.

The Eureka research, published today, includes a paper and the project’s AI algorithms, which developers can experiment with using NVIDIA Isaac Gym, a physics simulation reference application for reinforcement learning research. Isaac Gym is built on NVIDIA Omniverse, a development platform for building 3D tools and applications based on the OpenUSD framework. Eureka itself is powered by the GPT-4 large language model.

“Reinforcement learning has enabled impressive wins over the last decade, yet many challenges still exist, such as reward design, which remains a trial-and-error process,” said Anima Anandkumar, senior director of AI research at NVIDIA and an author of the Eureka paper. “Eureka is a first step toward developing new algorithms that integrate generative and reinforcement learning methods to solve hard tasks.”

AI Trains Robots

Eureka-generated reward programs — which enable trial-and-error learning for robots — outperform expert human-written ones on more than 80% of tasks, according to the paper. This leads to an average performance improvement of more than 50% for the bots.

Robot arm taught by Eureka to open a drawer.

The AI agent taps the GPT-4 LLM and generative AI to write software code that rewards robots for reinforcement learning. It doesn’t require task-specific prompting or predefined reward templates — and readily incorporates human feedback to modify its rewards for results more accurately aligned with a developer’s vision.

Using GPU-accelerated simulation in Isaac Gym, Eureka can quickly evaluate the quality of large batches of reward candidates for more efficient training.

Eureka then constructs a summary of the key stats from the training results and instructs the LLM to improve its generation of reward functions. In this way, the AI is self-improving. It’s taught all kinds of robots — quadruped, bipedal, quadrotor, dexterous hands, cobot arms and others — to accomplish all kinds of tasks.
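
The generate-evaluate-refine loop described above can be sketched in a few lines of Python. This is not the published Eureka code: evaluate_reward_in_isaac_gym is a hypothetical stand-in for the GPU-accelerated rollout step, the prompts are illustrative only, and the sketch assumes an OPENAI_API_KEY is set for the GPT-4 call.

```python
# Hedged sketch of an LLM-in-the-loop reward search, loosely mirroring the loop
# described above. Not the published Eureka code; the simulation step is a stub.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def evaluate_reward_in_isaac_gym(reward_code: str) -> dict:
    """Hypothetical stand-in: compile the reward function, run short GPU-accelerated
    rollouts in simulation, and return summary statistics. Returns dummy values here."""
    return {"success_rate": 0.0, "episode_length": 0}

task_description = "Spin a pen between the fingers of a dexterous robotic hand."
feedback = "No previous attempt."

for iteration in range(5):
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Write a Python reward function for RL."},
            {"role": "user", "content": f"Task: {task_description}\nFeedback: {feedback}"},
        ],
    )
    reward_code = response.choices[0].message.content
    stats = evaluate_reward_in_isaac_gym(reward_code)
    # Summarize training statistics so the LLM can improve its next candidate.
    feedback = f"Iteration {iteration}: success_rate={stats['success_rate']:.2f}"
```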

The research paper provides in-depth evaluations of 20 Eureka-trained tasks, based on open-source dexterity benchmarks that require robotic hands to demonstrate a wide range of complex manipulation skills.

The results from nine Isaac Gym environments are showcased in visualizations generated using NVIDIA Omniverse.

Humanoid robot learns a running gait via Eureka.

“Eureka is a unique combination of large language models and NVIDIA GPU-accelerated simulation technologies,” said Linxi “Jim” Fan, senior research scientist at NVIDIA, who’s one of the project’s contributors. “We believe that Eureka will enable dexterous robot control and provide a new way to produce physically realistic animations for artists.”

It’s breakthrough work bound to get developers’ minds spinning with possibilities, adding to recent NVIDIA Research advancements like Voyager, an AI agent built with GPT-4 that can autonomously play Minecraft.

NVIDIA Research comprises hundreds of scientists and engineers worldwide, with teams focused on topics including AI, computer graphics, computer vision, self-driving cars and robotics.

Learn more about Eureka and NVIDIA Research.

Read More

Next-Level Computing: NVIDIA and AMD Deliver Powerful Workstations to Accelerate AI, Rendering and Simulation

To enable professionals worldwide to build and run AI applications right from their desktops, NVIDIA and AMD are powering a new line of workstations equipped with NVIDIA RTX Ada Generation GPUs and AMD Ryzen Threadripper PRO 7000 WX-Series CPUs.

Bringing together the highest levels of AI computing, rendering and simulation capabilities, these new platforms enable professionals to efficiently tackle the most resource-intensive, large-scale AI workflows locally.

Bringing AI Innovation to the Desktop

Advanced AI tasks typically require data-center-level performance. Training a large language model with a trillion parameters, for example, takes thousands of GPUs running for weeks, though research is underway to reduce model size and enable model training on smaller systems while still maintaining high levels of AI model accuracy.

The new NVIDIA RTX GPU and AMD CPU-powered AI workstations provide the power and performance required to train these smaller models and fine-tune them locally, helping offload data center and cloud resources for AI development tasks. The devices let users select single- or multi-GPU configurations as required for their workloads.

Smaller trained AI models also provide the opportunity to use workstations for local inferencing. RTX GPU and AMD CPU-powered workstations can be configured to run these smaller AI models for inference serving for small workgroups or departments.

With up to 48GB of memory in a single NVIDIA RTX GPU, these workstations offer a cost-effective way to reduce compute load on data centers. And when professionals do need to scale training and deployment from these workstations to data centers or the cloud, the NVIDIA AI Enterprise software platform enables seamless portability of workflows and toolchains.

RTX GPU and AMD CPU-powered workstations also enable cutting-edge visual workflows. With accelerated computing power, the new workstations enable highly interactive content creation, industrial digitalization, and advanced simulation and design.

Unmatched Power, Performance and Flexibility

AMD Ryzen Threadripper PRO 7000 WX-Series processors provide the CPU platform for the next generation of demanding workloads. The processors deliver a significant increase in core count — up to 96 cores per CPU — and industry-leading maximum memory bandwidth in a single socket.

Combining them with the latest NVIDIA RTX Ada Generation GPUs brings unmatched power and performance in a workstation. The GPUs enable up to 2x the performance in ray tracing, AI processing, graphics rendering and computational tasks compared to the previous generation.

Ada Generation GPU options include the RTX 4000 SFF, RTX 4000, RTX 4500, RTX 5000 and RTX 6000. They’re built on the NVIDIA Ada Lovelace architecture and feature up to 142 third-generation RT Cores, 568 fourth-generation Tensor Cores and 18,176 latest-generation CUDA cores.

From architecture and manufacturing to media and entertainment and healthcare, professionals across industries will be able to use the new workstations to tackle challenging AI computing workloads — along with 3D rendering, product visualization, simulation and scientific computing tasks.

Availability

New workstations powered by NVIDIA RTX Ada Generation GPUs and the latest AMD Threadripper Pro processors will be available starting next month from BOXX and HP, with other system integrators offering them soon.

Read More

NVIDIA AI Now Available in Oracle Cloud Marketplace

Training generative AI models just got easier.

The NVIDIA DGX Cloud AI supercomputing platform and NVIDIA AI Enterprise software are now available in Oracle Cloud Marketplace, making it possible for Oracle Cloud Infrastructure customers to access high-performance accelerated computing and software to run secure, stable and supported production AI in just a few clicks.

The addition — an industry first — brings new capabilities for end-to-end development and deployment on Oracle Cloud. Enterprises can get started from the Oracle Cloud Marketplace to train models on DGX Cloud, and then deploy their applications on OCI with NVIDIA AI Enterprise.

Oracle Cloud and NVIDIA Lift Industries Into Era of AI

Thousands of enterprises around the world rely on OCI to power the applications that drive their businesses. Its customers include leaders across industries such as healthcare, scientific research, financial services, telecommunications and more.

Oracle Cloud Marketplace is a catalog of solutions that offers customers flexible consumption models and simple billing. Its addition of DGX Cloud and NVIDIA AI Enterprise lets OCI customers use their existing cloud credits to integrate NVIDIA’s leading AI supercomputing platform and software into their development and deployment pipelines.

With DGX Cloud, OCI customers can train models for generative AI applications like intelligent chatbots, search, summarization and content generation.

The University at Albany, in upstate New York, recently launched its AI Plus initiative, which is integrating teaching and learning about AI across the university’s research and academic enterprise, in fields such as cybersecurity, weather prediction, health data analytics, drug discovery and next-generation semiconductor design. It will also foster collaborations across the humanities, social sciences, public policy and public health. The university is using DGX Cloud AI supercomputing instances on OCI as it builds out an on-premises supercomputer.

“We’re accelerating our mission to infuse AI into virtually every academic and research discipline,” said Thenkurussi (Kesh) Kesavadas, vice president for research and economic development at UAlbany. “We will drive advances in healthcare, security and economic competitiveness, while equipping students for roles in the evolving job market.”

NVIDIA AI Enterprise brings the software layer of the NVIDIA AI platform to OCI. It includes NVIDIA NeMo frameworks for building LLMs, NVIDIA RAPIDS for data science and NVIDIA TensorRT-LLM and NVIDIA Triton Inference Server for supercharging production AI. NVIDIA software for cybersecurity, computer vision, speech AI and more is also included. Enterprise-grade support, security and stability ensure a smooth transition of AI projects from pilot to production.

NVIDIA DGX Cloud generative AI training
NVIDIA DGX Cloud provides enterprises immediate access to an AI supercomputing platform and software hosted by their preferred cloud provider.

AI Supercomputing Platform Hosted by OCI

NVIDIA DGX Cloud provides enterprises immediate access to an AI supercomputing platform and software.

Hosted by OCI, DGX Cloud provides enterprises with access to multi-node training on NVIDIA GPUs, paired with NVIDIA AI software, for training advanced models for generative AI and other groundbreaking applications.

Each DGX Cloud instance consists of eight NVIDIA Tensor Core GPUs interconnected with network fabric, purpose-built for multi-node training. This high-performance computing architecture also includes industry-leading AI development software and offers direct access to NVIDIA AI expertise so businesses can train LLMs faster.

OCI customers access DGX Cloud using NVIDIA Base Command Platform, which gives developers access to an AI supercomputer through a web browser. By providing a single-pane view of the customer’s AI infrastructure, Base Command Platform simplifies the management of multinode clusters.

NVIDIA AI Enterprise software
NVIDIA AI Enterprise software powers secure, stable and supported production AI and data science.

Software for Secure, Stable and Supported Production AI

NVIDIA AI Enterprise enables rapid development and deployment of AI and data science.

With NVIDIA AI Enterprise on Oracle Cloud Marketplace, enterprises can efficiently build an application once and deploy it on OCI and their on-prem infrastructure, making a multi- or hybrid-cloud strategy cost-effective and easy to adopt. Since NVIDIA AI Enterprise is also included in NVIDIA DGX Cloud, customers can streamline the transition from training on DGX Cloud to deploying their AI application into production with NVIDIA AI Enterprise on OCI, since the AI software runtime is consistent across the environments.

Qualified customers can purchase NVIDIA AI Enterprise and NVIDIA DGX Cloud with their existing Oracle Universal Credits.

Visit NVIDIA AI Enterprise and NVIDIA DGX Cloud on the Oracle Cloud Marketplace to get started today.

Read More

Coming in Clutch: Stream ‘Counter-Strike 2’ From the Cloud for Highest Frame Rates

Rush to the cloud — stream Counter-Strike 2 on GeForce NOW for the highest frame rates. Members can play through the newest chapter of Valve’s elite, competitive, first-person shooter from the cloud.

It’s all part of an action-packed GFN Thursday, with 22 more games joining the cloud gaming platform’s library, including Hot Wheels Unleashed 2 – Turbocharged.

“Rush B! Rush B!”

Counter-Strike 2 is the long-awaited upgrade to one of the most recognizable competitive first-person shooters in the world.

Building on the legacy of Counter-Strike: Global Offensive, the latest iteration brings the action to Valve’s long-anticipated Source 2 video game engine, promising enhanced graphical fidelity with a physically based rendering system for more realistic textures and materials, dynamic lighting, reflections and more.

Smoke grenades are now dynamic volumetric objects that can interact with their surroundings by reacting to lighting and other environmental effects. And smoke particles work with the unified lighting system, allowing for more realistic light and color.

Even better: GeForce NOW Ultimate members can take full advantage of NVIDIA Reflex for ultra-low-latency gameplay streaming from the cloud. Rush the objective with the squad on Counter-Strike 2’s remastered maps at up to 240 frames per second — a first for cloud gaming. Upgrade today for the Ultimate Counter-Strike experience.

Vroom, Vroom!

We’re going turbo.

There’s more action around every turn of the GeForce NOW library. Put the pedal to the metal in Hot Wheels Unleashed 2 – Turbocharged, one of 22 newly supported games joining this week:

  • Wizard With a Gun (New release on Steam, Oct. 17)
  • Alaskan Road Truckers (New release on Steam, Oct. 18)
  • Hellboy: Web of Wyrd (New release on Steam, Oct. 18)
  • AirportSim (New release on Steam, Oct. 19)
  • Eternal Threads (New release on Epic Games Store, Oct. 19)
  • Hot Wheels Unleashed 2 – Turbocharged (New release on Steam, Oct. 19)
  • Laika Aged Through Blood (New release on Steam, Oct. 19)
  • Battle Chasers: Nightwar (Xbox, available on Microsoft Store)
  • Black Skylands (Xbox, available on Microsoft Store)
  • Blair Witch (Xbox, available on Microsoft Store)
  • Chicory: A Colorful Tale (Xbox and available on PC Game Pass)
  • Dead by Daylight (Xbox and available on PC Game Pass)
  • Dune: Spice Wars (Xbox and available on PC Game Pass)
  • Everspace 2 (Xbox and available on PC Game Pass)
  • EXAPUNKS (Xbox and available on PC Game Pass)
  • Gungrave G.O.R.E (Xbox and available on PC Game Pass)
  • Railway Empire 2 (Xbox and available on PC Game Pass)
  • Techtonica (Xbox and available on PC Game Pass)
  • Teenage Mutant Ninja Turtles: Shredder’s Revenge (Xbox and available on PC Game Pass)
  • Torchlight III (Xbox and available on PC Game Pass)
  • Trine 5: A Clockwork Conspiracy (Epic Games Store)
  • Vampire Survivors (Xbox, available on PC Game Pass)

What are you planning to play this weekend? Let us know on Twitter or in the comments below.

Read More

NVIDIA Expands Robotics Platform to Meet the Rise of Generative AI

Powerful generative AI models and cloud-native APIs and microservices are coming to the edge.

Generative AI is bringing the power of transformer models and large language models to virtually every industry. That reach now includes areas that touch edge, robotics and logistics systems: defect detection, real-time asset tracking, autonomous planning and navigation, human-robot interactions and more.

NVIDIA today announced major expansions to two frameworks on the NVIDIA Jetson platform for edge AI and robotics: the NVIDIA Isaac ROS robotics framework has entered general availability, and the NVIDIA Metropolis expansion on Jetson is coming next.

To accelerate AI application development and deployments at the edge, NVIDIA has also created a Jetson Generative AI Lab for developers to use with the latest open-source generative AI models.

More than 1.2 million developers and over 10,000 customers have chosen NVIDIA AI and the Jetson platform, including Amazon Web Services, Cisco, John Deere, Medtronic, PepsiCo and Siemens.

With the rapidly evolving AI landscape addressing increasingly complicated scenarios, developers are being challenged by longer development cycles to build AI applications for the edge. Reprogramming robots and AI systems on the fly to meet changing environments, manufacturing lines and automation needs of customers is time-consuming and requires expert skills.

Generative AI offers zero-shot learning — the ability of a model to recognize specific things it never saw during training — along with a natural language interface, simplifying the development, deployment and management of AI at the edge.
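
As a hedged illustration of that zero-shot idea, the sketch below classifies an image against natural-language labels with the public OpenCLIP model (also referenced later in this post). The label strings, checkpoint tag and image path are assumptions; this is not an NVIDIA edge-deployment recipe.

```python
# Hedged zero-shot classification sketch using the public OpenCLIP model.
# Labels, checkpoint tag and image path are illustrative assumptions.
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model.eval()

labels = ["a dented package", "an intact package", "an empty conveyor belt"]
image = preprocess(Image.open("conveyor_frame.jpg")).unsqueeze(0)
text = tokenizer(labels)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(dict(zip(labels, probs.squeeze().tolist())))
```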

Transforming the AI Landscape

Generative AI dramatically improves ease of use by understanding human language prompts to make model changes. Those AI models are more flexible in detecting, segmenting, tracking, searching and even reprogramming — helping them outperform traditional convolutional neural network-based models.

Generative AI is expected to add $10.5 billion in revenue for manufacturing operations worldwide by 2033, according to ABI Research.

“Generative AI will significantly accelerate deployments of AI at the edge with better generalization, ease of use and higher accuracy than previously possible,” said Deepu Talla, vice president of embedded and edge computing at NVIDIA. “This largest-ever software expansion of our Metropolis and Isaac frameworks on Jetson, combined with the power of transformer models and generative AI, addresses this need.”

Developing With Generative AI at the Edge

The Jetson Generative AI Lab provides developers access to optimized tools and tutorials for deploying open-source LLMs, diffusion models to generate stunning interactive images, vision language models (VLMs) and vision transformers (ViTs) that combine vision AI and natural language processing to provide comprehensive understanding of the scene.

Developers can also use the NVIDIA TAO Toolkit to create efficient and accurate AI models for edge applications. TAO provides a low-code interface to fine-tune and optimize vision AI models, including ViT and vision foundational models. They can also customize and fine-tune foundational models like NVIDIA NV-DINOv2 or public models like OpenCLIP to create highly accurate vision AI models with very little data. TAO additionally now includes VisualChangeNet, a new transformer-based model for defect inspection.

Harnessing New Metropolis and Isaac Frameworks

NVIDIA Metropolis makes it easier and more cost-effective for enterprises to embrace world-class, vision AI-enabled solutions to improve critical operational efficiency and safety problems. The platform brings a collection of powerful application programming interfaces and microservices for developers to quickly develop complex vision-based applications.

More than 1,000 companies, including BMW Group, PepsiCo, Kroger, Tyson Foods, Infosys and Siemens, are using NVIDIA Metropolis developer tools to solve Internet of Things, sensor processing and operational challenges with vision AI — and the rate of adoption is quickening. The tools have now been downloaded over 1 million times by those looking to build vision AI applications.

To help developers quickly build and deploy scalable vision AI applications, an expanded set of Metropolis APIs and microservices on NVIDIA Jetson will be available by year’s end.

Hundreds of customers use the NVIDIA Isaac platform to develop high-performance robotics solutions across diverse domains, including agriculture, warehouse automation, last-mile delivery and service robotics, among others.

At ROSCon 2023, NVIDIA announced major improvements to perception and simulation capabilities with new releases of Isaac ROS and Isaac Sim software. Built on the widely adopted open-source Robot Operating System (ROS), Isaac ROS brings perception to automation, giving eyes and ears to the things that move. By harnessing the power of GPU-accelerated GEMs, including visual odometry, depth perception, 3D scene reconstruction, localization and planning, robotics developers gain the tools needed to swiftly engineer robotic solutions tailored for a diverse range of applications.
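
To give a feel for how a developer might consume that perception output, here is a hedged ROS 2 (rclpy) sketch that subscribes to a visual-odometry topic. The topic name is an assumption for illustration; the actual topic depends on the Isaac ROS graph in use.

```python
# Hedged ROS 2 (rclpy) sketch of consuming visual-odometry output such as that
# produced by Isaac ROS GEMs. The topic name below is an assumption.
import rclpy
from rclpy.node import Node
from nav_msgs.msg import Odometry


class OdometryLogger(Node):
    def __init__(self):
        super().__init__("odometry_logger")
        self.create_subscription(
            Odometry, "/visual_slam/tracking/odometry", self.on_odometry, 10
        )

    def on_odometry(self, msg: Odometry):
        p = msg.pose.pose.position
        self.get_logger().info(f"robot at x={p.x:.2f} y={p.y:.2f} z={p.z:.2f}")


def main():
    rclpy.init()
    rclpy.spin(OdometryLogger())
    rclpy.shutdown()


if __name__ == "__main__":
    main()
```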

Isaac ROS has reached production-ready status with the latest Isaac ROS 2.0 release, enabling developers to create and bring high-performance robotics solutions to market with Jetson.

“ROS continues to grow and evolve to provide open-source software for the whole robotics community,” said Geoff Biggs, CTO of the Open Source Robotics Foundation. “NVIDIA’s new prebuilt ROS 2 packages, launched with this release, will accelerate that growth by making ROS 2 readily available to the vast NVIDIA Jetson developer community.”

Delivering New Reference AI Workflows

Developing a production-ready AI solution entails optimizing the development and training of AI models tailored to specific use cases, implementing robust security features on the platform, orchestrating the application, managing fleets, establishing seamless edge-to-cloud communication and more.

NVIDIA announced a curated collection of AI reference workflows based on Metropolis and Isaac frameworks that enable developers to quickly adopt the entire workflow or selectively integrate individual components, resulting in substantial reductions in both development time and cost. The three distinct AI workflows include: Network Video Recording, Automatic Optical Inspection and Autonomous Mobile Robot.

“NVIDIA Jetson, with its broad and diverse user base and partner ecosystem, has helped drive a revolution in robotics and AI at the edge,” said Jim McGregor, principal analyst at Tirias Research. “As application requirements become increasingly complex, we need a foundational shift to platforms that simplify and accelerate the creation of edge deployments. This significant software expansion by NVIDIA gives developers access to new multi-sensor models and generative AI capabilities.”

More Coming on the Horizon 

NVIDIA announced a collection of system services, fundamental capabilities that every developer requires when building edge AI solutions. These services will simplify integration into workflows and spare developers the arduous task of building them from the ground up.

The new NVIDIA JetPack 6, expected to be available by year’s end, will empower AI developers to stay at the cutting edge of computing without the need for a full Jetson Linux upgrade, substantially expediting development timelines and liberating them from Jetson Linux dependencies. JetPack 6 will also draw on collaborations with Linux distribution partners to expand the range of Linux-based distribution choices, including Canonical’s Optimized and Certified Ubuntu, Wind River Linux, Concurrent Real-Time’s RedHawk Linux and various Yocto-based distributions.

Partner Ecosystem Benefits From Platform Expansion

The Jetson partner ecosystem provides a wide range of support, from hardware, AI software and application design services to sensors, connectivity and developer tools. These NVIDIA Partner Network innovators play a vital role in providing the building blocks and sub-systems for many products sold on the market.

The latest release allows Jetson partners to accelerate their time to market and expand their customer base by adopting AI with increased performance and capabilities.

Independent software vendor partners will also be able to expand their offerings for Jetson.

Join us Tuesday, Nov. 7, at 9 a.m. PT for the Bringing Generative AI to Life with NVIDIA Jetson webinar, where technical experts will dive deeper into the news announced here, including accelerated APIs and quantization methods for deploying LLMs and VLMs on Jetson, optimizing vision transformers with TensorRT, and more.

Sign up for NVIDIA Metropolis early access here.

Read More

Making Machines Mindful: NYU Professor Talks Responsible AI

Artificial intelligence is now a household term. Responsible AI is hot on its heels.

Julia Stoyanovich, associate professor of computer science and engineering at NYU and director of the university’s Center for Responsible AI, wants to make the terms “AI” and “responsible AI” synonymous.

In the latest episode of the NVIDIA AI Podcast, host Noah Kravitz spoke with Stoyanovich about responsible AI, her advocacy efforts and how people can help.

Stoyanovich started her work at the Center for Responsible AI with basic research. She soon realized that what was needed were better guardrails, not just more algorithms.

As AI’s potential has grown, along with the ethical concerns surrounding its use, Stoyanovich clarifies that the “responsibility” lies with people, not AI.

“The responsibility refers to people taking responsibility for the decisions that we make individually and collectively about whether to build an AI system and how to build, test, deploy and keep it in check,” she said.

AI ethics is a related concern, used to refer to “the embedding of moral values and principles into the design, development and use of the AI,” she added.

Lawmakers have taken notice. For example, New York recently implemented a law that makes job candidate screening more transparent.

According to Stoyanovich, “the law is not perfect,” but “we can only learn how to regulate something if we try regulating” and converse openly with the “people at the table being impacted.”

Stoyanovich wants two things: for people to recognize that AI can’t predict human choices, and for AI systems to be transparent and accountable, carrying a “nutritional label.”

That process should include considerations on who is using AI tools, how they’re used to make decisions and who is subjected to those decisions, she said.

Stoyanovich urges people to “start demanding actions and explanations to understand” how AI is used at local, state and federal levels.

“We need to teach ourselves to help others learn about what AI is and why we should care,” she said. “So please get involved in how we govern ourselves, because we live in a democracy. We have to step up.”

You Might Also Like

Jules Anh Tuan Nguyen Explains How AI Lets Amputee Control Prosthetic Hand, Video Games
A postdoctoral researcher at the University of Minnesota discusses his efforts to allow amputees to control their prosthetic limb — right down to the finger motions — with their minds.

Overjet’s Wardah Inam on Bringing AI to Dentistry
Overjet, a member of NVIDIA Inception, is moving fast to bring AI to dentists’ offices. Dr. Wardah Inam, CEO of the company, discusses using AI to improve patient care.

Immunai CTO and Co-Founder Luis Voloch on Using Deep Learning to Develop New Drugs
Luis Voloch talks about tackling the challenges of the immune system with a machine learning and data science mindset.

Subscribe to the AI Podcast: Now Available on Amazon Music

The AI Podcast is now available through Amazon Music.

In addition, get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better. Have a few minutes to spare? Fill out this listener survey.

Read More

Into the Omniverse: Marmoset Brings Breakthroughs in Rendering, Extends OpenUSD Support to Enhance 3D Art Production

Editor’s note: This post is part of Into the Omniverse, a series focused on how artists and developers from startups to enterprises can transform their workflows using the latest advances in OpenUSD and NVIDIA Omniverse.

Real-time rendering, animation and texture baking are essential workflows for 3D art production. Using the Marmoset Toolbag software, 3D artists can enhance their creative workflows and build complex 3D models without disruptions to productivity.

The latest release of Marmoset Toolbag, version 4.06, brings increased support for Universal Scene Description, aka OpenUSD, enabling seamless compatibility with NVIDIA Omniverse, a development platform for connecting and building OpenUSD-based tools and applications.

3D creators and technical artists using Marmoset can now enjoy improved interoperability, accelerated rendering, real-time visualization and efficient performance — redefining the possibilities of their creative workflows.

Enhancing Cross-Platform Creativity With OpenUSD

Creators are taking their workflows to the next level with OpenUSD.

Berlin-based Armin Halač works as a principal animator at Wooga, a mobile games development studio known for projects like June’s Journey and Ghost Detective. The nature of his job means Halač is no stranger to 3D workflows — he gets hands-on with animation and character rigging.

For texturing and producing high-quality renders, Marmoset is Halač’s go-to tool, providing a user-friendly interface and powerful features to simplify his workflow. Recently, Halač used Marmoset to create the captivating cover image for his book, A Complete Guide to Character Rigging for Games Using Blender.

Using the added support for USD, Halač can seamlessly send 3D assets from Blender to Marmoset, creating new possibilities for collaboration and improved visuals.
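
That Blender-to-Marmoset hand-off can be reduced to a single operator call. The sketch below is a hedged illustration, not Halač's actual pipeline; the output path and export options are assumptions.

```python
# Hedged Blender Python sketch: export the selected objects to USD for hand-off to
# another tool such as Marmoset Toolbag. Path and options are illustrative assumptions.
import bpy

bpy.ops.wm.usd_export(
    filepath="/tmp/character_for_marmoset.usd",  # assumed output location
    selected_objects_only=True,                  # export only the current selection
    export_materials=True,                       # carry material assignments across
    export_animation=False,                      # static asset hand-off in this sketch
)
```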

The cover image of Halač’s book.

Nkoro Anselem Ire, a.k.a. askNK, a popular YouTube creator and a media and visual arts professor at a couple of universities, is also seeing workflow benefits from increased USD support.

As a 3D content creator, he uses Marmoset Toolbag for the majority of his PBR workflow — from texture baking and lighting to animation and rendering. Now, with USD, askNK is enjoying newfound levels of creative flexibility as the framework allows him to “collaborate with individuals or team members a lot easier because they can now pick up and drop off processes while working on the same file.”

Halač and askNK recently joined an NVIDIA-hosted livestream where community members and the Omniverse team explored the benefits of a Marmoset- and Omniverse-boosted workflow.

Daniel Bauer is another creator experiencing the benefits of Marmoset, OpenUSD and Omniverse. A SolidWorks mechanical engineer with over 10 years of experience, Bauer works frequently in CAD software environments, where it’s typical to assign different materials to various scene components. The variance can often lead to shading errors and incorrect geometry representation, but using USD, Bauer can avoid errors by easily importing versions of his scene from Blender to Marmoset Toolbag to Omniverse USD Composer.

A Kuka Scara robot simulation with 10 parallel small grippers for sorting and handling pens.

Additionally, 3D artists Gianluca Squillace and Pasquale Scionti are harnessing the collaborative power of Omniverse, Marmoset and OpenUSD to transform their workflows from a convoluted series of exports and imports to a streamlined, real-time, interconnected process.

Squillace crafted a captivating 3D character with Pixologic ZBrush, Autodesk Maya, Adobe Substance 3D Painter and Marmoset Toolbag — aggregating the data from the various tools in Omniverse. With USD, he seamlessly integrated his animations and made real-time adjustments without the need for constant file exports.

Simultaneously, Scionti constructed a stunning glacial environment using Autodesk 3ds Max, Adobe Substance 3D Painter, Quixel and Unreal Engine, uniting the various pieces from his tools in Omniverse. His work showcased the potential of Omniverse to foster real-time collaboration as he was able to seamlessly integrate Squillace’s character into his snowy world.

Advancing Interoperability and Real-Time Rendering

Marmoset Toolbag 4.06 provides significant improvements to interoperability and image fidelity for artists working across platforms and applications. This is achieved through updates to Marmoset’s OpenUSD support, allowing for seamless compatibility and connection with the Omniverse ecosystem.

The improved USD import and export capabilities enhance interoperability with popular content creation apps and creative toolkits like Autodesk Maya and Autodesk 3ds Max, SideFX Houdini and Unreal Engine.
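
When assets move between these tools, it can help to sanity-check the USD stage directly with the OpenUSD Python bindings. The sketch below is a hedged example for inspecting which meshes and material bindings survived an export; the file name is an assumption.

```python
# Hedged sketch: open a USD stage with the pxr bindings and list each mesh prim
# along with its bound material. The file name "scene.usd" is an assumption.
from pxr import Usd, UsdGeom, UsdShade

stage = Usd.Stage.Open("scene.usd")
for prim in stage.Traverse():
    if prim.IsA(UsdGeom.Mesh):
        material, _ = UsdShade.MaterialBindingAPI(prim).ComputeBoundMaterial()
        bound = material.GetPath() if material else "<no material bound>"
        print(f"{prim.GetPath()}  ->  {bound}")
```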

Additionally, Marmoset Toolbag 4.06 brings further updates, including:

  • RTX-accelerated rendering and baking: Toolbag’s ray-traced renderer and texture baker are accelerated by NVIDIA RTX GPUs, providing up to a 2x improvement in render times and a 4x improvement in bake times.
  • Real-time denoising with OptiX: With NVIDIA RTX devices, creators can enjoy a smooth and interactive ray-tracing experience, enabling real-time navigation of the active viewport without visual artifacts or performance disruptions.
  • High DPI performance with DLSS image upscaling: The viewport now renders at a reduced resolution and uses AI-based technology to upscale images, improving performance while minimizing image-quality reductions.

Download Toolbag 4.06 directly from Marmoset to explore USD support and RTX-accelerated production tools. New users are eligible for a full-featured, 30-day free trial license.

Get Plugged Into the Omniverse 

Learn from industry experts on how OpenUSD is enabling custom 3D pipelines, easing 3D tool development and delivering interoperability between 3D applications in sessions from SIGGRAPH 2023, now available on demand.

Anyone can build their own Omniverse extension or Connector to enhance their 3D workflows and tools. Explore the Omniverse ecosystem’s growing catalog of connections, extensions, foundation applications and third-party tools.

For more resources on OpenUSD, explore the Alliance for OpenUSD forum or visit the AOUSD website.

Share your Marmoset Toolbag and Omniverse work as part of the latest community challenge, #SeasonalArtChallenge. Use the hashtag to submit a spooky or festive scene for a chance to be featured on the @NVIDIAStudio and @NVIDIAOmniverse social channels.

Get started with NVIDIA Omniverse by downloading the standard license free, or learn how Omniverse Enterprise can connect your team.

Developers can check out these Omniverse resources to begin building on the platform. 

Stay up to date on the platform by subscribing to the newsletter and following NVIDIA Omniverse on Instagram, LinkedIn, Medium, Threads and Twitter.

For more, check out our forums, Discord server, Twitch and YouTube channels.

Featured image courtesy of Armin Halač, Christian Nauck and Masuquddin Ahmed.

Read More