The Sky’s the Limit: ‘Cities: Skylines II’ Streams This Week on GeForce NOW

The cloud is full of treats this GFN Thursday with Cities: Skylines II now streaming, leading 15 newly supported games this week. The game’s publisher, Paradox Interactive, is offering GeForce NOW one-month Priority memberships for those who pick up the game first, so make sure to grab one before they’re gone.

Among the newly supported additions to the GeForce NOW library are more games from the PC Game Pass catalog, including Ghostwire: Tokyo, State of Decay 2 and the Dishonored series. Members can also look forward to Alan Wake 2 — streaming soon.

Cloud City

Cities: Skylines II on GeForce NOW
If you build it, they will come.

Members can build the metropolis of their dreams this week in Cities: Skylines II, the sequel to Paradox Interactive’s award-winning city sim. Raise a city from the ground up and transform it into a thriving urban landscape. Get creative to build on an unprecedented scale while managing a deep simulation and a living economy.

The game’s AI and intricate economics mean every choice ripples through the fabric of a player’s city, so they’ll have to stay sharp — strategizing, solving problems and reacting to challenges. Build sky-high and sprawl across the map like never before. New dynamic map features affect how the city expands amid rising pollution, changing weather and seasonal challenges.

Paradox is offering one-month GeForce NOW Priority memberships to the first 100,000 people who purchase the game, so budding city planners can optimize their gameplay across nearly any device. Visit Cities: Skylines II for more info.

Newly Risen in the Cloud

Settle in for a spooky night with the newest PC Game Pass additions to the cloud: State of Decay 2 and the Dishonored series.

State of Decay 2: Juggernaut Edition on GeForce NOW
“The right choice is the one that keeps us alive.”

Drop into a post-apocalyptic world and fend off zombies in State of Decay 2: Juggernaut Edition from Undead Labs and Xbox Game Studios. Band together with a small group of survivors and rebuild a corner of civilization in this dynamic, open-world sandbox. Fortify home base, perform daring raids for food and supplies and rescue other survivors who may have unique talents to contribute. Head online with friends in up to four-player co-op and visit their communities to help defend them and bring back rewards. No two players’ experiences will be the same.

Dishonor on you, dishonor on your cow, “Dishonored” in the cloud.

Get supernatural with the Dishonored series, which comprises first-person action games set in a steampunk Lovecraftian world. In Dishonored, follow the story of Corvo Attano — a former bodyguard turned assassin driven by revenge after being framed for the murder of the Empress of Dunwall. Choose stealth or violence with Dishonored’s flexible combat system and Corvo’s supernatural abilities.

The Definitive Edition includes the original Dishonored game with updated graphics, the “Void Walker’s Arsenal” add-on pack, plus expansion packs for more missions: “The Knife of Dunwall,” “The Brigmore Witches” and “Dunwall City Trials.”

Follow up with the sequel, Dishonored 2, set 15 years after Dishonored. Members can play as Corvo or his daughter, Emily, who seeks to reclaim her rightful place as the Empress of Dunwall. Dishonored: Death of the Outsider is the latest in the series, following the story of former assassin Billie Lurk on her mission to discover the origins of a mysterious entity called The Outsider.

It’s Getting Dark in Here

Maybe you should be afraid of the dark after all.

Alan Wake 2, the long-awaited sequel to Remedy Entertainment’s survival-horror classic, is coming soon to the cloud.

What begins as a small-town murder investigation rapidly spirals into a nightmare journey. Uncover the source of a supernatural darkness in this psychological horror story filled with suspense and unexpected twists. Play as FBI agent Saga Anderson and Alan Wake, a horror writer long trapped in the Dark Place, to see events unfold from different perspectives.

Ultimate members will soon be able to uncover mysteries with the power of a GeForce RTX 4080 server in the cloud. Survive the surreal world of Alan Wake 2 at up to 4K resolution and 120 frames per second, with path-traced graphics accelerated and enhanced by NVIDIA DLSS 3.5 and NVIDIA Reflex technology.

Trick or Treat: Give Me All New Games to Beat

Ghostwire: Tokyo on GeForce NOW
I ain’t afraid of no ghost.

It’s time for a bewitching new list of games in the cloud. Ghostwire: Tokyo from Bethesda is an action-adventure game set in a modern-day Tokyo mysteriously depopulated by a paranormal phenomenon. Team with a spectral entity to fight the supernatural forces that have taken over the city, including ghosts, yokai and other creatures from Japanese folklore.

Jump into the action now with 15 new games this week.

Make sure to check out the question of the week. Share your answer on Twitter or in the comments below.

Read More

Next-Gen Neural Networks: NVIDIA Research Announces Array of AI Advancements at NeurIPS

NVIDIA researchers are collaborating with academic centers worldwide to advance generative AI, robotics and the natural sciences — and more than a dozen of these projects will be shared at NeurIPS, one of the world’s top AI conferences.

Set for Dec. 10-16 in New Orleans, NeurIPS brings together experts in generative AI, machine learning, computer vision and more. Among the innovations NVIDIA Research will present are new techniques for transforming text to images, photos to 3D avatars, and specialized robots into multi-talented machines.

“NVIDIA Research continues to drive progress across the field — including generative AI models that transform text to images or speech, autonomous AI agents that learn new tasks faster, and neural networks that calculate complex physics,” said Jan Kautz, vice president of learning and perception research at NVIDIA. “These projects, often done in collaboration with leading minds in academia, will help accelerate developers of virtual worlds, simulations and autonomous machines.”

Picture This: Improving Text-to-Image Diffusion Models

Diffusion models have become the most popular type of generative AI models to turn text into realistic imagery. NVIDIA researchers have collaborated with universities on multiple projects advancing diffusion models that will be presented at NeurIPS.

  • A paper accepted as an oral presentation focuses on improving generative AI models’ ability to understand the link between modifier words and main entities in text prompts. While existing text-to-image models asked to depict a yellow tomato and a red lemon may incorrectly generate images of yellow lemons and red tomatoes, the new model analyzes the syntax of a user’s prompt, encouraging a bond between an entity and its modifiers to deliver a more faithful visual depiction of the prompt.
  • SceneScape, a new framework using diffusion models to create long videos of 3D scenes from text prompts, will be presented as a poster. The project combines a text-to-image model with a depth prediction model that helps the videos maintain plausible-looking scenes with consistency between the frames — generating videos of art museums, haunted houses and ice castles (pictured above).
  • Another poster describes work that improves how text-to-image models generate concepts rarely seen in training data. Attempts to generate such images usually result in low-quality visuals that aren’t an exact match to the user’s prompt. The new method uses a small set of example images that help the model identify good seeds — random number sequences that guide the AI to generate images from the specified rare classes. (A minimal sketch of how seeds steer generation follows this list.)
  • A third poster shows how a text-to-image diffusion model can use the text description of an incomplete point cloud to generate missing parts and create a complete 3D model of the object. This could help complete point cloud data collected by lidar scanners and other depth sensors for robotics and autonomous vehicle AI applications. Collected imagery is often incomplete because objects are scanned from a specific angle — for example, a lidar sensor mounted to a vehicle would only scan one side of each building as the car drives down a street.
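The “good seeds” idea in the third item above is easy to see with an off-the-shelf diffusion pipeline: the random seed determines the initial noise, so different seeds produce very different renderings of the same prompt. Below is a minimal sketch using the Hugging Face diffusers library; the model name and prompt are placeholders, and this illustrates seed-controlled sampling in general, not the paper’s seed-selection method.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a public text-to-image diffusion pipeline (placeholder model choice).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of a pangolin"  # stand-in for a concept rare in training data
for seed in (0, 42, 1234):
    # The generator seed fixes the initial latent noise, so each seed
    # deterministically yields a different image for the same prompt.
    generator = torch.Generator(device="cuda").manual_seed(seed)
    image = pipe(prompt, generator=generator, num_inference_steps=30).images[0]
    image.save(f"pangolin_seed_{seed}.png")
```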

Character Development: Advancements in AI Avatars

AI avatars combine multiple generative AI models to create and animate virtual characters, produce text and convert it to speech. Two NVIDIA posters at NeurIPS present new ways to make these tasks more efficient.

  • A poster describes a new method to turn a single portrait image into a 3D head avatar while capturing details including hairstyles and accessories. Unlike current methods that require multiple images and a time-consuming optimization process, this model achieves high-fidelity 3D reconstruction without additional optimization during inference. The avatars can be animated either with blendshapes, which are 3D mesh representations of different facial expressions, or with a reference video clip where a person’s facial expressions and motion are applied to the avatar.
  • Another poster by NVIDIA researchers and university collaborators advances zero-shot text-to-speech synthesis with P-Flow, a generative AI model that can rapidly synthesize high-quality personalized speech given a three-second reference prompt. P-Flow features better pronunciation, human likeness and speaker similarity compared to recent state-of-the-art counterparts. The model can near-instantly convert text to speech on a single NVIDIA A100 Tensor Core GPU.

Research Breakthroughs in Reinforcement Learning, Robotics

In the fields of reinforcement learning and robotics, NVIDIA researchers will present two posters highlighting innovations that improve the generalizability of AI across different tasks and environments.

  • The first proposes a framework for developing reinforcement learning algorithms that can adapt to new tasks while avoiding the common pitfalls of gradient bias and data inefficiency. The researchers showed that their method — which features a novel meta-algorithm that can create a robust version of any meta-reinforcement learning model — performed well on multiple benchmark tasks.
  • Another by an NVIDIA researcher and university collaborators tackles the challenge of object manipulation in robotics. Prior AI models that help robotic hands pick up and interact with objects can handle specific shapes but struggle with objects unseen in the training data. The researchers introduce a new framework that estimates how objects across different categories are geometrically alike — such as drawers and pot lids that have similar handles — enabling the model to more quickly generalize to new shapes.

Supercharging Science: AI-Accelerated Physics, Climate, Healthcare

NVIDIA researchers at NeurIPS will also present papers across the natural sciences — covering physics simulations, climate models and AI for healthcare.

  • To accelerate computational fluid dynamics for large-scale 3D simulations, a team of NVIDIA researchers proposed a neural operator architecture that combines accuracy and computational efficiency to estimate the pressure field around vehicles — the first deep learning-based computational fluid dynamics method on an industry-standard, large-scale automotive benchmark. The method achieved 100,000x acceleration on a single NVIDIA Tensor Core GPU compared to another GPU-based solver, while reducing the error rate. Researchers can incorporate the model into their own applications using the open-source neuraloperator library. (A minimal sketch with the library follows this list.)
  • A consortium of climate scientists and machine learning researchers from universities, national labs, research institutes, Allen AI and NVIDIA collaborated on ClimSim, a massive dataset for physics and machine learning-based climate research that will be shared in an oral presentation at NeurIPS. The dataset covers the globe over multiple years at high resolution — and machine learning emulators built using that data can be plugged into existing operational climate simulators to improve their fidelity, accuracy and precision. This can help scientists produce better predictions of storms and other extreme events.
  • NVIDIA Research interns are presenting a poster introducing an AI algorithm that provides personalized predictions of the effects of medicine dosage on patients. Using real-world data, the researchers tested the model’s predictions of blood coagulation for patients given different dosages of a treatment. They also analyzed the new algorithm’s predictions of the antibiotic vancomycin levels in patients who received the medication — and found that prediction accuracy significantly improved compared to prior methods.
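For readers who want to experiment with the neural operator approach from the first item above, the open-source neuraloperator library exposes Fourier neural operator models directly. The toy 2D sketch below is a rough analogue only; the paper’s automotive model is a more specialized, large-scale 3D architecture, and the shapes and channel counts here are illustrative assumptions.

```python
import torch
from neuralop.models import FNO  # pip install neuraloperator

# Toy setup: map 3 input channels (e.g., geometry and flow features) on a
# 64x64 grid to a single-channel pressure field. All sizes are illustrative.
model = FNO(n_modes=(32, 32), hidden_channels=64, in_channels=3, out_channels=1)

x = torch.randn(4, 3, 64, 64)  # batch of 4 synthetic input fields
pressure = model(x)            # -> torch.Size([4, 1, 64, 64])
print(pressure.shape)
```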

NVIDIA Research comprises hundreds of scientists and engineers worldwide, with teams focused on topics including AI, computer graphics, computer vision, self-driving cars and robotics.

Read More

On Razer’s Edge: VFX Star Surfaced Studio Creates Stunning Sci-Fi World This Week ‘In The NVIDIA Studio’

Visual effects artist Surfaced Studio returns to In the NVIDIA Studio to share his real-world VFX project, created on a brand new Razer Blade 16 Mercury Edition laptop powered by GeForce RTX 4080 graphics.

Surfaced Studio creates photorealistic, digitally generated imagery that seamlessly integrates visual effects into short films, television and console gaming.

He found inspiration for a recent sci-fi project by experimenting with 3D transitions: using a laptop screen as a gateway between worlds, like the portals from Dr. Strange or the transitions from The Matrix.

Break the Rules and Become a Hero

Surfaced Studio aimed to create an immersive experience with his latest project.

“I wanted to get my audience to feel surprised getting ‘sucked into’ the 3D world,” he explained.

Surfaced Studio began with a simple script, alongside sketches of brainstormed ideas and played out shots. “This usually helps me think through how I’d pull each effect off and whether they’re actually possible,” he said.

From there, he shot video and imported the footage into Adobe Premiere Pro for a rough test edit. Then, Surfaced Studio selected the most suitable clips for use.

He cleaned up the footage in Adobe After Effects, stabilizing shots with the Warp Stabilizer tool and removing distracting background elements with the Mocha Pro tool. Both effects were accelerated by his GeForce RTX 4080 Laptop GPU.

After, he created a high-contrast version of the shot for 3D motion tracking in Blender.

3D motion tracking in Blender.

Motion tracking is used to apply tracking data to 3D objects. “This was pretty tricky, as it’s a 16-second gimbal shot with fast moving sections and a decent camera blur,” said Surfaced Studio. “It took me a good few days to get a decent track and fix issues with manual keyframes and ‘patches’ between different sections.”

A gimbal shot is captured with a stabilizing rig that uses sensors and motors to steady and support the camera.

Surfaced Studio exported footage of the animated camera into a 3D FBX file to use in Unreal Engine and set it up in the Cyberpunk High City pack, which contains a modular constructor for creating highly detailed sci-fi city streets, alleys and blocks.

“I’m not much of a 3D artist so using [the Cyberpunk High City pack] was the best option to complete the project on this side of the century,” the artist said. He then made modifications to the cityscape, reducing flickering lights and adding buildings, custom fog and Razer and NVIDIA Studio banners. He even added a billboard with an ad encouraging kindness to cats. “It’s so off to the side of most shots I doubt anyone actually noticed,” noted a satisfied Surfaced Studio.

A PSA from Surfaced Studio: be nice to cats.

Learning 3D effects can seem overwhelming due to the vast knowledge needed across multiple apps and distinct workflows. But Surfaced Studio stresses the importance of first understanding workflow hierarchies — and how one feeds into another — as an approachable entry point to choosing a specialty suited to a creator’s unique passion and natural talent.

Surfaced Studio was able to seamlessly run his scene in Unreal Engine at full 4K resolution — with all textures and materials loading at maximum graphical fidelity — thanks to the GeForce RTX 4080 Laptop GPU in his Razer Blade 16. The GPU also supports NVIDIA DLSS, which increases viewport interactivity by using AI to upscale frames rendered at lower resolution while retaining high-fidelity detail.

Moving virtual objects in Unreal Engine.

Surfaced Studio then took the FBX file with the exported camera tracking data into Unreal Engine, matching his ‘3D camera’ with the real-world camera used to film the laptop. “This was the crucial step in creating the ‘look-through’ effect I wanted,” he said.

Once satisfied with the look, Surfaced Studio exported all sequences from Unreal Engine as multilayer EXR files — including a Z-depth pass, a grayscale image whose values encode distance from the camera, used to create a depth-of-field effect — to separate visual elements from the 3D footage.
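For anyone reproducing this kind of compositing step programmatically, a Z-depth pass can be read out of a multilayer EXR with the OpenEXR Python bindings. A small sketch follows; the file name is hypothetical, and the depth channel is often named “Z” but varies by renderer and export settings.

```python
import numpy as np
import OpenEXR
import Imath

# Open a multilayer EXR (hypothetical file name) and read its Z-depth channel.
exr = OpenEXR.InputFile("shot_010.exr")
dw = exr.header()["dataWindow"]
width = dw.max.x - dw.min.x + 1
height = dw.max.y - dw.min.y + 1

# Depth is stored as 32-bit floats; the channel name "Z" is an assumption.
flt = Imath.PixelType(Imath.PixelType.FLOAT)
z = np.frombuffer(exr.channel("Z", flt), dtype=np.float32).reshape(height, width)
print(f"depth range: {z.min():.2f} to {z.max():.2f}")
```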

Composite work in Adobe After Effects.

Surfaced Studio went back to After Effects for the final composites. He added distortion effects and some glow for the transition from the physical screen to the 3D world.

Cleaning up screen tracking in Adobe After Effects.

Then, Surfaced Studio again used the Z-depth pass to extract the 3D cars and overlay them onto the real footage.

Composite work in Adobe After Effects.

He exported the final project into Premiere Pro and added sound effects, music and a few color correction edits.

Final edits in Adobe Premiere Pro.

With the GeForce RTX 4080’s dual encoders, Surfaced Studio nearly halved his video encoding and export times in Adobe Premiere Pro. He has been using NVIDIA GPUs for over a decade, citing their widespread integration with commonly used tools.

“NVIDIA has simply done a better job than its competitors to reach out to and integrate with other companies that create creative apps,” said Surfaced Studio. “CUDA and RTX are widespread technologies that you find in most popular creative apps to accelerate workflows.”

When he’s not working on VFX projects, Surfaced Studio also uses his laptop to game. The Razer Blade 16 has the first dual-mode mini-LED display with two native resolutions: UHD+ at 120Hz — suited for VFX workflows — and FHD at 240Hz — ideal for gamers (or creators who like gaming).

Powerful, elegant, beautiful: the Razer Blade 16 Mercury Edition.

For a limited time, gamers and creators can get the critically acclaimed game Alan Wake 2 with the purchase of the Razer Blade 16 powered by GeForce RTX 40 Series graphics cards.

Surfaced Studio’s VFX tutorials are available on YouTube, where he covers filmmaking, VFX and 3D techniques using Adobe After Effects, Blender, Photoshop, Premiere Pro and other apps.

VFX artist Surfaced Studio.

Join the #SeasonalArtChallenge

Don’t forget to join the #SeasonalArtChallenge by submitting spooky Halloween-inspired art in October and harvest- and fall-themed pieces in November.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.

Read More

Street View to the Rescue: Deep Learning Paves the Way to Safer Buildings

Images such as those in Google Street View are taking on a new purpose in the hands of University of Florida Assistant Professor of Artificial Intelligence Chaofeng Wang.

He’s using them, along with deep learning, in a research project to automate the evaluation of urban buildings. The project aims to help governments mitigate natural disaster damage by providing the information needed for decision-makers to bolster building structures or perform post-disaster recovery.

After a natural disaster such as an earthquake, local governments send teams to check and evaluate building conditions. Done manually, the process can take months to cover a city’s full building stock.

Wang’s project uses AI to accelerate the evaluation process — cutting the time needed to a few hours. The AI model is trained using images sourced from Google Street View and local governments to assign scores to buildings based on Federal Emergency Management Agency (FEMA) P-154 standards, which provide assessment guidelines based on factors like wall material, structure type, building age and more. Wang also collaborated with the World Bank Global Program for Resilient Housing to collect images and perform annotations, which were used to improve the model.

The collected images are placed in a data repository. The AI model reads the repository and performs inference on the images — a process accelerated by NVIDIA DGX A100 systems.
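As a rough illustration of what the inference step of such a pipeline can look like, here is a hedged PyTorch sketch that buckets street-view facade images into score ranges. The checkpoint file, five-bucket labeling and ResNet backbone are hypothetical stand-ins, not details of Wang’s actual model.

```python
import torch
from PIL import Image
from torchvision import models, transforms

# Standard ImageNet-style preprocessing for a street-view facade image.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical classifier: a ResNet-50 fine-tuned to five P-154 score buckets.
model = models.resnet50()
model.fc = torch.nn.Linear(model.fc.in_features, 5)
model.load_state_dict(torch.load("p154_scorer.pt"))  # hypothetical checkpoint
model.eval()

img = preprocess(Image.open("facade.jpg")).unsqueeze(0)
with torch.no_grad():
    bucket = model(img).argmax(dim=1).item()
print(f"Predicted P-154 score bucket: {bucket}")
```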

“Without NVIDIA GPUs, we wouldn’t have been able to do this,” Wang said. “They significantly accelerate the process, ensuring timely results.”

Wang used the DGX A100 nodes in the University of Florida’s supercomputer, HiPerGator. HiPerGator is one of the world’s fastest AI supercomputers in academia, delivering 700 petaflops of AI performance, and was built with the support of NVIDIA founder and UF alumnus Chris Malachowsky and hardware, software, training and services from NVIDIA.

The AI model’s output is compiled into a database that feeds into a web portal, which shows information — including the safety assessment score, building type and even roof or wall material — in a map-based format.

Wang’s work was funded by the NVIDIA Applied Research Accelerator Program, which supports research projects that have the potential to make a real-world impact through the deployment of NVIDIA-accelerated applications adopted by commercial and government organizations.

A Helping Eye

Wang says that the portal can serve different needs depending on the use case. To prepare for a natural disaster, a government can use predictions solely from street view images.

“Those are static images — one example is Google Street View images, which get updated every several years,” he said. “But that’s good enough for collecting information and getting a general understanding about certain statistics.”

But for rural areas or developing regions, where such images aren’t available or frequently updated, governments can collect the images themselves. With NVIDIA GPUs powering inference, building assessments can be delivered quickly enough to keep analyses timely.

Wang also suggests that, with enough refinement, his research could create ripples in the urban planning and insurance industries.

The project is currently being tested by a few local governments in Mexico and is garnering interest in some African, Asian and South American countries. In its current state, it can achieve over 85% accuracy in its assessment scores, per FEMA P-154 standards.

Survey of the Land

One challenge Wang cites is the variation in urban landscapes in different countries. Different regions have their own cultural and architectural styles. Not trained on a large or diverse enough pool of images, the AI model could be thrown off by factors like paint color when performing wall material analysis. Another challenge is urban density variation.

“It is a very general limitation of current AI technology,” Wang said. “In order to be useful, it requires enough training data to represent the distribution of the real world, so we’re putting efforts into the data collection process to solve the generalization issue.”

To overcome this challenge, Wang aims to train and test the model for more cities. So far, he’s tested about eight cities in different countries.

“We need to generate more detailed and high-quality annotations to train the model with,” he said. “That is the way we can improve the model in the future so that it can be used more widely.”

Wang’s goal is to get the project to a point where it can be deployed as a service for more general industry use.

“We are creating application programming interfaces that can estimate and analyze buildings and households to allow seamless integration with other products,” he said. “We are also building a user-friendly application that all government agencies and organizations can use.”

Read More

For the World to See: Nonprofit Deploys GPU-Powered Simulators to Train Providers in Sight-Saving Surgery

GPU-powered surgical-simulation devices are helping train more than 2,000 doctors a year in lower-income countries to treat cataract blindness, the world’s leading cause of blindness, thanks to the nonprofit HelpMeSee.

While cataract surgery has a success rate of around 99%, many patients in low- and middle-income countries lack access to the common procedure due to a severe shortage of ophthalmologists. An estimated 90% of the 100 million people affected by cataract-related visual impairment or blindness are in these locations.

By training more healthcare providers — including those without a specialty in ophthalmology — to treat cataracts, HelpMeSee improves the quality of life for patients such as a mother of two young children in Bhiwandi, near Mumbai, India, who was blinded by cataracts in both eyes.

“After the surgery, her vision improved dramatically and she was able to take up a job, changing the course of her entire family,” said Dr. Chetan Ahiwalay, chief instructor and subject-matter expert for HelpMeSee in India. “She and her husband are now happily raising their kids and leading a healthy life. These are the things that keep us going as doctors.”

HelpMeSee’s simulator devices use NVIDIA RTX GPUs to render high-quality visuals, providing a more realistic training environment for doctors to hone their surgical skills. To further improve the trainee experience, NVIDIA experts are working with the HelpMeSee team to improve rendering performance, increase visual realism and augment the simulator with next-generation technologies such as real-time ray tracing and AI.

Tackling Treatable Blindness With Accessible Training

High-income countries have 18x more ophthalmologists per million residents than low-income countries. That coverage gap, which is far wider still in certain countries, makes it harder for those in thinly resourced areas to receive treatment for avoidable blindness.

HelpMeSee’s devices can train doctors on multiple eye procedures using immersive tools inspired by flight simulators used in aviation. The team trains doctors in countries including India, China, Madagascar, Mexico and the U.S., and rolls out multilingual training each year for new procedures.

The eye surgery simulator offers realistic 3D visuals, haptic feedback, performance scores and the opportunity to attempt a step of the procedure multiple times until the trainee achieves proficiency. Qualified instructors like Dr. Ahiwalay travel to rural and urban areas to deliver the training through structured courses — and help surgeons transition from the simulators to live surgeries.

Doctors training to perform cataract surgery
During a training session, doctors learn to perform manual small-incision cataract surgery.

“We’re lowering the barrier for healthcare practitioners to learn these specific skills that can have a profound impact on patients,” said Dr. Bonnie An Henderson, CEO of HelpMeSee, which is based in New York. “Simulation-based training will improve surgical skills while keeping patients safe.”

Looking Ahead to AI, Advanced Rendering 

HelpMeSee works with Surgical Science, a supplier of medical virtual-reality simulators, based in Gothenburg, Sweden, to develop the 3D models and real-time rendering for its devices. Other collaborators — Strasbourg, France-based InSimo and Pune, India-based Harman Connected Services — develop the physics-based simulations and user interface, respectively. 

“Since there are many crucial visual cues during eye surgery, the simulation requires high fidelity,” said Sebastian Ullrich, senior manager of software development at Surgical Science, who has worked with HelpMeSee for years. “To render a realistic 3D representation of the human eye, we use custom shader materials with high-resolution textures to represent various anatomical components, mimic optical properties such as refraction, use order-independent transparency sorting and employ volume rendering.”

NVIDIA RTX GPUs support 3D volume rendering, stereoscopic rendering and depth sorting algorithms that provide a realistic visual experience for HelpMeSee’s trainees. Working with NVIDIA, the team is investigating AI models that could provide trainees with a real-time analysis of the practice procedure and offer recommendations for improvement.

Watch a demo of HelpMeSee’s cataract surgery training simulation.

Subscribe to NVIDIA healthcare news.

Read More

Eureka! NVIDIA Research Breakthrough Puts New Spin on Robot Learning

A new AI agent developed by NVIDIA Research that can teach robots complex skills has trained a robotic hand to perform rapid pen-spinning tricks — for the first time, as deftly as a human can.

The stunning prestidigitation, showcased in the video above, is one of nearly 30 tasks that robots have learned to expertly accomplish thanks to Eureka, which autonomously writes reward algorithms to train bots.

Eureka has also taught robots to open drawers and cabinets, toss and catch balls, and manipulate scissors, among other tasks.

The Eureka research, published today, includes a paper and the project’s AI algorithms, which developers can experiment with using NVIDIA Isaac Gym, a physics simulation reference application for reinforcement learning research. Isaac Gym is built on NVIDIA Omniverse, a development platform for building 3D tools and applications based on the OpenUSD framework. Eureka itself is powered by the GPT-4 large language model.

“Reinforcement learning has enabled impressive wins over the last decade, yet many challenges still exist, such as reward design, which remains a trial-and-error process,” said Anima Anandkumar, senior director of AI research at NVIDIA and an author of the Eureka paper. “Eureka is a first step toward developing new algorithms that integrate generative and reinforcement learning methods to solve hard tasks.”

AI Trains Robots

Eureka-generated reward programs — which enable trial-and-error learning for robots — outperform expert human-written ones on more than 80% of tasks, according to the paper. This leads to an average performance improvement of more than 50% for the bots.

Robot arm taught by Eureka to open a drawer.

The AI agent taps the GPT-4 LLM and generative AI to write software code that rewards robots for reinforcement learning. It doesn’t require task-specific prompting or predefined reward templates — and readily incorporates human feedback to modify its rewards for results more accurately aligned with a developer’s vision.

Using GPU-accelerated simulation in Isaac Gym, Eureka can quickly evaluate the quality of large batches of reward candidates for more efficient training.

Eureka then constructs a summary of the key stats from the training results and instructs the LLM to improve its generation of reward functions. In this way, the AI is self-improving. It’s taught all kinds of robots — quadruped, bipedal, quadrotor, dexterous hands, cobot arms and others — to accomplish all kinds of tasks.
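Conceptually, that self-improving loop looks something like the Python sketch below. Every helper here (llm.generate, compile_reward, train_policy, summarize) is a hypothetical stand-in rather than the released Eureka API; the real system executes candidate reward code inside Isaac Gym’s GPU-parallel environments.

```python
def eureka_loop(task_description, llm, env, iterations=5, num_candidates=16):
    """Hypothetical sketch of Eureka's LLM-in-the-loop reward search."""
    best_fn, best_stats = None, None
    feedback = ""
    for _ in range(iterations):
        # 1. The LLM proposes a batch of candidate reward functions as code.
        sources = [llm.generate(task_description + feedback)
                   for _ in range(num_candidates)]
        for source in sources:
            reward_fn = compile_reward(source)    # hypothetical: exec + validate
            stats = train_policy(env, reward_fn)  # hypothetical: GPU-parallel RL
            if best_stats is None or stats.task_score > best_stats.task_score:
                best_fn, best_stats = reward_fn, stats
        # 2. Summarize training statistics from the best run and feed them
        #    back so the LLM can refine its next batch of reward functions.
        feedback = summarize(best_stats)          # hypothetical
    return best_fn
```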

The research paper provides in-depth evaluations of 20 Eureka-trained tasks, based on open-source dexterity benchmarks that require robotic hands to demonstrate a wide range of complex manipulation skills.

The results from nine Isaac Gym environments are showcased in visualizations generated using NVIDIA Omniverse.

Humanoid robot learns a running gait via Eureka.

“Eureka is a unique combination of large language models and NVIDIA GPU-accelerated simulation technologies,” said Linxi “Jim” Fan, senior research scientist at NVIDIA, who’s one of the project’s contributors. “We believe that Eureka will enable dexterous robot control and provide a new way to produce physically realistic animations for artists.”

It’s breakthrough work bound to get developers’ minds spinning with possibilities, adding to recent NVIDIA Research advancements like Voyager, an AI agent built with GPT-4 that can autonomously play Minecraft.

NVIDIA Research comprises hundreds of scientists and engineers worldwide, with teams focused on topics including AI, computer graphics, computer vision, self-driving cars and robotics.

Learn more about Eureka and NVIDIA Research.

Read More

Next-Level Computing: NVIDIA and AMD Deliver Powerful Workstations to Accelerate AI, Rendering and Simulation

To enable professionals worldwide to build and run AI applications right from their desktops, NVIDIA and AMD are powering a new line of workstations equipped with NVIDIA RTX Ada Generation GPUs and AMD Ryzen Threadripper PRO 7000 WX-Series CPUs.

Bringing together the highest levels of AI computing, rendering and simulation capabilities, these new platforms enable professionals to efficiently tackle the most resource-intensive, large-scale AI workflows locally.

Bringing AI Innovation to the Desktop

Advanced AI tasks typically require data-center-level performance. Training a large language model with a trillion parameters, for example, takes thousands of GPUs running for weeks, though research is underway to reduce model size and enable model training on smaller systems while still maintaining high levels of AI model accuracy.

The new NVIDIA RTX GPU and AMD CPU-powered AI workstations provide the power and performance required to train such smaller models and fine-tune them locally, helping offload data center and cloud resources for AI development tasks. The devices let users select single- or multi-GPU configurations as required for their workloads.

Smaller trained AI models also provide the opportunity to use workstations for local inferencing. RTX GPU and AMD CPU-powered workstations can be configured to run these smaller AI models for inference serving for small workgroups or departments.

With up to 48GB of memory in a single NVIDIA RTX GPU, these workstations offer a cost-effective way to reduce compute load on data centers. And when professionals do need to scale training and deployment from these workstations to data centers or the cloud, the NVIDIA AI Enterprise software platform enables seamless portability of workflows and toolchains.

RTX GPU and AMD CPU-powered workstations also enable cutting-edge visual workflows. With accelerated computing power, the new workstations enable highly interactive content creation, industrial digitalization, and advanced simulation and design.

Unmatched Power, Performance and Flexibility

AMD Ryzen Threadripper PRO 7000 WX-Series processors provide the CPU platform for the next generation of demanding workloads. The processors deliver a significant increase in core count — up to 96 cores per CPU — and industry-leading maximum memory bandwidth in a single socket.

Combining them with the latest NVIDIA RTX Ada Generation GPUs brings unmatched power and performance in a workstation. The GPUs enable up to 2x the performance in ray tracing, AI processing, graphics rendering and computational tasks compared to the previous generation.

Ada Generation GPU options include the RTX 4000 SFF, RTX 4000, RTX 4500, RTX 5000 and RTX 6000. They’re built on the NVIDIA Ada Lovelace architecture and feature up to 142 third-generation RT Cores, 568 fourth-generation Tensor Cores and 18,176 latest-generation CUDA cores.

From architecture and manufacturing to media and entertainment and healthcare, professionals across industries will be able to use the new workstations to tackle challenging AI computing workloads — along with 3D rendering, product visualization, simulation and scientific computing tasks.

Availability

New workstations powered by NVIDIA RTX Ada Generation GPUs and the latest AMD Threadripper Pro processors will be available starting next month from BOXX and HP, with other system integrators offering them soon.

Read More

NVIDIA AI Now Available in Oracle Cloud Marketplace

Training generative AI models just got easier.

NVIDIA DGX Cloud AI supercomputing platform and NVIDIA AI Enterprise software are now available in Oracle Cloud Marketplace, making it possible for Oracle Cloud Infrastructure customers to access high-performance accelerated computing and software to run secure, stable and supported production AI in just a few clicks.

The addition — an industry first — brings new capabilities for end-to-end development and deployment on Oracle Cloud. Enterprises can get started from the Oracle Cloud Marketplace to train models on DGX Cloud, and then deploy their applications on OCI with NVIDIA AI Enterprise.

Oracle Cloud and NVIDIA Lift Industries Into Era of AI

Thousands of enterprises around the world rely on OCI to power the applications that drive their businesses. Its customers include leaders across industries such as healthcare, scientific research, financial services, telecommunications and more.

Oracle Cloud Marketplace is a catalog of solutions that offers customers flexible consumption models and simple billing. Its addition of DGX Cloud and NVIDIA AI Enterprise lets OCI customers use their existing cloud credits to integrate NVIDIA’s leading AI supercomputing platform and software into their development and deployment pipelines.

With DGX Cloud, OCI customers can train models for generative AI applications like intelligent chatbots, search, summarization and content generation.

The University at Albany, in upstate New York, recently launched its AI Plus initiative, which is integrating teaching and learning about AI across the university’s research and academic enterprise, in fields such as cybersecurity, weather prediction, health data analytics, drug discovery and next-generation semiconductor design. It will also foster collaborations across the humanities, social sciences, public policy and public health. The university is using DGX Cloud AI supercomputing instances on OCI as it builds out an on-premises supercomputer.

“We’re accelerating our mission to infuse AI into virtually every academic and research discipline,” said Thenkurussi (Kesh) Kesavadas, vice president for research and economic development at UAlbany. “We will drive advances in healthcare, security and economic competitiveness, while equipping students for roles in the evolving job market.”

NVIDIA AI Enterprise brings the software layer of the NVIDIA AI platform to OCI. It includes NVIDIA NeMo frameworks for building LLMs, NVIDIA RAPIDS for data science and NVIDIA TensorRT-LLM and NVIDIA Triton Inference Server for supercharging production AI. NVIDIA software for cybersecurity, computer vision, speech AI and more is also included. Enterprise-grade support, security and stability ensure a smooth transition of AI projects from pilot to production.

NVIDIA DGX Cloud generative AI training
NVIDIA DGX Cloud provides enterprises immediate access to an AI supercomputing platform and software hosted by their preferred cloud provider.

AI Supercomputing Platform Hosted by OCI

NVIDIA DGX Cloud provides enterprises immediate access to an AI supercomputing platform and software.

Hosted by OCI, DGX Cloud provides enterprises with access to multi-node training on NVIDIA GPUs, paired with NVIDIA AI software, for training advanced models for generative AI and other groundbreaking applications.

Each DGX Cloud instance consists of eight NVIDIA Tensor Core GPUs interconnected with network fabric, purpose-built for multi-node training. This high-performance computing architecture also includes industry-leading AI development software and offers direct access to NVIDIA AI expertise so businesses can train LLMs faster.

OCI customers access DGX Cloud using NVIDIA Base Command Platform, which gives developers access to an AI supercomputer through a web browser. By providing a single-pane view of the customer’s AI infrastructure, Base Command Platform simplifies the management of multinode clusters.

NVIDIA AI Enterprise software
NVIDIA AI Enterprise software powers secure, stable and supported production AI and data science.

Software for Secure, Stable and Supported Production AI

NVIDIA AI Enterprise enables rapid development and deployment of AI and data science.

With NVIDIA AI Enterprise on Oracle Cloud Marketplace, enterprises can efficiently build an application once and deploy it on OCI and their on-prem infrastructure, making a multi- or hybrid-cloud strategy cost-effective and easy to adopt. Since NVIDIA AI Enterprise is also included in NVIDIA DGX Cloud, customers can streamline the transition from training on DGX Cloud to deploying their AI application into production with NVIDIA AI Enterprise on OCI, since the AI software runtime is consistent across the environments.

Qualified customers can purchase NVIDIA AI Enterprise and NVIDIA DGX Cloud with their existing Oracle Universal Credits.

Visit NVIDIA AI Enterprise and NVIDIA DGX Cloud on the Oracle Cloud Marketplace to get started today.

Read More

Coming in Clutch: Stream ‘Counter-Strike 2’ From the Cloud for Highest Frame Rates

Rush to the cloud — stream Counter-Strike 2 on GeForce NOW for the highest frame rates. Members can play through the newest chapter of Valve’s elite, competitive, first-person shooter from the cloud.

It’s all part of an action-packed GFN Thursday, with 22 more games joining the cloud gaming platform’s library, including Hot Wheels Unleashed 2 – Turbocharged.

“Rush B! Rush B!”

Counter-Strike 2 is the long-awaited upgrade to one of the most recognizable competitive first-person shooters in the world.

Building on the legacy of Counter-Strike: Global Offensive, the latest iteration brings the action to Valve’s long-anticipated Source 2 video game engine, promising enhanced graphical fidelity with a physically based rendering system for more realistic textures and materials, dynamic lighting, reflections and more.

Smoke grenades are now dynamic volumetric objects that can interact with their surroundings by reacting to lighting and other environmental effects. And smoke particles work with the unified lighting system, allowing for more realistic light and color.

Even better: GeForce NOW Ultimate members can take full advantage of NVIDIA Reflex for ultra-low-latency gameplay streaming from the cloud. Rush the objective with the squad on Counter-Strike 2’s remastered maps at up to 240 frames per second — a first for cloud gaming. Upgrade today for the Ultimate Counter-Strike experience.

Vroom, Vroom!

We’re going turbo.

There’s more action around every turn of the GeForce NOW library. Put the pedal to the metal in Hot Wheels Unleashed 2 – Turbocharged, one of 22 newly supported games joining this week:

  • Wizard With a Gun (New release on Steam, Oct. 17)
  • Alaskan Road Truckers (New release on Steam, Oct. 18)
  • Hellboy: Web of Wyrd (New release on Steam, Oct. 18)
  • AirportSim (New release on Steam, Oct. 19)
  • Eternal Threads (New release on Epic Games Store, Oct. 19)
  • Hot Wheels Unleashed 2 – Turbocharged (New release on Steam, Oct. 19)
  • Laika Aged Through Blood (New release on Steam, Oct. 19)
  • Battle Chasers: Nightwar (Xbox, available on Microsoft Store)
  • Black Skylands (Xbox, available on Microsoft Store)
  • Blair Witch (Xbox, available on Microsoft Store)
  • Chicory: A Colorful Tale (Xbox and available on PC Game Pass)
  • Dead by Daylight (Xbox and available on PC Game Pass)
  • Dune: Spice Wars (Xbox and available on PC Game Pass)
  • Everspace 2 (Xbox and available on PC Game Pass)
  • EXAPUNKS (Xbox and available on PC Game Pass)
  • Gungrave G.O.R.E (Xbox and available on PC Game Pass)
  • Railway Empire 2 (Xbox and available on PC Game Pass)
  • Techtonica (Xbox and available on PC Game Pass)
  • Teenage Mutant Ninja Turtles: Shredder’s Revenge (Xbox and available on PC Game Pass)
  • Torchlight III (Xbox and available on PC Game Pass)
  • Trine 5: A Clockwork Conspiracy (Epic Games Store)
  • Vampire Survivors (Xbox and available on PC Game Pass)

What are you planning to play this weekend? Let us know on Twitter or in the comments below.

Read More

NVIDIA Expands Robotics Platform to Meet the Rise of Generative AI

Powerful generative AI models and cloud-native APIs and microservices are coming to the edge.

Generative AI is bringing the power of transformer models and large language models to virtually every industry. That reach now includes areas that touch edge, robotics and logistics systems: defect detection, real-time asset tracking, autonomous planning and navigation, human-robot interactions and more.

NVIDIA today announced major expansions to two frameworks on the NVIDIA Jetson platform for edge AI and robotics: the NVIDIA Isaac ROS robotics framework has entered general availability, and the NVIDIA Metropolis expansion on Jetson is coming next.

To accelerate AI application development and deployments at the edge, NVIDIA has also created a Jetson Generative AI Lab for developers to use with the latest open-source generative AI models.

More than 1.2 million developers and over 10,000 customers have chosen NVIDIA AI and the Jetson platform, including Amazon Web Services, Cisco, John Deere, Medtronic, Pepsico and Siemens.

With the rapidly evolving AI landscape addressing increasingly complicated scenarios, developers are being challenged by longer development cycles to build AI applications for the edge. Reprogramming robots and AI systems on the fly to meet changing environments, manufacturing lines and automation needs of customers is time-consuming and requires expert skills.

Generative AI offers zero-shot learning — the ability for a model to recognize things specifically unseen before in training — with a natural language interface to simplify the development, deployment and management of AI at the edge.
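To make the zero-shot idea concrete, here is a minimal open-vocabulary recognition sketch using CLIP through the Hugging Face transformers library. It illustrates the general concept rather than NVIDIA’s Jetson tooling, and the image file and label set are placeholder assumptions.

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Natural-language labels stand in for classes never trained explicitly.
labels = ["a forklift", "a pallet of boxes", "a person in a safety vest"]
image = Image.open("warehouse.jpg")  # hypothetical input frame

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))
```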

Transforming the AI Landscape

Generative AI dramatically improves ease of use by understanding human language prompts to make model changes. Those AI models are more flexible in detecting, segmenting, tracking, searching and even reprogramming — and can outperform traditional convolutional neural network-based models.

Generative AI is expected to add $10.5 billion in revenue for manufacturing operations worldwide by 2033, according to ABI Research.

“Generative AI will significantly accelerate deployments of AI at the edge with better generalization, ease of use and higher accuracy than previously possible,” said Deepu Talla, vice president of embedded and edge computing at NVIDIA. “This largest-ever software expansion of our Metropolis and Isaac frameworks on Jetson, combined with the power of transformer models and generative AI, addresses this need.”

Developing With Generative AI at the Edge

The Jetson Generative AI Lab provides developers access to optimized tools and tutorials for deploying open-source LLMs, diffusion models to generate stunning interactive images, vision language models (VLMs) and vision transformers (ViTs) that combine vision AI and natural language processing to provide comprehensive understanding of the scene.

Developers can also use the NVIDIA TAO Toolkit to create efficient and accurate AI models for edge applications. TAO provides a low-code interface to fine-tune and optimize vision AI models, including ViT and vision foundational models. They can also customize and fine-tune foundational models like NVIDIA NV-DINOv2 or public models like OpenCLIP to create highly accurate vision AI models with very little data. TAO additionally now includes VisualChangeNet, a new transformer-based model for defect inspection.

Harnessing New Metropolis and Isaac Frameworks

NVIDIA Metropolis makes it easier and more cost-effective for enterprises to embrace world-class, vision AI-enabled solutions to improve critical operational efficiency and safety problems. The platform brings a collection of powerful application programming interfaces and microservices for developers to quickly develop complex vision-based applications.

More than 1,000 companies, including BMW Group, Pepsico, Kroger, Tyson Foods, Infosys and Siemens, are using NVIDIA Metropolis developer tools to solve Internet of Things, sensor processing and operational challenges with vision AI — and the rate of adoption is quickening. The tools have now been downloaded over 1 million times by those looking to build vision AI applications.

To help developers quickly build and deploy scalable vision AI applications, an expanded set of Metropolis APIs and microservices on NVIDIA Jetson will be available by year’s end.

Hundreds of customers use the NVIDIA Isaac platform to develop high-performance robotics solutions across diverse domains, including agriculture, warehouse automation, last-mile delivery and service robotics, among others.

At ROSCon 2023, NVIDIA announced major improvements to perception and simulation capabilities with new releases of Isaac ROS and Isaac Sim software. Built on the widely adopted open-source Robot Operating System (ROS), Isaac ROS brings perception to automation, giving eyes and ears to the things that move. By harnessing the power of GPU-accelerated GEMs, including visual odometry, depth perception, 3D scene reconstruction, localization and planning, robotics developers gain the tools needed to swiftly engineer robotic solutions tailored for a diverse range of applications.
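Because Isaac ROS builds on standard ROS 2 interfaces, its perception output can be consumed with ordinary ROS 2 client code. Below is a minimal rclpy subscriber sketch; the odometry topic name is an assumption that depends on the specific Isaac ROS package and launch configuration.

```python
import rclpy
from rclpy.node import Node
from nav_msgs.msg import Odometry  # standard ROS 2 odometry message


class VisualOdomListener(Node):
    """Logs poses from a visual odometry publisher (topic name assumed)."""

    def __init__(self):
        super().__init__("visual_odom_listener")
        self.create_subscription(
            Odometry, "/visual_slam/tracking/odometry", self.on_odom, 10
        )

    def on_odom(self, msg: Odometry) -> None:
        p = msg.pose.pose.position
        self.get_logger().info(f"pose: x={p.x:.2f} y={p.y:.2f} z={p.z:.2f}")


def main():
    rclpy.init()
    rclpy.spin(VisualOdomListener())
    rclpy.shutdown()


if __name__ == "__main__":
    main()
```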

Isaac ROS has reached production-ready status with the latest Isaac ROS 2.0 release, enabling developers to create and bring high-performance robotics solutions to market with Jetson.

“ROS continues to grow and evolve to provide open-source software for the whole robotics community,” said Geoff Biggs, CTO of the Open Source Robotics Foundation. “NVIDIA’s new prebuilt ROS 2 packages, launched with this release, will accelerate that growth by making ROS 2 readily available to the vast NVIDIA Jetson developer community.”

Delivering New Reference AI Workflows

Developing a production-ready AI solution entails optimizing the development and training of AI models tailored to specific use cases, implementing robust security features on the platform, orchestrating the application, managing fleets, establishing seamless edge-to-cloud communication and more.

NVIDIA announced a curated collection of AI reference workflows based on Metropolis and Isaac frameworks that enable developers to quickly adopt the entire workflow or selectively integrate individual components, resulting in substantial reductions in both development time and cost. The three distinct AI workflows include: Network Video Recording, Automatic Optical Inspection and Autonomous Mobile Robot.

“NVIDIA Jetson, with its broad and diverse user base and partner ecosystem, has helped drive a revolution in robotics and AI at the edge,” said Jim McGregor, principal analyst at Tirias Research. “As application requirements become increasingly complex, we need a foundational shift to platforms that simplify and accelerate the creation of edge deployments. This significant software expansion by NVIDIA gives developers access to new multi-sensor models and generative AI capabilities.”

More Coming on the Horizon 

NVIDIA announced a collection of system services covering fundamental capabilities that every developer requires when building edge AI solutions. These services will simplify integration into workflows and spare developers the arduous task of building them from the ground up.

The new NVIDIA JetPack 6, expected to be available by year’s end, will empower AI developers to stay at the cutting edge of computing without the need for a full Jetson Linux upgrade, substantially expediting development timelines and liberating them from Jetson Linux dependencies. JetPack 6 will also build on collaborations with Linux distribution partners to expand the range of Linux-based distribution choices, including Canonical’s Optimized and Certified Ubuntu, Wind River Linux, Concurrent Real-Time’s RedHawk Linux and various Yocto-based distributions.

Partner Ecosystem Benefits From Platform Expansion

The Jetson partner ecosystem provides a wide range of support, from hardware, AI software and application design services to sensors, connectivity and developer tools. These NVIDIA Partner Network innovators play a vital role in providing the building blocks and sub-systems for many products sold on the market.

The latest release allows Jetson partners to accelerate their time to market and expand their customer base by adopting AI with increased performance and capabilities.

Independent software vendor partners will also be able to expand their offerings for Jetson.

Join us Tuesday, Nov. 7, at 9 a.m. PT for the Bringing Generative AI to Life with NVIDIA Jetson webinar, where technical experts will dive deeper into the news announced here, including accelerated APIs and quantization methods for deploying LLMs and VLMs on Jetson, optimizing vision transformers with TensorRT, and more.

Sign up for NVIDIA Metropolis early access here.

Read More