GeForce NOW Kicks Off a Summer of Gaming With 25 New Titles This June

GeForce NOW is a gamer’s ticket to an unforgettable summer of gaming. With 25 titles coming this month and endless ways to play, the summer is going to be epic.

Dive in, level up and make it a summer to remember, one game at a time. Start with the ten games available this week, including advanced access for those who’ve preordered the Deluxe or Ultimate versions of Funcom’s highly anticipated Dune: Awakening.

Plus, check out the latest update for miHoYo’s Zenless Zone Zero, bringing fresh content and even more action for summer.

And to keep the good times rolling, take advantage of the GeForce NOW Summer Sale to enjoy a sizzling 40% off a six-month Performance membership. It’s the perfect way to extend a summer of fun in the cloud.

Dawn Rises With the Cloud

Zenless Zone Zero V2.0
The next chapter begins.

Get ready for a new leap in Zenless Zone Zero. Version 2.0 “Where Clouds Embrace the Dawn” launches tomorrow, June 6, marking the start of the game’s second season. Explore the new Waifei Peninsula, team up with Grandmaster Yixuan and manage the Suibian Temple, all with enhanced maps and navigation.

Celebrate the game’s first anniversary with free rewards — including an S-Rank Agent, S-Rank W-Engine, and 1,600 Polychromes. With new agents, expanded content and major improvements, now’s the perfect time to jump into New Eridu.

Stream it on GeForce NOW for instant access and top-tier performance — no downloads or high-end hardware needed. Stream the latest content with smooth graphics and low latency on any device, and jump straight into the action to enjoy all the new features and anniversary rewards.

Jumping Into June

Level up summer gaming with the Summer Sale. Get 40% off six-month GeForce NOW Performance memberships — perfect for playing on handheld devices, including the new GeForce NOW app on Steam Deck, which lets gamers stream over 2,200 games at up to 4K 60 frames per second or 1440p 120 fps. Experience AAA gaming at max settings with longer battery life, and access supported games from Steam, Epic Games Store, PC Game Pass and more.

Put that upgraded membership to the test with what’s coming to the cloud this week on GeForce NOW:

  • Symphonia (New release on Xbox, available on PC Game Pass, June 3)
  • Pro Cycling Manager 25 (New release on Steam, June 5)
  • Tour de France 2025 (New release on Steam, June 5)
  • Dune: Awakening – Advanced Access (New release on Steam, June 5)
  • 7 Days to Die (Xbox)
  • Clair Obscur: Expedition 33 (Epic Games Store)
  • Cubic Odyssey (Steam)
  • Drive Beyond Horizons (Steam)
  • Police Simulator: Patrol Officers (Xbox, available on PC Game Pass)
  • Sea of Thieves (Battle.net)

Here’s what to expect for the rest of June: 

  • Dune: Awakening (New release on Steam, June 10)
  • MindsEye (New release on Steam, June 10)
  • The Alters (New release on Steam and Xbox, available on PC Game Pass, June 13)
  • Crime Simulator (New release on Steam, June 17)
  • FBC: Firebreak (New release on Steam and Xbox, available on PC Game Pass, June 17)
  • Lost in Random: The Eternal Die (New release on Steam and Xbox, available on PC Game Pass, June 17)
  • Architect Life: A House Design Simulator (New release on Steam, June 19)
  • Broken Arrow (New release on Steam, June 19)
  • REMATCH (New release on Steam and Xbox, available on PC Game Pass, June 19)
  • DREADZONE (New release on Steam, June 26)
  • System Shock 2: 25th Anniversary Remaster (New release on Steam, June 26)
  • Borderlands Game of the Year Enhanced (Steam)
  • Borderlands 2 (Steam and Epic Games Store)
  • Borderlands 3 (Steam and Epic Games Store)
  • Easy Red 2 (Steam)

May I Have More Games?

In addition to the 21 games announced last month, 16 more joined the GeForce NOW library:

  • Mafia (Steam)
  • Mafia II (Classic) (Steam)
  • Mafia: Definitive Edition (Steam and Epic Games Store)
  • Mafia II: Definitive Edition (Steam and Epic Games Store)
  • Mafia III: Definitive Edition (Steam and Epic Games Store)
  • Towerborne (Steam and Xbox, available on PC Game Pass)
  • Capcom Fighting Collection 2 (Steam)
  • Microsoft Flight Simulator 2024 (Steam and Xbox, available on PC Game Pass)
  • S.T.A.L.K.E.R.: Call of Prypiat – Enhanced Edition (Steam)
  • S.T.A.L.K.E.R.: Clear Sky – Enhanced Edition (Steam)
  • S.T.A.L.K.E.R.: Shadow of Chornobyl – Enhanced Edition (Steam)
  • Game of Thrones: Kingsroad (Steam)
  • Splitgate 2 Open Beta (Steam)
  • Onimusha 2: Samurai’s Destiny (Steam)
  • Nice Day for Fishing (Steam)
  • Cash Cleaner Simulator (Steam)

War Robots: Frontiers is no longer coming to GeForce NOW. Stay tuned for more game announcements and updates every GFN Thursday.

What are you planning to play this weekend? Let us know on X or in the comments below.

How 1X Technologies’ Robots Are Learning to Lend a Helping Hand

Humans learn the norms, values and behaviors of society from each other — and Bernt Børnich, founder and CEO of 1X Technologies, thinks robots should learn like this, too.

“For robots to be truly intelligent and show nuances like being careful around your pet, holding the door open for an elderly person and generally behaving like we want robots to behave, they have to live and learn among us,” Børnich told the AI Podcast.

1X Technologies is committed to building fully autonomous humanoid robots, with a focus on safety, affordability and adaptability.

Børnich explained how 1X Technologies uses a combination of reinforcement learning, expert demonstrations and real-world data to enable its robots to continuously learn and adapt to new situations.

NEO, the company’s upcoming robot, can perform household tasks like vacuuming, folding laundry, tidying and retrieving items. It’s built with operational safety at its core, using tendon-driven mechanisms inspired by the human musculoskeletal system to achieve low energy consumption.

Børnich also highlighted the potential for robots to enhance human productivity by handling mundane tasks, freeing people to focus more on interpersonal connections and creative activities.

Learn more about the latest in physical AI and robotics at NVIDIA GTC Paris, taking place June 10-12, and register to attend the event’s humanoid-related sessions.

Time Stamps

05:18 – 1X Technologies’ approach to robot safety.

11:36 – How world models enable robots to search backwards from the goal.

16:51 – How robots can free humans up for more meaningful activities.

22:29 – NEO answers the door so Børnich can interview a candidate.

You Might Also Like… 

How World Foundation Models Will Advance Physical AI With NVIDIA’s Ming-Yu Liu

AI models that can accurately simulate and predict outcomes in physical, real-world environments will enable the next generation of physical AI systems. Ming-Yu Liu, vice president of research at NVIDIA and an IEEE Fellow, explains the significance of world foundation models — powerful neural networks that can simulate physical environments.

Roboflow Helps Unlock Computer Vision for Every Kind of AI Builder

Roboflow’s mission is to make the world programmable through computer vision. By simplifying computer vision development, the company helps bridge the gap between AI and people looking to harness it. Cofounder and CEO Joseph Nelson discusses how Roboflow empowers users in manufacturing, healthcare and automotive to solve complex problems with visual AI.

Imbue CEO Kanjun Qiu on Transforming AI Agents Into Personal Collaborators

Kanjun Qiu, CEO of Imbue, explores the emerging era where individuals can create and use their own AI agents. Drawing a parallel to the PC revolution of the late 1970s and ‘80s, Qiu discusses how modern AI systems are evolving to work collaboratively with users, enhancing their capabilities rather than just automating tasks.

NVIDIA Blackwell Delivers Breakthrough Performance in Latest MLPerf Training Results

NVIDIA is working with companies worldwide to build out AI factories — speeding the training and deployment of next-generation AI applications that use the latest advancements in training and inference.

The NVIDIA Blackwell architecture is built to meet the heightened performance requirements of these new applications. In the latest round of MLPerf Training — the 12th since the benchmark’s introduction in 2018 — the NVIDIA AI platform delivered the highest performance at scale on every benchmark and powered every result submitted on the benchmark’s toughest large language model (LLM)-focused test: Llama 3.1 405B pretraining.

The NVIDIA platform was the only one that submitted results on every MLPerf Training v5.0 benchmark — underscoring its exceptional performance and versatility across a wide array of AI workloads, spanning LLMs, recommendation systems, multimodal LLMs, object detection and graph neural networks.

The at-scale submissions used two AI supercomputers powered by the NVIDIA Blackwell platform: Tyche, built using NVIDIA GB200 NVL72 rack-scale systems, and Nyx, based on NVIDIA DGX B200 systems. In addition, NVIDIA collaborated with CoreWeave and IBM to submit GB200 NVL72 results using a total of 2,496 Blackwell GPUs and 1,248 NVIDIA Grace CPUs.

On the new Llama 3.1 405B pretraining benchmark, Blackwell delivered 2.2x greater performance compared with previous-generation architecture at the same scale.

On the Llama 2 70B LoRA fine-tuning benchmark, NVIDIA DGX B200 systems, powered by eight Blackwell GPUs, delivered 2.5x more performance compared with a submission using the same number of GPUs in the prior round.

These performance leaps highlight advancements in the Blackwell architecture, including high-density liquid-cooled racks, 13.4TB of coherent memory per rack, fifth-generation NVIDIA NVLink and NVIDIA NVLink Switch interconnect technologies for scale-up and NVIDIA Quantum-2 InfiniBand networking for scale-out. Plus, innovations in the NVIDIA NeMo Framework software stack raise the bar for next-generation multimodal LLM training, critical for bringing agentic AI applications to market.

These agentic AI-powered applications will one day run in AI factories — the engines of the agentic AI economy. These new applications will produce tokens and valuable intelligence that can be applied to almost every industry and academic domain.

The NVIDIA data center platform includes GPUs, CPUs, high-speed fabrics and networking, as well as a vast array of software like NVIDIA CUDA-X libraries, the NeMo Framework, NVIDIA TensorRT-LLM and NVIDIA Dynamo. This highly tuned ensemble of hardware and software technologies empowers organizations to train and deploy models more quickly, dramatically accelerating time to value.

The NVIDIA partner ecosystem participated extensively in this MLPerf round. Beyond the submission with CoreWeave and IBM, other compelling submissions were from ASUS, Cisco, Dell Technologies, Giga Computing, Google Cloud, Hewlett Packard Enterprise, Lambda, Lenovo, Nebius, Oracle Cloud Infrastructure, Quanta Cloud Technology, ScitiX and Supermicro.

Learn more about MLPerf benchmarks.

NVIDIA RTX Blackwell GPUs Accelerate Professional-Grade Video Editing

4:2:2 cameras — capable of capturing double the color information compared with most standard cameras — are becoming widely available for consumers. At the same time, generative AI video models are rapidly increasing in functionality and quality, making new tools and workflows possible.

NVIDIA RTX GPUs based on the NVIDIA Blackwell architecture include dedicated hardware to encode and decode 4:2:2 video, and come with fifth-generation Tensor Cores designed to accelerate AI and deep learning workloads.

GeForce RTX 50 Series and NVIDIA RTX PRO Blackwell Series GPUs are primed to meet this demand, powering generative AI, new AI features and state-of-the-art video editing workflows for quicker cuts and faster exports.

4:2:2 Goes Mainstream

4:2:2 10-bit video cameras are on the rise. Traditionally reserved for professional use due to their high cost, these cameras are becoming more affordable, with major manufacturers now offering models under $600, giving creators more camera options than ever at lower entry points. 4:2:2 cameras capture double the color information of standard 4:2:0 cameras while increasing raw file sizes by only about 30%.
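
The “double the color information” and “about 30%” figures follow from simple chroma-subsampling arithmetic; the short back-of-envelope sketch below works through where they come from.

```python
# Back-of-envelope chroma-subsampling arithmetic (illustrative only).
# For every 2x2 block of pixels:
#   4:2:0 stores 4 luma samples + 2 chroma samples (1 Cb, 1 Cr) -> 6 samples
#   4:2:2 stores 4 luma samples + 4 chroma samples (2 Cb, 2 Cr) -> 8 samples
samples_420 = 4 + 2
samples_422 = 4 + 4

chroma_ratio = (samples_422 - 4) / (samples_420 - 4)  # 2.0 -> double the color information
raw_size_increase = samples_422 / samples_420 - 1     # ~0.33 -> roughly a third more raw data

print(f"Chroma samples: {chroma_ratio:.1f}x")
print(f"Raw frame size increase: {raw_size_increase:.0%}")
```

The ~33% increase assumes equal bit depth; real-world file sizes also depend on bit depth and how efficiently the codec compresses the extra chroma.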

Standard cameras typically use 4:2:0 8-bit color compression, capable of capturing only a fraction of color information. While 4:2:0 is acceptable for video playback on browsers, professional video editors demand cameras that capture 4:2:2 color accuracy and fidelity, while keeping file sizes reasonable.

The downside of 4:2:2 is that the additional color information requires more computational power for playback, often leading to stuttering streams. As a result, many editors have had to create proxies before editing — a time-consuming process that requires additional storage and lowers fidelity while editing.

The GeForce RTX 50 Series adds hardware acceleration for 4:2:2 encode and decode, helping solve this computational challenge. RTX 50 Series GPUs boast a 10x acceleration in 4:2:2 encoding and can decode up to 8K 75 frames per second — equivalent to 10x 4K 30fps streams per decoder.
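
The “10x 4K 30fps streams per decoder” equivalence is straightforward pixel-rate arithmetic, sketched below.

```python
# Pixel-rate equivalence: one 8K 75 fps stream vs. 4K 30 fps streams.
pixels_8k = 7680 * 4320   # 8K UHD frame
pixels_4k = 3840 * 2160   # 4K UHD frame (exactly a quarter of 8K)

streams = (pixels_8k * 75) / (pixels_4k * 30)
print(streams)  # 10.0 -> decoding 8K at 75 fps moves as many pixels as ten 4K 30 fps streams
```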

The most popular video editing apps, including Blackmagic Design’s DaVinci Resolve, CapCut and Wondershare Filmora, support NVIDIA hardware acceleration for 4:2:2 encode and decode. Adobe Premiere Pro offers decode support.

Combining 4:2:2 support with NVIDIA hardware increases creative possibilities. 10-bit 4:2:2 retains more color information than 8-bit 4:2:0, resulting in more accurate color representations and better color grading results for video editors.

4:2:2 offers more accurate color representation for better color grading results.

The extra color data from 4:2:2 support allows for increased flexibility during color correction and grading for more detailed adjustments. Improved keying enables cleaner and more accurate extractions of subjects from background, as well as sharper edges for smaller keyed objects.

4:2:2 enables cleaner text in video content.

4:2:2 reduces file sizes without significantly impacting picture quality, offering an optimal balance between quality and storage.

Generative AI-Powered Video Editing

Generative AI models are enabling video editors to generate filler video, extend clips, modify video styles and apply advanced visual effects with speed and ease, drastically reducing production times.

Popular models like WAN or LTX Video can generate higher-quality video with greater prompt accuracy and faster load times.

GeForce RTX and NVIDIA RTX PRO GPUs based on NVIDIA Blackwell enable these large, complex models to run quickly on device, thanks to NVIDIA CUDA optimizations for PyTorch. Plus, the fifth-generation Tensor Cores in these GPUs support FP4 quantization, allowing developers and enthusiasts to improve performance by over 2x and halve the VRAM needed.
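
As a rough illustration of how numeric precision drives memory use, the sketch below computes weight memory for a hypothetical 14-billion-parameter model; the parameter count is arbitrary, and real deployments also need VRAM for activations and caches.

```python
# Illustrative weight-memory arithmetic for a hypothetical 14B-parameter video model.
params = 14e9

def weight_gib(bits_per_param: int) -> float:
    return params * bits_per_param / 8 / 2**30  # bits -> bytes -> GiB

for name, bits in [("FP16", 16), ("FP8", 8), ("FP4", 4)]:
    print(f"{name}: ~{weight_gib(bits):.1f} GiB of weights")
# FP4 holds the weights in half the memory of FP8 and a quarter of FP16.
```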

Cutting-Edge Video Editing AI Features

Modern video editing apps provide an impressive array of advanced AI features — accelerated by GeForce RTX and NVIDIA RTX PRO GPUs.

DaVinci Resolve Studio 20, now in general release, adds new AI effects and integrates NVIDIA TensorRT to optimize AI performance. One of the new features, UltraNR Noise Reduction, is an AI-driven noise reduction mode that intelligently targets and reduces digital noise in video footage to maintain image clarity while minimizing softening. UltraNR Noise Reduction runs up to 75% faster on the GeForce RTX 5090 GPU than the previous generation.

Magic Mask is another AI-powered feature in DaVinci Resolve that enables users to quickly and accurately select and track objects, people or features within a scene, simplifying the process of creating masks and effects. Magic Mask v2 adds a paint brush to further adjust masking selections for more accurate and faster workflows.

Topaz Video AI Pro video enhancement software uses AI models like Gaia and Artemis to intelligently increase video resolution to 4K, 8K and even 16K — adding detail and sharpness while minimizing artifacts and noise. The software also benefits from TensorRT acceleration.

Topaz Starlight mini, the first local desktop diffusion model for video enhancement, can enhance footage — from tricky 8/16mm film to de-interlaced mini-DV video — that may otherwise be challenging for traditional AI models to handle. The model delivers exceptional quality at the cost of intensive compute requirements, meaning it can only run locally on RTX GPUs.

Adobe Premiere Pro recently released several new AI features, such as Adobe Media Intelligence, which uses AI to analyze footage and apply semantic tags to clips. This lets users more easily and quickly find specific footage by describing its content, including objects, locations, camera angles and even transcribed spoken words. Media Intelligence runs 30% faster on the GeForce RTX 5090 Laptop GPU compared with the GeForce RTX 4090 Laptop GPU.

Adobe’s Enhance Speech feature improves the quality of recorded speech by filtering out unwanted noise and making the audio sound clearer. Enhance Speech runs 7x faster on GeForce RTX 5090 Laptop GPUs compared with the MacBook Pro M4 Max.

Cut Like a Pro

GeForce RTX and NVIDIA RTX PRO GPUs are built to deliver the computational power needed for advanced video editing workflows.

These GPUs contain powerful NVIDIA hardware decoders (NVDEC) to unlock smooth playback and scrubbing of high-resolution video footage and multi-stream videos without the need for proxies. NVDEC is supported in Adobe Premiere Pro, CapCut, DaVinci Resolve, Vegas Pro and Wondershare Filmora.

Creative apps can also tap the additional encoders in GeForce RTX 5080 and 5090 GPUs, as well as in RTX PRO 6000, 5000, 4500 and 4000 Blackwell GPUs — which now feature support for 4:2:2.

Creators can use the RTX 5080 and 5090, for example, to import 5x 8K30 or 20x 4K30 streams at once, or import 10x 4K60 to do multi-camera editing and review multiple camera angles without slowdown. With the RTX PRO 6000, this can be boosted to up to 10x 8K30 or 40x 4K30 streams.

GeForce RTX and NVIDIA RTX PRO desktop and laptop GPU encoders and decoders.

NVIDIA CUDA cores accelerate video and image processing effects such as motion tracking, sharpening, upsampling, transition effects and other computationally intensive tasks. They also accelerate rendering times, enable real-time previews while working with high-resolution video footage and speed up AI features, such as automatic color correction, object removal and noise reduction.

When it’s time to export, video editors that use the GeForce RTX 50 Series ninth-generation NVIDIA video encoder can get a 5% improvement in video quality on HEVC and AV1 encoding (BD-BR), resulting in higher-quality exports at the same bitrates.

Plus, a new Ultra High Quality (UHQ) mode available in the latest Blackwell encoder boosts quality by an additional 5% for HEVC and AV1 and is backwards-compatible with the GeForce RTX 40 Series.

DaVinci Resolve, CapCut and Filmora also support multi-encoder encoding. With split encoding, an input frame is divided into three parts, each processed by a different NVENC encoder. With simultaneous scene encoding, a video is split into groups of pictures that are each sent to an encoder, batching the operation for up to 2.5x faster export performance.
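
For intuition, here is a conceptual sketch, in plain Python, of how simultaneous scene encoding parallelizes an export: the clip is split into groups of pictures (GOPs), each GOP is handed to one of several encoder workers, and the encoded chunks are stitched back together in order. The `encode_gop` function is a stand-in for a real hardware encode call made by the editing app, not an actual NVENC API.

```python
# Conceptual sketch of simultaneous scene encoding across multiple encoders.
from concurrent.futures import ThreadPoolExecutor

NUM_ENCODERS = 3  # stand-in for the number of hardware encoders available

def encode_gop(gop_id: int, frames: list) -> bytes:
    # Placeholder: a real implementation would hand these frames to a hardware encoder.
    return f"encoded-gop-{gop_id}:{len(frames)}-frames;".encode()

def export(frames: list, gop_size: int = 60) -> bytes:
    # Split the clip into independent GOPs so each can be encoded in parallel.
    gops = [frames[i:i + gop_size] for i in range(0, len(frames), gop_size)]
    with ThreadPoolExecutor(max_workers=NUM_ENCODERS) as pool:
        chunks = list(pool.map(lambda args: encode_gop(*args), enumerate(gops)))
    return b"".join(chunks)  # reassemble encoded chunks in their original order

clip = [f"frame-{i}" for i in range(600)]  # stand-in for decoded frames
print(len(export(clip)), "bytes of mock encoded output")
```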

Tune in to NVIDIA founder and CEO Jensen Huang’s keynote at NVIDIA GTC Paris at VivaTech on June 11. Check out full-day workshops on June 10 and two days of technical sessions, training and certifications.

Stay tuned for more RTX and AI-powered advances in content creation.

Each week, the RTX AI Garage blog series features community-driven AI innovations and content for those looking to learn more about NIM microservices and AI Blueprints, as well as building AI agents, creative workflows, digital humans, productivity apps and more on AI PCs and workstations. 

Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter.

Follow NVIDIA Workstation on LinkedIn and X

See notice regarding software product information.

Bring Receipts: New NVIDIA AI Blueprint Detects Fraudulent Credit Card Transactions With Precision

Editor’s note: This blog, originally published on October 28, 2024, has been updated.

Financial losses from worldwide credit card transaction fraud are projected to reach more than $403 billion over the next decade.

The new NVIDIA AI Blueprint for financial fraud detection can help combat this burgeoning epidemic — using accelerated data processing and advanced algorithms to improve AI’s ability to detect and prevent credit card transaction fraud.

Launched this week at the Money20/20 financial services conference, the blueprint provides a reference example for financial institutions to identify subtle patterns and anomalies in transaction data based on user behavior to improve accuracy and reduce false positives compared with traditional methods.

It shows developers how to build a financial fraud detection workflow by providing reference code, deployment tools and a reference architecture.

Companies can streamline the migration of their fraud detection workflows from traditional compute to accelerated compute using the NVIDIA AI Enterprise software platform and NVIDIA accelerated computing. The NVIDIA AI Blueprint is available for customers to run on Amazon Web Services, with availability coming soon on Dell Technologies and Hewlett Packard Enterprise. Customers can also use the blueprint through service offerings from NVIDIA partners including Cloudera, EXL, Infosys and SHI International.

Businesses embracing comprehensive machine learning (ML) tools and strategies can observe up to an estimated 40% improvement in fraud detection accuracy, boosting their ability to identify and stop fraudsters faster and mitigate harm.

As such, leading financial organizations like American Express and Capital One have been using AI to build proprietary solutions that mitigate fraud and enhance customer protection.

The new AI Blueprint accelerates model training and inference, and demonstrates how these components can be wrapped into a single, easy-to-use software offering, powered by NVIDIA AI.

Currently optimized for credit card transaction fraud, the blueprint could be adapted for use cases such as new account fraud, account takeover and money laundering.

Using Accelerated Computing and Graph Neural Networks for Fraud Detection

Traditional data science pipelines lack the compute acceleration to handle the massive data volumes required for effective fraud detection. ML models like XGBoost are effective for detecting anomalies in individual transactions but fall short when fraud involves complex networks of linked accounts and devices.

Helping address these gaps, NVIDIA RAPIDS — part of the NVIDIA CUDA-X collection of microservices, libraries, tools and technologies — enables payment companies to speed up data processing and transform raw data into powerful features at scale. These companies can fuel their AI models and integrate them with graph neural networks (GNNs) to uncover hidden, large-scale fraud patterns by analyzing relationships across different transactions, users and devices.

Gradient-boosted decision trees — a type of ML algorithm — built with libraries such as XGBoost have long been the standard for fraud detection.

The new AI Blueprint for financial fraud detection enhances the XGBoost ML model with graph neural networks, accelerated by NVIDIA CUDA-X Data Science libraries, to generate embeddings that can be used as additional features to help reduce false positives.

The GNN embeddings are fed into XGBoost to create and train a model that can then be orchestrated. In addition, NVIDIA Dynamo-Triton, formerly NVIDIA Triton Inference Server, boosts real-time inferencing while optimizing AI model throughput, latency and utilization.
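
As a rough sketch of that pattern — not the blueprint’s actual code — the snippet below assumes node embeddings have already been produced by a separately trained GNN and simply concatenates them with tabular transaction features before training a GPU-accelerated XGBoost classifier on synthetic data.

```python
# Minimal sketch: tabular transaction features + precomputed GNN embeddings -> XGBoost.
# Requires xgboost >= 2.0; set device="cpu" if no NVIDIA GPU is available.
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n_tx, n_tab, n_emb = 10_000, 12, 32

tabular = rng.normal(size=(n_tx, n_tab))         # amount, merchant category, velocity features, ...
gnn_embeddings = rng.normal(size=(n_tx, n_emb))  # assumed output of a separately trained GNN
labels = rng.integers(0, 2, size=n_tx)           # 1 = fraud, 0 = legitimate (synthetic)

features = np.hstack([tabular, gnn_embeddings])  # embeddings become extra feature columns

model = XGBClassifier(
    n_estimators=300,
    max_depth=6,
    tree_method="hist",
    device="cuda",        # train on an NVIDIA GPU
    eval_metric="aucpr",  # precision-recall AUC suits heavily imbalanced fraud data
)
model.fit(features, labels)
fraud_scores = model.predict_proba(features)[:, 1]  # per-transaction fraud probability
```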

NVIDIA CUDA-X Data Science and Dynamo-Triton are included with NVIDIA AI Enterprise.

Leading Financial Services Organizations Adopt AI

At a time when many large North American financial institutions report that online and mobile fraud losses continue to increase, AI is helping to combat this trend.

American Express, which began using AI to fight fraud in 2010, leverages fraud detection algorithms to monitor all customer transactions globally in real time, generating fraud decisions in just milliseconds. Using a combination of advanced algorithms, one of which tapped into the NVIDIA AI platform, American Express enhanced model accuracy, advancing the company’s ability to better fight fraud.

European digital bank bunq uses generative AI and large language models to help detect fraud and money laundering. Its AI-powered transaction-monitoring system achieved nearly 100x faster model training speeds with NVIDIA accelerated computing.

BNY announced in March 2024 that it became the first major bank to deploy an NVIDIA DGX SuperPOD with DGX H100 systems, which will help build solutions that support fraud detection and other use cases.

And now, systems integrators, software vendors and cloud service providers can integrate the new NVIDIA blueprint for fraud detection to boost their financial services applications and help keep customers’ money, identities and digital accounts safe.

Explore the NVIDIA AI Blueprint for financial fraud detection and read this NVIDIA Technical Blog on supercharging fraud detection with GNNs.

Learn more about AI for fraud detection by visiting the AI Summit at Money20/20, running this week in Amsterdam.

See notice regarding software product information.

Researchers and Students in Türkiye Build AI, Robotics Tools to Boost Disaster Readiness

Since a 7.8-magnitude earthquake hit Syria and Türkiye two years ago — leaving 55,000 people dead, 130,000 injured and millions displaced from their homes — students, researchers and developers have been harnessing the latest AI robotics technologies to increase disaster preparedness in the region.

The work is part of a Disaster Response Innovation and Education Grant provided by NVIDIA in collaboration with Bridge to Türkiye Fund, a nonprofit supporting underserved communities in Türkiye, with a focus on education and sustainability.

The fruits of the grant — which provided 100 free NVIDIA Jetson Nano Developer Kits and $50,000 in funding divided among eight awardees — are now being realized through projects on AI-powered inspection, search and rescue, robotics education and more.

Recipients of the grant have, for example, trained robots on key skills needed in search-and-rescue operations, built a tool to test water and food sources for pathogen contamination in disaster-stricken areas, and launched a hands-on programming course at a Turkish institute of technology.

The grant’s impact is in addition to the more than $1.9 million in employee donations and company matching provided by NVIDIANs around the globe to support victims of the devastating earthquakes.

“After the earthquake, we didn’t want to be bystanders,” said Harun Bayraktar, senior director of libraries engineering at NVIDIA. “We wanted to invest our time and efforts to make a difference and save lives next time.”

Bayraktar and Berra Kara, a senior GPU power architect at NVIDIA — both of whom grew up in Türkiye — volunteered to lead the grant program team, aiming to raise awareness for disaster response in the country, increase AI and robotics expertise, and help minimize the casualties of any future earthquakes.

Read more about the rippling impacts of NVIDIA and Bridge to Türkiye’s grant program:

Researchers Build Unmanned Ground Vehicle for Search and Rescue

At Ankara University, in the country’s capital, researchers used the grant to build a modular unmanned ground vehicle (UGV) that can support search-and-rescue operations in post-earthquake scenarios.

Equipped with a thermal camera, an RGB-D camera and an NVIDIA Jetson Nano Developer Kit, the small, durable UGV scans environments in 3D and detects thermal activity, so users can determine the presence of a human in the aftermath of a disaster while maintaining a safe distance from dangerous areas.

“Our autonomous UGV system used the NVIDIA Jetson Nano’s onboard AI computing power to perform real-time, thermal vision-based victim detection in post-disaster search-and-rescue scenarios,” said Mehmet Cem Çatalbaş, associate professor in the software engineering department at Ankara University. “NVIDIA’s earthquake relief program significantly accelerated our research and development process, transforming an innovative concept into an effective, life-saving solution.”

University Students Train Robots to Navigate Post-Disaster Environments

Simultaneous localization and mapping (SLAM), a commonly used method to help robots map areas and find their way around unknown environments, is a critical skill for robots that could be used in search-and-rescue missions.

To equip students with SLAM and other robotics skills, the computer engineering department at Hacettepe University — a world-class research university in Ankara — integrated NVIDIA Jetson Nano Developer Kit-based projects into two courses. More than a dozen students used the embedded AI developer kits to build small mobile robots, dubbed “Duckiebots,” with SLAM capabilities.

Using SLAM, sensor integration and autonomous-navigation features, AI-powered robots like these could enter various areas — such as collapsed buildings or fires — to help find and rescue people.

Through these courses, Hacettepe University students simulated planned paths for the robots and handled the assembly and initial operation of the Duckiebots.

Researchers Enable Fast Pathogen Screening in Disaster-Stricken Areas

In the aftermath of earthquakes, floods, wildfires and other natural disasters, a lack of sanitation and access to clean water can often lead to disease outbreaks. It’s important to test water and food sources for pathogen contamination and quickly identify the types of any existing pathogens to prevent their spread.

Researchers at Bilkent University, a nonprofit research university in Ankara, built a mini supercomputer cluster — based on the NVIDIA Jetson Nano Developer Kits — that promptly carries out computational tasks related to metagenomic analysis, or the analysis of DNA from a sample of an environment.

The portability of the NVIDIA Jetson devices means the cluster can be easily transported to disaster-stricken areas to identify pathogens directly on site — rather than needing to send samples to a wet lab — helping to prevent the spread of disease quickly and efficiently.

The research team used the open-source CuCLARK library for metagenomic classification using NVIDIA CUDA-enabled GPUs, which resulted in fast, accurate DNA screening.

University Students Learn the Fundamentals of AI and Embedded Systems

At the Izmir Institute of Technology — a research university that places a strong emphasis on the natural sciences and engineering — the computer engineering department tapped into the NVIDIA Jetson Nano devices, CUDA and NVIDIA Deep Learning Institute teaching kits to equip nearly 80 undergraduates with the fundamentals of AI, accelerated computing and robotics.

“Using the Jetson Nano Developer Kits provided by the NVIDIA and Bridge to Türkiye grant, we expanded our heterogeneous parallel programming course to include a hands-on deep learning project for computer science undergraduates,” said Işıl Öz, assistant professor of computer engineering at the university. “Such hands-on experience makes learning more engaging and effective for the next generation of innovators who will help build life-saving, sustainable technologies.”

Based on this work, a paper titled, “Teaching Accelerated Computing With Hands-on Experience,” will be presented by Öz at this month’s IEEE International Parallel and Distributed Processing Symposium. The paper outlines the challenges and successes that come with teaching heterogeneous parallel programming — a type of computing that uses more than one kind of processor or core to increase performance and energy efficiency.

Learn about the NVIDIA Academic Grant Program.

The Supercomputer Designed to Accelerate Nobel-Worthy Science

Ready for a front-row seat to the next scientific revolution?

That’s the idea behind Doudna — a groundbreaking supercomputer being built at Lawrence Berkeley National Laboratory. The system represents a major national investment in advancing U.S. high-performance computing leadership, ensuring U.S. researchers have access to cutting-edge tools to address global challenges.

Also known as NERSC-10, Doudna is named for Nobel laureate and CRISPR pioneer Jennifer Doudna. The next-generation system announced today at Lawrence Berkeley National Laboratory is designed not just for speed, but for impact.

“The Doudna system represents DOE’s commitment to advancing American leadership in science, AI, and high-performance computing,” U.S. Secretary of Energy Chris Wright said in a statement. “It will be a powerhouse for rapid innovation that will transform our efforts to develop abundant, affordable energy supplies and advance breakthroughs in quantum computing.”

Powered by Dell infrastructure with the NVIDIA Vera Rubin architecture, and set to launch in 2026, Doudna is tailored for real-time discovery across the U.S. Department of Energy’s most urgent scientific missions. It’s poised to catapult American researchers to the forefront of critical scientific breakthroughs, fostering innovation and securing the nation’s competitive edge in key technological fields.

“Doudna is a time machine for science — compressing years of discovery into days,” said Jensen Huang, founder and CEO of NVIDIA in a statement. “Built together with DOE and powered by NVIDIA’s Vera Rubin platform, it will let scientists delve deeper and think bigger to seek the fundamental truths of the universe.”

Designed to Accelerate Breakthroughs 

Unlike traditional systems that operate in silos, Doudna merges simulation, data and AI into a single seamless platform.

“The Doudna supercomputer is designed to accelerate a broad set of scientific workflows,” said NERSC Director Sudip Dosanjh in a statement. “Doudna will be connected to DOE experimental and observational facilities through the Energy Sciences Network (ESnet), allowing scientists to stream data seamlessly into the system from all parts of the country and to analyze it in near-real time.”

The Mayall 4-Meter Telescope, which will be home to the Dark Energy Spectroscopic Instrument (DESI), seen at night at Kitt Peak National Observatory. © The Regents of the University of California, Lawrence Berkeley National Laboratory

It’s engineered to empower over 11,000 researchers with almost instantaneous responsiveness and integrated workflows, helping scientists explore bigger questions and reach answers faster than ever.

“We’re not just building a faster computer,” said Nick Wright, advanced technologies group lead and Doudna chief architect at NERSC. “We’re building a system that helps researchers think bigger, and discover sooner.”

Here’s what Wright expects Doudna to advance:

  • Fusion energy: Breakthroughs in simulation that unlock clean fusion energy.
  • Materials science: AI models that design new classes of superconducting materials.
  • Drug discovery acceleration: Ultrarapid workflow that helps biologists fold proteins fast enough to outpace a pandemic.
  • Astronomy: Real-time processing of data from the Dark Energy Spectroscopic Instrument at Kitt Peak to help scientists map the universe.

Doudna is expected to outperform its predecessor, Perlmutter, by more than 10x in scientific output, all while using just 2-3x the power.

This translates to a 3-5x increase in performance per watt, a result of innovations in chip design, dynamic load balancing and system-level efficiencies.

AI-Powered Discovery, at Scale

Doudna will power AI-driven breakthroughs across high-impact scientific fields nationwide.

Highlights include:

  • AI for protein design: David Baker, a 2024 Nobel laureate, used NERSC systems to support his work using AI to predict novel protein structures, addressing challenges across scientific disciplines.
  • AI for fundamental physics: Researchers like Benjamin Nachman are using AI to “unfold” detector distortions in particle physics data and analyze proton data from electron-proton colliders.
  • AI for materials science: A collaboration including Berkeley Lab and Meta created “Open Molecules 2025,” a massive dataset for using AI to accurately model complex molecular chemical reactions. Researchers involved also use NERSC for their AI models.

Real-Time Science, Real-World Impact 

The new system is named for Nobel laureate and CRISPR pioneer Jennifer Doudna. © The Regents of the University of California, Lawrence Berkeley National Laboratory

Doudna isn’t a standalone system. It’s an integral part of scientific workflows. DOE’s ESnet will stream data from telescopes, detectors and genome sequencers directly into the machine with low-latency, high-throughput NVIDIA Quantum-X800 InfiniBand networking.

This critical data flow is prioritized by intelligent QoS mechanisms, ensuring it stays fast and uninterrupted, from input to insight.

This will make the system incredibly responsive. At the DIII-D National Fusion Facility, for example, data will stream control-room events directly into Doudna for rapid-response plasma modeling, so scientists can make adjustments in real time.

“We used to think of the supercomputer as a passive participant in the corner,” Wright said. “Now it’s part of the entire workflow, connected to experiments, telescopes, detectors.”

The Platform for What’s Next: Unlocking Quantum and HPC Workflows

Doudna supports traditional HPC, cutting-edge AI, real-time streaming and even quantum workflows.

This includes support for scalable quantum algorithm development and the co-design of future integrated quantum-HPC systems, using platforms like NVIDIA CUDA-Q.

All of these workflows will run on the next-generation NVIDIA Vera Rubin platform, which will blend high-performance CPUs with coherent GPUs, meaning all processors can access and share data directly to support the most demanding scientific workloads.

Researchers are already porting full pipelines using frameworks like PyTorch, the NVIDIA Holoscan software development kit, TensorFlow, NVIDIA cuDNN and NVIDIA CUDA-Q, all optimized for the system’s Rubin GPUs and NVIDIA NVLink architecture.

Over 20 research teams are already porting full workflows to Doudna through the NERSC Science Acceleration Program, tackling everything from climate models to particle physics. This isn’t just about raw compute; it’s about discovery, integrated from idea to insight.

Designed for Urgency 

In 2024, AI-assisted science earned two Nobel Prizes. From climate research to pandemic response, the next breakthroughs won’t wait for better infrastructure.

With deployment slated for 2026, Doudna is positioned to lead a new era of accelerated science. DOE facilities across the country, from Fermilab to the Joint Genome Institute, will rely on its capabilities to turn today’s questions into tomorrow’s breakthroughs.

“This isn’t a system for one field,” Wright said. “It’s for discovery — across chemistry, physics and fields we haven’t imagined yet.”

As NVIDIA founder and CEO Jensen Huang put it, Doudna is “a time machine for science.” It compresses years of discovery into days, and gives the world’s toughest problems the power they’ve been waiting for.

RTX on Deck: The GeForce NOW Native App for Steam Deck Is Here

GeForce NOW is supercharging Valve’s Steam Deck with a new native app — delivering the high-quality GeForce RTX-powered gameplay members are used to on a portable handheld device.

It’s perfect to pair with the six new games available this week, including Tokyo Xtreme Racer from Japanese game developer Genki.

Stream Deck

At the CES trade show in January, GeForce NOW announced a native app for the Steam Deck, unlocking the full potential of Valve’s handheld device for cloud gaming.

The app is now available, and gamers can stream titles on the Steam Deck at up to 4K 60 frames per second — connected to a TV — with HDR10, NVIDIA DLSS 4 and Reflex technologies on supported titles. Plus, members can run these games at settings and performance levels that aren’t possible natively on the Steam Deck. To top it off, Steam Deck users can enjoy up to 50% longer battery life when streaming from an RTX gaming rig in the cloud.

Steam Deck gamers can dive into graphics-intense AAA titles with the new app. Play Clair Obscur: Expedition 33, Elder Scrolls IV: Oblivion Remastered, Monster Hunter Wilds and Microsoft Flight Simulator 2024 at max settings — without worrying about hardware limits or battery drain.

Elder Scrolls IV on GFN Native app on Steam Deck
Obliterate gaming limits on the go.

Plus, Steam Deck users can now access over 2,200 supported games on GeForce NOW, including from their Steam, Epic Games Store, Ubisoft, Battle.net and Xbox libraries, with over 180 supported PC Game Pass titles.

Get all the perks of an RTX 4080 GPU owner while using a handheld device, with battery savings and no overheating. Dock it to the TV for a big-screen experience, or game on the go. Unlock a massive game library, better visuals and access to games that wouldn’t run on the handheld before.

Download the native app from the GeForce NOW page and find a step-by-step guide on the support page for GeForce NOW on Steam Deck.

To celebrate the launch of the new native app, NVIDIA is giving away two prize bundles — each including a Steam Deck OLED and Steam Deck Dock — as well as some free GeForce NOW Ultimate memberships. Be on the lookout for a chance to win by following the GeForce NOW social media channels (X, Facebook, Instagram, Threads), using #GFNOnSteamDeck and following the sweepstakes instructions.

Racing for New Games

Tokyo Xtreme Racer
Accelerate with the cloud and leave opponents behind in the dust.

Tokyo Xtreme Racer plunges players into the high-stakes world of Japanese highway street racing, featuring open-road duels on Tokyo’s expressways. Challenge rivals in intense one-on-one battles, aiming to drain their Spirit Points by outdriving them through traffic and tight corners. With deep car customization and a moody, neon-lit atmosphere, the game delivers a unique and immersive street racing experience.

Look for the following games available to stream in the cloud this week:

  • Nice Day for Fishing (New release on Steam, May 29)
  • Cash Cleaner Simulator (Steam)
  • Tokyo Xtreme Racer (Steam)
  • The Last Spell (Steam)
  • Tainted Grail: The Fall of Avalon (Steam)
  • Torque Drift 2 (Epic Games Store)

What are you planning to play this weekend? Let us know on X or in the comments below.

Run LLMs on AnythingLLM Faster With NVIDIA RTX AI PCs

Large language models (LLMs), trained on datasets with billions of tokens, can generate high-quality content. They’re the backbone for many of the most popular AI applications, including chatbots, assistants, code generators and much more.

One of today’s most accessible ways to work with LLMs is with AnythingLLM, a desktop app built for enthusiasts who want an all-in-one, privacy-focused AI assistant directly on their PC.

With new support for NVIDIA NIM microservices on NVIDIA GeForce RTX and NVIDIA RTX PRO GPUs, AnythingLLM users can now get even faster performance for more responsive local AI workflows.

What Is AnythingLLM?

AnythingLLM is an all-in-one AI application that lets users run local LLMs, retrieval-augmented generation (RAG) systems and agentic tools.

It acts as a bridge between a user’s preferred LLMs and their data, and enables access to tools (called skills), making it easier and more efficient to use LLMs for specific tasks like:

  • Question answering: Getting answers to questions from top LLMs — like Llama and DeepSeek R1 — without incurring costs.
  • Personal data queries: Using RAG to query content privately, including PDFs, Word files, codebases and more.
  • Document summarization: Generating summaries of lengthy documents, like research papers.
  • Data analysis: Extracting data insights by loading files and querying them with LLMs.
  • Agentic actions: Dynamically researching content using local or remote resources, running generative tools and actions based on user prompts.

AnythingLLM can connect to a wide variety of open-source local LLMs, as well as larger LLMs in the cloud, including those provided by OpenAI, Microsoft and Anthropic. In addition, the application provides access to skills for extending its agentic AI capabilities via its community hub.

With a one-click install and the ability to launch as a standalone app or browser extension — wrapped in an intuitive experience with no complicated setup required — AnythingLLM is a great option for AI enthusiasts, especially those with GeForce RTX and NVIDIA RTX PRO GPU-equipped systems.

RTX Powers AnythingLLM Acceleration

GeForce RTX and NVIDIA RTX PRO GPUs offer significant performance gains for running LLMs and agents in AnythingLLM — speeding up inference with Tensor Cores designed to accelerate AI.

AnythingLLM runs LLMs with Ollama for on-device execution accelerated through Llama.cpp and ggml tensor libraries for machine learning.

Ollama, Llama.cpp and GGML are optimized for NVIDIA RTX GPUs and fifth-generation Tensor Cores. On the GeForce RTX 5090, AnythingLLM delivers 2.4x faster LLM inference than the Apple M3 Ultra on both Llama 3.1 8B and DeepSeek R1 8B.
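
Under the hood, this is an ordinary local inference server. As a minimal sketch of what AnythingLLM’s Ollama backend is doing, the request below calls Ollama’s REST API directly, assuming `ollama serve` is running on its default port with a Llama 3.1 8B model already pulled.

```python
# Minimal sketch: querying a local Ollama server directly (AnythingLLM wires this up for you).
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1:8b",
        "prompt": "Summarize retrieval-augmented generation in two sentences.",
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```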

As NVIDIA adds new NIM microservices and reference workflows — like its growing library of AI Blueprints — tools like AnythingLLM will unlock even more multimodal AI use cases.

AnythingLLM — Now With NVIDIA NIM

AnythingLLM recently added support for NVIDIA NIM microservices — performance-optimized, prepackaged generative AI models that make it easy to get started with AI workflows on RTX AI PCs with a streamlined API.

NIM microservices are great for developers looking for a quick way to test a generative AI model in a workflow. Instead of having to find the right model, download all the files and figure out how to connect everything, they provide a single container with everything needed. And they can run both in the cloud and on PCs, making it easy to prototype locally and then deploy to the cloud.

Offered within AnythingLLM’s user-friendly UI, NIM microservices give users a quick way to test and experiment with generative AI models. Users can then connect them to their workflows in AnythingLLM, or use NVIDIA AI Blueprints along with NIM documentation and sample code to plug them directly into their own apps or projects.
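
For developers going that route, a minimal sketch of calling a NIM LLM microservice through its OpenAI-compatible endpoint looks like the following; it assumes a NIM container serving `meta/llama-3.1-8b-instruct` is running locally on port 8000, so adjust the base URL and model name to match your deployment.

```python
# Minimal sketch: calling a locally running NIM LLM microservice via its OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used-for-local-nim")

completion = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",
    messages=[{"role": "user", "content": "Suggest three prompts for testing a local RAG setup."}],
    max_tokens=256,
)
print(completion.choices[0].message.content)
```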

Explore the wide variety of NIM microservices available to elevate AI-powered workflows, including language and image generation, computer vision and speech processing.

Each week, the RTX AI Garage blog series features community-driven AI innovations and content for those looking to learn more about NIM microservices and AI Blueprints, as well as building AI agents, creative workflows, digital humans, productivity apps and more on AI PCs and workstations. 

Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter.

Follow NVIDIA Workstation on LinkedIn and X.

See notice regarding software product information.
