NVIDIA Open-Sources cuOpt, Ushering in New Era of Decision Optimization

Every second, businesses worldwide are making critical decisions. A logistics company decides which trucks to send where. A retailer figures out how to stock its shelves. An airline scrambles to reroute flights after a storm. These aren’t just routing choices — they’re high-stakes puzzles with millions of variables, and getting them wrong costs money and, sometimes, customers.

That’s changing.

NVIDIA today announced it will open-source cuOpt, an AI-powered decision optimization engine — making the powerful software free for developers to unlock real-time optimization at an unprecedented scale.

Optimization ecosystem leaders COPT, the Xpress team at FICO, HiGHS, IBM and SimpleRose are integrating or evaluating cuOpt, accelerating decision-making across industries.

Gurobi Optimization is evaluating and testing cuOpt solvers to refine first-order algorithms for next-level performance.

NVIDIA is working with the COIN-OR Foundation to release cuOpt through COIN-OR, widely regarded as the oldest and largest repository for open-source operations research software.

Meanwhile, teams of researchers at Arizona State University, Cornell Tech, Princeton University, the University of Pavia and the Zuse Institute Berlin are exploring its capabilities, developing next-generation solvers and tackling complex optimization problems with exceptional speed.

With the technology, airlines can reconfigure flight schedules mid-air to prevent cascading delays, power grids can rebalance in real time to avoid blackouts and financial institutions can manage portfolios with up-to-the-moment risk analysis.

Faster Optimization, Smarter Decisions

The best-known AI applications are all about predictions — whether forecasting weather or generating the next word in a sentence. But prediction is only half the challenge. The real power comes from acting on information in real time.

That’s where cuOpt comes in.

cuOpt dynamically evaluates billions of variables — inventory levels, factory output, shipping delays, fuel costs, risk factors and regulations — and delivers the best move in near real time.

As AI agents and large language model-driven simulations take on more decision-making tasks, the need for instant optimization has never been greater. cuOpt, powered by NVIDIA GPUs, accelerates these computations by orders of magnitude.

Unlike traditional optimization methods that navigate solution spaces sequentially or with limited parallelism, cuOpt taps into GPU acceleration to evaluate millions of possibilities simultaneously — finding high-quality solutions dramatically faster on many problem instances.

It doesn’t replace existing techniques — it enhances them. By working alongside traditional solvers, cuOpt rapidly identifies high-quality solutions, helping CPU-based models discard bad paths faster.

Why Optimization Is So Hard — and How cuOpt Does It Better

Every decision — where to send a truck, how to schedule workers and when to rebalance power grids — is a puzzle with an exponential number of possible answers.

To put this into perspective, the number of possible ways to schedule 100 nurses in a hospital for the next month is greater than the number of atoms in the observable universe.
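
This claim survives a back-of-envelope check. Assuming, for illustration, that each nurse independently chooses one of three options per day (day shift, night shift, or off) over a 30-day month — a simplification of real rosters, which also carry constraints — the count of raw combinations dwarfs the roughly 10^80 atoms in the observable universe:

```python
import math

NURSES = 100
DAYS = 30
OPTIONS_PER_DAY = 3  # illustrative: day shift, night shift, or off

# Count combinations in log10 to avoid astronomically large integers.
schedules_log10 = NURSES * DAYS * math.log10(OPTIONS_PER_DAY)
atoms_log10 = 80  # commonly cited estimate for the observable universe

print(f"Possible schedules: ~10^{schedules_log10:.0f}")
print(f"Atoms in the observable universe: ~10^{atoms_log10}")
```

Even with only two options per nurse per day, the count exceeds 10^900 — constraints prune this space, but not nearly enough for brute force.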

Many traditional solvers search for solutions sequentially or with limited parallelism — like navigating a vast maze with a flashlight, one corridor at a time. cuOpt rewrites the rules by evaluating millions of possibilities in parallel, dramatically accelerating optimization.

For years, workforce scheduling, logistics routing and supply-chain planning all took hours — sometimes days — to compute.

NVIDIA cuOpt changes that — the numbers tell the story:

  • Linear programming acceleration: 70x faster on average than a CPU-based PDLP solver on large-scale benchmarks, with a 10x to 3,000x speedup range.
  • Mixed-integer programming (MIP): 60x faster MIP solves, as demonstrated by SimpleRose.
  • Vehicle routing: 240x speedup in dynamic routing, enabling cost-to-serve insights and near-real-time route adjustments, as demonstrated by Lyric.
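
The PDLP approach referenced above is a first-order method that maps well to GPUs. As a rough sketch of the underlying idea — not cuOpt's actual implementation — here is a minimal primal-dual hybrid gradient (PDHG) loop on a toy two-variable linear program; the problem data, step sizes and iteration count are invented for illustration:

```python
# Toy LP: minimize -2*x1 - x2
#   subject to x1 + x2 <= 1, x1 <= 0.7, x >= 0.
# Known optimum: x = (0.7, 0.3), objective -1.7.
c = [-2.0, -1.0]
A = [[1.0, 1.0], [1.0, 0.0]]
b = [1.0, 0.7]
tau = sigma = 0.5          # step sizes; tau*sigma*||A||^2 < 1 for stability

x = [0.0, 0.0]             # primal iterate (kept >= 0)
y = [0.0, 0.0]             # dual iterate for Ax <= b (kept >= 0)
avg = [0.0, 0.0]           # running average of primal iterates

for k in range(1, 20001):
    # Primal step: project x - tau*(c + A^T y) onto x >= 0.
    grad = [c[j] + sum(A[i][j] * y[i] for i in range(2)) for j in range(2)]
    x_new = [max(0.0, x[j] - tau * grad[j]) for j in range(2)]
    # Extrapolated point stabilizes the primal-dual updates.
    x_bar = [2 * x_new[j] - x[j] for j in range(2)]
    # Dual step: project y + sigma*(A x_bar - b) onto y >= 0.
    resid = [sum(A[i][j] * x_bar[j] for j in range(2)) - b[i] for i in range(2)]
    y = [max(0.0, y[i] + sigma * resid[i]) for i in range(2)]
    x = x_new
    avg = [avg[j] + (x[j] - avg[j]) / k for j in range(2)]

obj = sum(c[j] * avg[j] for j in range(2))
print(avg, obj)  # approaches (0.7, 0.3) and -1.7
```

Every update above is elementwise vector arithmetic — exactly the kind of work a GPU can apply to millions of variables at once, which is why first-order LP methods benefit so strongly from acceleration.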

Decisions that once took hours or days now take seconds.

Optimizing for a Better World

Better optimization doesn’t just make businesses more efficient — it makes the world more sustainable, resilient and equitable.

Smarter decision-making leads to less waste. Energy grids can distribute power more efficiently, reducing blackouts and seamlessly integrating renewables like wind and solar. Supply chains can adjust dynamically to minimize excess inventory, cutting both costs and emissions.

Hospitals in underserved regions can allocate beds, doctors and medicine in real time, helping lifesaving treatments reach patients faster. Humanitarian aid groups responding to disasters can instantly recalculate the best way to distribute food, water and medicine, reducing delays in critical moments. And public transit systems can adjust dynamically to demand, reducing congestion and travel times for millions of people.

cuOpt isn’t just about more hardware — it’s about smarter search. Instead of going through every possibility, cuOpt intelligently navigates massive search spaces, focusing on constraint edges to converge faster. By using GPU acceleration, it evaluates multiple solutions in parallel, delivering real-time, high-efficiency optimization.

Industry Support — a New Era for Decision Intelligence

Optimization leaders such as FICO, Gurobi Optimization, IBM and SimpleRose are exploring the benefits of GPU acceleration and evaluating how cuOpt could integrate into their workflows, in applications spanning industrial planning, supply chain management and scheduling.

Smarter Decisions, Stronger Systems, Better Outcomes

cuOpt redefines optimization at scale.

For businesses, it means AI-powered optimization can reconfigure schedules, route fleets and reallocate resources in real time — cutting costs and boosting agility.

For developers, it provides a high-performance AI toolkit that can solve decision problems up to 3,000x faster than CPU solvers in complex optimization challenges such as network data routing — optimizing the flow of video, voice and web traffic to reduce congestion and improve efficiency — or electricity distribution, balancing supply and demand across power grids while minimizing losses and ensuring stable transmission.

For researchers, it’s an open playground for pushing AI-driven decision-making to new frontiers.

cuOpt will be released as open source and freely available for developers, researchers and enterprises later this year.

See cuOpt in Action

Explore real-world applications of cuOpt at NVIDIA GTC sessions.

For enterprise production deployments, cuOpt is supported as part of the NVIDIA AI Enterprise software platform and can be deployed as an NVIDIA NIM microservice — making it easy to integrate, scale and deploy across cloud, on-premises and edge environments.

With its open-source release, developers will be able to easily access, modify and integrate the cuOpt source code into their own solutions.

Learn more about how companies are already transforming their operations with cuOpt and sign up to be notified when the open-source software is available.

See notice regarding software product information.

Explaining Tokens — the Language and Currency of AI

Under the hood of every AI application are algorithms that churn through data in their own language, one based on a vocabulary of tokens.

Tokens are tiny units of data that come from breaking down bigger chunks of information. AI models process tokens to learn the relationships between them and unlock capabilities including prediction, generation and reasoning. The faster tokens can be processed, the faster models can learn and respond.

AI factories — a new class of data centers designed to accelerate AI workloads — efficiently crunch through tokens, converting them from the language of AI to the currency of AI, which is intelligence.

With AI factories, enterprises can take advantage of the latest full-stack computing solutions to process more tokens at lower computational cost, creating additional value for customers. In one case, integrating software optimizations and adopting the latest generation NVIDIA GPUs reduced cost per token by 20x compared to unoptimized processes on previous-generation GPUs — delivering 25x more revenue in just four weeks.

By efficiently processing tokens, AI factories are manufacturing intelligence — the most valuable asset in the new industrial revolution powered by AI.

What Is Tokenization? 

Whether a transformer AI model is processing text, images, audio clips, videos or another modality, it will translate the data into tokens. This process is known as tokenization.

Efficient tokenization helps reduce the amount of computing power required for training and inference. There are numerous tokenization methods — and tokenizers tailored for specific data types and use cases can require a smaller vocabulary, meaning there are fewer tokens to process.

For large language models (LLMs), short words may be represented with a single token, while longer words may be split into two or more tokens.

The word darkness, for example, would be split into two tokens, “dark” and “ness,” with each token bearing a numerical representation, such as 217 and 655. The opposite word, brightness, would similarly be split into “bright” and “ness,” with corresponding numerical representations of 491 and 655.

In this example, the shared numerical value associated with “ness” can help the AI model understand that the words may have something in common. In other situations, a tokenizer may assign different numerical representations for the same word depending on its meaning in context.
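
The splitting above can be sketched as a toy greedy longest-match subword tokenizer. The vocabulary and IDs below mirror the article's illustrative numbers and are not taken from any real model, which would learn its vocabulary (e.g., via byte-pair encoding) from data:

```python
# Invented illustrative vocabulary: subword piece -> token ID.
VOCAB = {"dark": 217, "bright": 491, "ness": 655}

def tokenize(word):
    """Greedily match the longest known piece at each position."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):  # longest candidate first
            piece = word[i:j]
            if piece in VOCAB:
                tokens.append((piece, VOCAB[piece]))
                i = j
                break
        else:
            raise ValueError(f"no token matches at {word[i:]!r}")
    return tokens

print(tokenize("darkness"))    # [('dark', 217), ('ness', 655)]
print(tokenize("brightness"))  # [('bright', 491), ('ness', 655)]
```

Because both words map to the same "ness" ID, a model consuming these token sequences sees the structural overlap directly.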

For example, the word “lie” could refer to a resting position or to saying something untruthful. During training, the model would learn the distinction between these two meanings and assign them different token numbers.

For visual AI models that process images, video or sensor data, a tokenizer can help map visual inputs like pixels or voxels into a series of discrete tokens.

Models that process audio may turn short clips into spectrograms — visual depictions of sound waves over time that can then be processed as images. Other audio applications may instead focus on capturing the meaning of a sound clip containing speech, and use another kind of tokenizer that captures semantic tokens, which represent language or context data instead of simply acoustic information.

How Are Tokens Used During AI Training?

Training an AI model starts with the tokenization of the training dataset.

Based on the size of the training data, the tokens can number in the billions or trillions — and, per the pretraining scaling law, the more tokens used for training, the better the quality of the AI model.

As an AI model is pretrained, it’s tested by being shown a sample set of tokens and asked to predict the next token. Based on whether or not its prediction is correct, the model updates itself to improve its next guess. This process is repeated until the model learns from its mistakes and reaches a target level of accuracy, known as model convergence.
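
The predict-and-update loop can be illustrated far more simply than real pretraining with a bigram counting "model" that learns which token tends to follow which; neural networks replace the counters, but the shape of the loop is similar:

```python
from collections import Counter, defaultdict

# Tiny invented corpus; real pretraining uses trillions of tokens.
corpus = "the cat sat on the mat the cat ran".split()

# "Training": update statistics from each observed (prev, next) pair.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(token):
    """Predict the most frequent successor seen during training."""
    return counts[token].most_common(1)[0][0]

print(predict("the"))  # 'cat' — follows "the" twice, vs. "mat" once
```

A neural model does the analogous thing with continuous parameters: it compares its predicted next token against the actual one and adjusts weights to reduce the error.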

After pretraining, models are further improved by post-training, where they continue to learn on a subset of tokens relevant to the use case where they’ll be deployed. These could be tokens with domain-specific information for an application in law, medicine or business — or tokens that help tailor the model to a specific task, like reasoning, chat or translation. The goal is a model that generates the right tokens to deliver a correct response based on a user’s query — a skill better known as inference.

How Are Tokens Used During AI Inference and Reasoning? 

During inference, an AI receives a prompt — depending on the model, text, an image, an audio clip, video, sensor data or even a gene sequence — and translates it into a series of tokens. The model processes these input tokens, generates its response as tokens and then translates it to the user’s expected format.

Input and output languages can be different, such as in a model that translates English to Japanese, or one that converts text prompts into images.

To understand a complete prompt, AI models must be able to process multiple tokens at once. Many models have a specified limit, referred to as a context window — and different use cases require different context window sizes.

A model that can process a few thousand tokens at once might be able to process a single high-resolution image or a few pages of text. With a context length of tens of thousands of tokens, another model might be able to summarize a whole novel or an hourlong podcast episode. Some models even provide context lengths of a million or more tokens, allowing users to input massive data sources for the AI to analyze.

Reasoning AI models, the latest advancement in LLMs, can tackle more complex queries by treating tokens differently than before. Here, in addition to input and output tokens, the model generates a host of reasoning tokens over minutes or hours as it thinks about how to solve a given problem.

These reasoning tokens allow for better responses to complex questions, just like how a person can formulate a better answer given time to work through a problem. The corresponding increase in tokens per prompt can require over 100x more compute compared with a single inference pass on a traditional LLM — an example of test-time scaling, aka long thinking.

How Do Tokens Drive AI Economics? 

During pretraining and post-training, tokens equate to investment into intelligence, and during inference, they drive cost and revenue. So as AI applications proliferate, new principles of AI economics are emerging.

AI factories are built to sustain high-volume inference, manufacturing intelligence for users by turning tokens into monetizable insights. That’s why a growing number of AI services are measuring the value of their products based on the number of tokens consumed and generated, offering pricing plans based on a model’s rates of token input and output.

Some token pricing plans offer users a set number of tokens shared between input and output. Based on these token limits, a customer could use a short text prompt that uses just a few tokens for the input to generate a lengthy, AI-generated response that took thousands of tokens as the output. Or a user could spend the majority of their tokens on input, providing an AI model with a set of documents to summarize into a few bullet points.
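
A small cost sketch makes the input/output tradeoff concrete. The per-token prices below are invented for illustration; output tokens are often priced higher than input tokens:

```python
# Hypothetical per-token prices (not any provider's actual rates).
PRICE_IN = 0.50 / 1_000_000    # $ per input token
PRICE_OUT = 1.50 / 1_000_000   # $ per output token

def request_cost(input_tokens, output_tokens):
    """Total charge for one request under per-token pricing."""
    return input_tokens * PRICE_IN + output_tokens * PRICE_OUT

# Short prompt, long generated answer:
print(f"${request_cost(50, 4000):.6f}")
# Long documents in, short summary out:
print(f"${request_cost(20000, 300):.6f}")
```

The two requests land at similar costs despite opposite token profiles, which is why plans often track input and output budgets together.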

To serve a high volume of concurrent users, some AI services also set per-user rate limits — the maximum number of tokens generated per minute for an individual user.

Tokens also define the user experience for AI services. Time to first token, the latency between a user submitting a prompt and the AI model starting to respond, and inter-token or token-to-token latency, the rate at which subsequent output tokens are generated, determine how an end user experiences the output of an AI application.
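
These two metrics combine into a simple estimate of total response time. The figures below are invented round numbers for illustration, not benchmarks:

```python
# Hypothetical latency figures for one request.
ttft = 0.30          # seconds until the first output token (TTFT)
itl = 0.02           # seconds between subsequent tokens (inter-token latency)
output_tokens = 500  # length of the generated response

# First token arrives after TTFT; each remaining token adds one ITL.
total = ttft + (output_tokens - 1) * itl
tokens_per_sec = 1 / itl

print(f"total generation time: {total:.2f} s")
print(f"steady-state rate: {tokens_per_sec:.0f} tokens/s")
```

At 50 tokens per second, steady-state output roughly tracks a fast reading pace, which is why inter-token latency is often tuned against human reading speed.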

There are tradeoffs involved for each metric, and the right balance is dictated by use case.

For LLM-based chatbots, shortening the time to first token can help improve user engagement by maintaining a conversational pace without unnatural pauses. Optimizing inter-token latency can enable text generation models to match the reading speed of an average person, or video generation models to achieve a desired frame rate. For AI models engaging in long thinking and research, more emphasis is placed on generating high-quality tokens, even if it adds latency.

Developers have to strike a balance between these metrics to deliver high-quality user experiences with optimal throughput, the number of tokens an AI factory can generate.

To address these challenges, the NVIDIA AI platform offers a vast collection of software, microservices and blueprints alongside powerful accelerated computing infrastructure — a flexible, full-stack solution that enables enterprises to evolve, optimize and scale AI factories to generate the next wave of intelligence across industries.

Understanding how to optimize token usage across different tasks can help developers, enterprises and even end users reap the most value from their AI applications.

Learn more in this ebook and get started at build.nvidia.com.

GTC 2025 – Announcements and Live Updates

What’s next in AI is at GTC 2025. Not only the technology, but the people and ideas that are pushing AI forward — creating new opportunities, novel solutions and whole new ways of thinking. For all of that, this is the place.

Here’s where to find the news, hear the discussions, see the robots and ponder the just-plain mind-blowing. From the keynote to the final session, check back for live coverage kicking off when the doors open on Monday, March 17, in San Jose, California.

The Future Rolls Into San Jose

Anyone who’s been in downtown San Jose lately has seen it happening. The banners are up. The streets are shifting. The whole city is getting a fresh coat of NVIDIA green.

From March 17-21, San Jose will become a crossroads for the thinkers, tinkerers and true enthusiasts of AI, robotics and accelerated computing. The conversations will be sharp, fast-moving and sometimes improbable — but that’s the point.

At the center of it all? NVIDIA founder and CEO Jensen Huang’s keynote, offering a glimpse into the future. It’ll take place at the SAP Center on Tuesday, March 18, at 10 a.m. PT. Expect big ideas, a few surprises, some roars of laughter and the occasional moment that leaves the room silent.

But GTC isn’t just what happens on stage. It’s a conference that refuses to stay inside its walls. It spills out into sessions at McEnery Convention Center, hands-on demos at the Tech Interactive Museum, late-night conversations at the Plaza de César Chávez night market and more. San Jose isn’t just hosting GTC. It’s becoming it.

The speakers are a mix of visionaries and builders — the kind of people who make you rethink what’s possible:

🧠Yann LeCun – chief AI scientist at Meta, professor, New York University
🏆Frances Arnold – Nobel Laureate, Caltech
🚗RJ Scaringe – founder and CEO of Rivian
🤖Pieter Abbeel – robotics pioneer, UC Berkeley
🌍Arthur Mensch – CEO of Mistral AI
🌮Joe Park – chief digital and technology officer of Yum! Brands
♟Noam Brown – research scientist at OpenAI

Some are pushing the limits of AI itself; others are weaving it into the world around us.

📢 Want in? Register now.

Check back here for what to watch, read and play — and what it all means. Tune in to all the big moments, the small surprises and the ideas that’ll stick for years to come.

See you in San Jose. #GTC25

Drop It Like It’s Mod: Breathing New Life Into Classic Games With AI in NVIDIA RTX Remix

PC game modding is massive, with over 5 billion mods downloaded annually. Mods push graphics forward with each GPU generation, extend a game’s lifespan with new content and attract new players.

NVIDIA RTX Remix is a modding platform for RTX AI PCs that lets modders capture game assets, automatically enhance materials with generative AI tools and create stunning RTX remasters with full ray tracing. Today, RTX Remix exited beta and fully launched with new NVIDIA GeForce RTX 50 Series neural rendering technology and many community-requested upgrades.

Since its initial beta release, more than 30,000 modders have experimented with RTX Remix, bringing ray-traced mods of hundreds of classic titles to over 1 million gamers.

RTX Remix supports a host of AI tools, including NVIDIA DLSS 4, RTX Neural Radiance Cache and the community-published AI model PBRFusion 3.

Modders can build 4K physically based rendering (PBR) assets by hand or use generative AI to accelerate their workflows. And with a few additional clicks, RTX Remix mods support DLSS 4 with Multi Frame Generation. DLSS’ new transformer model and the first neural shader, Neural Radiance Cache, provide enhanced neural rendering performance, meaning classic games look and play better than ever.

Generative AI Texture Tools

RTX Remix’s built-in generative AI texture tools analyze low-resolution textures from classic games, generate physically accurate materials — including normal and roughness maps — and upscale the resolution by up to 4x. Many RTX Remix mods have been created incorporating generative AI.

Earlier this month, RTX Remix modder NightRaven published PBRFusion 3 — a new AI model that upscales textures and generates high-quality normal, roughness and height maps for physically-based materials.

PBRFusion 3 consists of two custom-trained models: a PBR model and a diffusion-based upscaler. PBRFusion 3 can also use the RTX Remix application programming interface to connect with ComfyUI in an integrated flow. NightRaven has packaged all the relevant pieces to make it easy to get started.

The PBRFusion 3 page features a plug-and-play package that includes the relevant ComfyUI graphs and nodes. Once installed, remastering is easy: select a batch of textures in RTX Remix’s viewport and click process in ComfyUI. This integrated flow enables small hobbyist mod teams to complete extensive remasters of popular games.

RTX Remix and REST API

RTX Remix Toolkit capabilities are accessible via REST API, allowing modders to livelink RTX Remix to digital content creation tools such as Blender, modding tools such as Hammer and generative AI apps such as ComfyUI.

For example, through REST API integration, modders can seamlessly export all game textures captured in RTX Remix to ComfyUI and enhance them in one big batch before automatically bringing them back into the game. ComfyUI is RTX-accelerated and includes thousands of generative AI models to try, helping reduce the time to remaster a game scene and providing many ways to process textures.

Modders have many super resolution and PBR models to choose from, including ones that feature metallic and height maps — unlocking 8x or more resolution increases. Additionally, ComfyUI enables modders to use text prompts to generate new details in textures, or make grand stylistic departures by changing an entire scene’s look with a single text prompt.

‘Half-Life 2 RTX’ Demo

Half-Life 2 owners can download a free Half-Life 2 RTX demo from Steam, built with RTX Remix, starting March 18. The demo showcases Orbifold Studios’ work in Ravenholm and Nova Prospekt ahead of the full game’s release at a later date.

Half-Life 2 RTX showcases the expansive capabilities of RTX Remix and NVIDIA’s neural rendering technologies. DLSS 4 with Multi Frame Generation multiplies frame rates by up to 10x at 4K. Neural Radiance Cache further accelerates ray-traced lighting. RTX Skin enhances Father Grigori, headcrabs and zombies with one of the first implementations of subsurface scattering in ray-traced gaming. RTX Volumetrics add realistic smoke effects and fog. And everything interplays and interacts with the fully ray-traced lighting.

What’s Next in AI Starts Here

From the keynote by NVIDIA founder and CEO Jensen Huang on Tuesday, March 18, to over 1,000 inspiring sessions, 300+ exhibits, technical hands-on training and tons of unique networking events — NVIDIA’s own GTC is set to put a spotlight on AI and all its benefits.

Experts from across the AI ecosystem will share insights on deploying AI locally, optimizing models and harnessing cutting-edge hardware and software to enhance AI workloads — highlighting key advancements in RTX AI PCs and workstations. RTX AI Garage will be there to share highlights of the latest advancements coming to the RTX AI platform.

Follow NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter.

Gaming Goodness: NVIDIA Reveals Latest Neural Rendering and AI Advancements Supercharging Game Development at GDC 2025

AI is leveling up the world’s most beloved games, as the latest advancements in neural rendering, NVIDIA RTX and digital human technologies equip game developers to take innovative leaps in their work.

At this year’s GDC conference, running March 17-21 in San Francisco, NVIDIA is revealing new AI tools and technologies to supercharge the next era of graphics in games.

Key announcements include new neural rendering advancements with Unreal Engine 5 and Microsoft DirectX; NVIDIA DLSS 4 now available in over 100 games and apps, making it the most rapidly adopted NVIDIA game technology of all time; and a Half-Life 2 RTX demo coming Tuesday, March 18.

Plus, the open-source NVIDIA RTX Remix modding platform has now been released, and NVIDIA ACE technology enhancements are bringing to life next-generation digital humans and AI agents for games.

Neural Shaders Enable Photorealistic, Living Worlds With AI

The next era of computer graphics will be based on NVIDIA RTX Neural Shaders, which allow the training and deployment of tiny neural networks from within shaders to generate textures, materials, lighting, volumes and more. This results in dramatic improvements in game performance, image quality and interactivity, delivering new levels of immersion for players.

At the CES trade show earlier this year, NVIDIA introduced RTX Kit, a comprehensive suite of neural rendering technologies for building AI-enhanced, ray-traced games with massive geometric complexity and photorealistic characters.

Now, at GDC, NVIDIA is expanding its powerful lineup of neural rendering technologies, including with Microsoft DirectX support and plug-ins for Unreal Engine 5.

NVIDIA is partnering with Microsoft to bring neural shading support to the DirectX 12 Agility software development kit preview in April, providing game developers with access to RTX Tensor Cores to accelerate the performance of applications powered by RTX Neural Shaders.

Plus, Unreal Engine developers will be able to get started with RTX Kit features such as RTX Mega Geometry and RTX Hair through the experimental NVIDIA RTX branch of Unreal Engine 5. These enable the rendering of assets with dramatic detail and fidelity, bringing cinematic-quality visuals to real-time experiences.

Now available, NVIDIA’s “Zorah” technology demo has been updated with new incredibly detailed scenes filled with millions of triangles, complex hair systems and cinematic lighting in real time — all by tapping into the latest technologies powering neural rendering, including:

  • ReSTIR Path Tracing
  • ReSTIR Direct Illumination
  • RTX Mega Geometry
  • RTX Hair

And the first neural shader, Neural Radiance Cache, is now available in RTX Remix.

Over 100 DLSS 4 Games and Apps Out Now

DLSS 4 debuted with the release of GeForce RTX 50 Series GPUs. Over 100 games and apps now feature support for DLSS 4. This milestone has been reached two years quicker than with DLSS 3, making DLSS 4 the most rapidly adopted NVIDIA game technology of all time.

DLSS 4 introduced Multi Frame Generation, which uses AI to generate up to three additional frames per traditionally rendered frame, working with the complete suite of DLSS technologies to multiply frame rates by up to 8x over traditional brute-force rendering.
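
One plausible way the "up to 8x" figure composes, as a hedged sketch: frame generation outputs four frames per traditionally rendered frame (one rendered plus three generated), and upscaling cuts the cost of each rendered frame. The 2x upscaling factor below is an illustrative assumption, not a measured value:

```python
# Illustrative arithmetic only; actual gains vary by game and settings.
base_fps = 30                  # native rendering, no DLSS
sr_speedup = 2.0               # assumed frame-time win from upscaling
frames_out_per_rendered = 4    # 1 rendered + 3 AI-generated frames

effective_fps = base_fps * sr_speedup * frames_out_per_rendered
print(effective_fps, "fps, i.e.", effective_fps / base_fps, "x")
```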

This massive performance improvement on GeForce RTX 50 Series graphics cards and laptops enables gamers to max out visuals at the highest resolutions and play at incredible frame rates.

In addition, Lost Soul Aside, Mecha BREAK, Phantom Blade Zero, Stellar Blade, Tides of Annihilation and Wild Assault will launch with DLSS 4, giving GeForce RTX gamers the definitive PC experience in each title. Learn more.

Developers can get started with DLSS 4 through the DLSS 4 Unreal Engine plug-in.

‘Half-Life 2 RTX’ Demo Launch, RTX Remix Official Release

Half-Life 2 RTX is a community-made remaster of the iconic first-person shooter Half-Life 2. 

A playable Half-Life 2 RTX demo will be available on Tuesday, March 18, for free download from Steam for Half-Life 2 owners. The demo showcases Orbifold Studios’ work in the eerily sensational maps of Ravenholm and Nova Prospekt, with significantly improved assets and textures, full ray tracing, DLSS 4 with Multi Frame Generation and RTX neural rendering technologies.

Half-Life 2 RTX was made possible by NVIDIA RTX Remix, an open-source platform officially released today for modders to create stunning RTX remasters of classic games.

Use the platform now to join the 30,000+ modders who’ve experimented with enhancing hundreds of classic titles since its beta release last year, enabling over 1 million gamers to experience astonishing ray-traced mods.

NVIDIA ACE Technologies Enhance Game Characters With AI

The NVIDIA ACE suite of RTX-accelerated digital human technologies brings game characters to life with generative AI.

NVIDIA ACE autonomous game characters add autonomous teammates, nonplayer characters (NPCs) and self-learning enemies to games, creating new narrative possibilities and enhancing player immersion.

ACE autonomous game characters are debuting in two titles this month:

In inZOI, “Smart Zoi” NPCs will respond more realistically and intelligently to their environment based on their personalities. The game launches with NVIDIA ACE-based characters on Friday, March 28.

And in NARAKA: BLADEPOINT MOBILE PC VERSION, on-device NVIDIA ACE-powered teammates will help players battle enemies, hunt for loot and fight for victory starting Thursday, March 27.

Developers can start building with ACE today.

Join NVIDIA at GDC.

Relive the Magic as GeForce NOW Brings More Blizzard Gaming to the Cloud

Bundle up — GeForce NOW is bringing a flurry of Blizzard titles to its ever-expanding library.

Prepare to weather epic gameplay in the cloud, tackling the genres of real-time strategy (RTS), multiplayer online battle arena (MOBA) and more. Classic Blizzard titles join GeForce NOW, including Heroes of the Storm, Warcraft Rumble and three titles from the Warcraft: Remastered series.

They’re all part of 11 games joining the cloud this week, atop the latest update for hit game Zenless Zone Zero from miHoYo.

Blizzard Heats Things Up

Heroes of the Storm on GeForce NOW
Heroes (and save data) never die in the cloud.

Heroes of the Storm, Blizzard’s unique take on the MOBA genre, offers fast-paced team battles across diverse battlegrounds. The game features a roster of iconic Blizzard franchise characters, each with customizable talents and abilities. Heroes of the Storm emphasizes team-based gameplay with shared experiences and objectives, making it more accessible to newcomers while providing depth for experienced players.

Warcraft Rumble on GeForce NOW
The cloud is rumbling.

In Warcraft Rumble, a mobile action-strategy game set in the Warcraft universe, players collect and deploy miniature versions of the series’ beloved characters. The game offers a blend of tower defense and RTS elements as players battle across various modes, including a single-player campaign, player vs. player matches and cooperative dungeons.

Warcraft Remastered on GeForce NOW
Old-school cool, new-school graphics.

The Warcraft Remastered collection gives the classic RTS titles a modern twist with updated visuals and quality-of-life improvements. Warcraft: Remastered and Warcraft II: Remastered offer enhanced graphics while maintaining the original gameplay, allowing players to toggle between classic and updated visuals. Warcraft III: Reforged includes new graphics options and multiplayer features. These remasters provide nostalgia for longtime fans and an ideal opportunity for new players to experience the iconic strategy games that shaped the genre.

New Games, No Wait

Zenless Zone Zero update 1.6 Among the Forgotten Ruins on GeForce NOW
New agents, new adventures.

The popular Zenless Zone Zero gets its 1.6 update, “Among the Forgotten Ruins,” now available for members to stream without waiting around for updates or downloads. This latest update brings three new playable agents: Soldier 0-Anby, Pulchra and Trigger. Players can explore two new areas, Port Elpis and Reverb Arena, as well as try out the “Hollow Zero-Lost Void” mode. The update also introduces a revamped Decibel system for more strategic gameplay.

Look for the following games available to stream in the cloud this week:

  • Citizen Sleeper 2: Starward Vector (Xbox, available on PC Game Pass)
  • City Transport Simulator: Tram (Steam)
  • Dave the Diver (Steam)
  • Heroes of the Storm (Battle.net)
  • Microtopia (Steam)
  • Orcs Must Die! Deathtrap (Xbox, available on PC Game Pass)
  • Potion Craft: Alchemist Simulator (Steam)
  • Warcraft I: Remastered (Battle.net)
  • Warcraft II: Remastered (Battle.net)
  • Warcraft III: Reforged (Battle.net)
  • Warcraft Rumble (Battle.net)

What are you planning to play this weekend? Let us know on X or in the comments below.



Utah to Advance AI Education, Training

A new AI education initiative in the State of Utah, developed in collaboration with NVIDIA, is set to advance the state’s commitment to workforce training and economic growth.

The public-private partnership aims to equip universities, community colleges and adult education programs across Utah with the resources to develop skills in generative AI.

“AI will continue to grow in importance, affecting every sector of Utah’s economy,” said Spencer Cox, governor of Utah. “We need to prepare our students and faculty for this revolution. Working with NVIDIA is an ideal path to help ensure that Utah is positioned for AI growth in the near and long term.”

As part of the new initiative, Utah’s educators can gain certification through the NVIDIA Deep Learning Institute University Ambassador Program. The program offers high-quality teaching kits, extensive workshop content and access to NVIDIA GPU-accelerated workstations in the cloud.

By empowering educators with the latest AI skills and technologies, the initiative seeks to create a competitive advantage for Utah’s entire higher education system.

“We believe that AI education is more than a pathway to innovation — it’s a foundation for solving some of the world’s most pressing challenges,” said Manish Parashar, director of the University of Utah Scientific Computing and Imaging (SCI) Institute, which leads the One-U Responsible AI Initiative. “By equipping students and researchers with the tools to explore, understand and create with AI, we empower them to be able to drive advancements in medicine, engineering and beyond.”

The initiative will begin with the Utah System of Higher Education (USHE) and several other universities in the state, including the University of Utah, Utah State University, Utah Valley University, Weber State University, Utah Tech University, Southern Utah University, Snow College and Salt Lake Community College.

Setting Up Students and Professionals for Success

The Utah AI education initiative will benefit students entering the job market and working professionals by helping them expand their skill sets beyond community college or adult education courses.

Utah state agencies are exploring how internship and apprenticeship programs can offer students hands-on experience with AI skills, helping bridge the gap between education and industry needs. This initiative aligns with Utah’s broader goals of fostering a tech-savvy workforce and positioning the state as a leader in AI innovation and application.

As AI continues to evolve and gain prevalence across industries, Utah’s proactive approach to equipping educators and students with resources and training will help prepare its workforce for the future of technology, sharpening its competitive edge.


Oscars Gold: NVIDIA Researchers Honored for Advancing the Art and Science of Filmmaking

For the past 16 years, NVIDIA technologies have been working behind the scenes of every Academy Award-nominated film for Best Visual Effects.

This year, three NVIDIA researchers — Essex Edwards, Fabrice Rousselle and Timo Aila — have been honored with Scientific and Technical Awards by the Academy of Motion Picture Arts and Sciences for their groundbreaking contributions to the film industry. Their innovations in simulation, denoising and rendering are helping shape the future of visual storytelling, empowering filmmakers to create even more breathtaking and immersive worlds.

Image courtesy of DNEG © 2024 Warner Bros. Ent. and Legendary. All rights reserved. GODZILLA TM & © Toho Co., Ltd.

Ziva VFX: Bringing Digital Characters to Life

Essex Edwards received a Technical Achievement Award, alongside James Jacobs, Jernej Barbic, Crawford Doran and Andrew van Straten, for his design and development of Ziva VFX. This cutting-edge system allows artists to construct and simulate human muscles, fat, fascia and skin for digital characters with an intuitive, physics-based approach.

Providing a robust solver and an artist-friendly interface, Ziva VFX transformed the ways studios bring photorealistic and animated characters to the big screen and beyond.

Award-winning visual effects and animation studio DNEG is continuing to develop Ziva VFX to further enhance its creature pipeline.

“Ziva VFX was the result of a team of artists and engineers coming together and making thousands of really good small design decisions over and over for years,” said Edwards.

Disney’s ML Denoiser: Revolutionizing Rendering

Fabrice Rousselle was honored with a Scientific and Engineering Award, alongside Thijs Vogels, David Adler, Gerhard Röthlin and Mark Meyer, for his work on Disney’s ML Denoiser. This advanced machine learning denoiser introduced a pioneering kernel-predicting convolutional network, ensuring temporal stability in rendered images for higher-quality graphics.

Originally developed to enhance the quality of animated films, this breakthrough technology has since become an essential tool in live-action visual effects and high-end rendering workflows. It helps remove noise, sharpens images and speeds up rendering, allowing artists to work faster while achieving higher quality.

Since 2018, Disney’s state-of-the-art denoiser powered by machine learning (ML) has been used in over 100 films, including “Toy Story 4,” “Ralph Breaks the Internet,” and “Avengers: Endgame.”

The denoiser was developed by Disney Research, ILM, Pixar and Walt Disney Animation — the result of a massive cross-studio effort helping to push the boundaries of visual storytelling for studios across the industry.

In this extreme example, rendered with an average of just four samples per pixel, Disney’s ML Denoiser does a remarkable job. Inside Out 2 © Disney/Pixar

Intel Open Image Denoise: Advancing AI-Powered Image Processing

Timo Aila received a Technical Achievement Award, alongside Attila T. Áfra, for his pioneering contributions to AI image denoising. Aila’s early work at NVIDIA focused on the U-Net architecture, which Áfra used in Intel Open Image Denoise — an open-source library that provides an efficient, high-quality solution for AI-driven denoising in rendering.

By preserving fine details while significantly reducing noise, Intel Open Image Denoise has become a vital component in real-time and offline rendering across the industry.

“Path tracing has an inherent noise problem, and in the early days of deep learning, we started looking for architectures that could help,” Aila said. “We turned to denoising autoencoders, and the pivotal moment was when we introduced skip connections. Everything began to work, from fixing JPEG compression artifacts to eliminating the kind of Monte Carlo noise that occurs in path-traced computer graphics. This breakthrough led to the production of cleaner, more realistic images in rendering pipelines.”

Pushing the Boundaries of Visual Storytelling

With these latest honors, Edwards, Rousselle and Aila join the many NVIDIA researchers who have been recognized by the Academy for their pioneering contributions to filmmaking.

Jos Stam accepting his award at the 78th Sci-Tech Awards ceremony.

Over the years, 14 additional NVIDIA researchers have received Scientific and Technical Awards, reflecting NVIDIA’s significant contributions to the art and science of motion pictures through cutting-edge research in AI, simulation and real-time rendering.

This group includes Christian Rouet, Runa Loeber and NVIDIA’s advanced rendering team, Michael Kass, Jos Stam, Jonathan Cohen, Michael Kowalski, Matt Pharr, Joe Mancewicz, Ken Museth, Charles Loop, Ingo Wald, Dirk Van Gelder, Gilles Daviet, Luca Fascione and Christopher Jon Horvath.

The awards ceremony will take place on Tuesday, April 29, at the Academy Museum of Motion Pictures in Los Angeles.

Learn more about NVIDIA Research, AI, simulation and rendering at NVIDIA GTC, a global AI conference taking place March 17-21 at the San Jose Convention Center and online. Register now to join a conference track dedicated to media and entertainment.

Main feature courtesy of DNEG © 2024 Warner Bros. Ent. and Legendary. All Rights Reserved. GODZILLA TM & © Toho Co., Ltd.


‘Monster Hunter Wilds’ Charges Onto GeForce NOW

Time for a roaring-good time with Capcom’s hit Monster Hunter Wilds. GeForce NOW members can hunt even the largest, most daunting monsters with the sharpest clarity, armed with a GeForce RTX 4080-class gaming rig in the cloud.

Plus, jump into mind-bending adventures with Split Fiction from Hazelight Studios, an action-adventure experience that will keep players on the edges of their seats with plenty of unexpected twists.

It’s all part of the eight games available to stream in the cloud this week.

The Hunt Begins

Monster Hunter Wilds on GeForce NOW
Forget Bigfoot — hunt 40-foot monsters in the cloud instead.

Happy hunting in the cloud. The unbridled force of nature runs wild and relentless in Monster Hunter Wilds, with environments transforming drastically from one moment to the next. This is a story of monsters and humans and their struggles to live in harmony in a world of duality. Members can fulfill their duties as a Hunter by tracking and defeating powerful monsters and forging strong, new weapons and armor from materials harvested from the hunt, all while uncovering the connection between the people of the Forbidden Lands and the locales they inhabit.

GeForce NOW members can join the ultimate hunting experience without waiting for game downloads or worrying about hardware space. Stream the title across devices, from underpowered PCs and Macs to the Steam Deck and virtual-reality devices. Performance members get six-hour gaming sessions, and Ultimate members get eight-hour sessions. Performance and Ultimate members can also stream with NVIDIA DLSS and ray-tracing technologies for the highest frame rates. The game’s system requirements call for a GeForce NOW Performance or Ultimate membership — free members can upgrade today to join in on the action.

Join the Writer’s Block Party

Split Fiction on GeForce NOW
Ctrl+Alt+Adventure

Split Fiction from Hazelight Studios, creators of the award-winning It Takes Two, is now available to stream in the cloud. Split Fiction is a cooperative adventure where science fiction and fantasy authors Mio and Zoe are trapped in a simulation that’s stealing their stories.

Players must work together using unique abilities in ever-changing worlds, ranging from cyberpunk cities to enchanted forests, to overcome diverse challenges like taming dragons, mastering laser swords and solving gravity puzzles. The game also features innovative split-screen mechanics and a Friend’s Pass feature that enables one player to host the full game while their partner joins for free.

Split Fiction emphasizes teamwork and communication for a genre-bending, chaotic and imaginative co-op experience. Stream in the cloud today across devices with a GeForce NOW membership.

Hit the Gas on New Games

The Crew Motorfest S6 on GeForce NOW
Aloha, adrenaline.

Members can now stream the newest season of The Crew Motorfest. Season six brings significant updates, including a full slate of challenges, activities and surprises. Discover a new playground, striking new vehicles, world improvements and two new Playlists, including “Red Bull Speed Clash,” at the season’s launch. The enhanced player vs. player experience offers weekly themed Grand Races, vehicle handling improvements and new features like Photo Quest fast travel. Enjoy an even more immersive open-world driving experience across the Hawaiian islands of O’ahu and Maui with the wings of Red Bull, streaming on GeForce NOW.

Look for the following games available to stream in the cloud this week:

What are you planning to play this weekend? Let us know on X or in the comments below.


Roboflow Helps Unlock Computer Vision for Every Kind of AI Builder

Ninety percent of information transmitted to the human brain is visual. The importance of sight in understanding the world makes computer vision essential for AI systems.

By simplifying computer vision development, startup Roboflow helps bridge the gap between AI and people looking to harness it. Trusted by over a million developers and half of the Fortune 100 companies, Roboflow’s mission is to make the world programmable through computer vision. Roboflow Universe is home to the largest collection of open-source computer vision datasets and models.

Cofounder and CEO Joseph Nelson joined the NVIDIA AI Podcast to discuss how Roboflow empowers users in manufacturing, healthcare and automotive to solve complex problems with visual AI.

A member of the NVIDIA Inception program for cutting-edge startups, Roboflow streamlines model training and deployment, helping organizations extract value from images and video using computer vision. For example, using the technology, automotive companies can improve production efficiency, and scientific researchers can identify microscopic cell populations.

Over $50 trillion in global GDP is dependent on applying AI to problems in industrial settings, and NVIDIA is working with Roboflow to deliver those solutions. Nelson also shares insights from his entrepreneurial journey, emphasizing perseverance, adaptability and community in building a mission-driven company. Impactful technology isn’t just about innovation, he says. It’s about making powerful tools accessible to the people solving real problems.

Looking ahead, Nelson highlights the potential of multimodal AI, where vision integrates with other data types to unlock new possibilities, and the importance of running models on the edge, especially on real-time video. Learn more about the latest advancements in visual agents and edge computing at NVIDIA GTC, a global AI conference taking place March 17-21 in San Jose, California.

Time Stamps

2:03 – Nelson explains Roboflow’s aim to make the world programmable through computer vision.

7:26 – Real-world applications of computer vision to improve manufacturing efficiency, quality control and worker safety.

22:15 – How multimodality allows AI to be more intelligent.

29:43 – Teasing Roboflow’s upcoming announcements at GTC.

33:01 – Lessons learned and perspectives on leadership, mission-driven work and what it takes to scale a company successfully.

You Might Also Like…

How World Foundation Models Will Advance Physical AI With NVIDIA’s Ming-Yu Liu

AI models that can accurately simulate and predict outcomes in physical, real-world environments will enable the next generation of physical AI systems. Ming-Yu Liu, vice president of research at NVIDIA and an IEEE Fellow, explains the significance of world foundation models — powerful neural networks that can simulate physical environments.

Snowflake’s Baris Gultekin on Unlocking the Value of Data With Large Language Models

Snowflake is using AI to help enterprises transform data into insights and applications. Baris Gultekin, head of AI at Snowflake, explains how the company’s AI Data Cloud platform separates the storage of data from compute, enabling organizations across the world to connect via cloud technology and work on a unified platform.

NVIDIA’s Annamalai Chockalingam on the Rise of LLMs

LLMs are in the spotlight, capable of tasks like generation, summarization, translation, instruction and chatting. Annamalai Chockalingam, senior product manager of developer marketing at NVIDIA, discusses how a combination of these modalities and actions can build applications to solve any problem.

Subscribe to the AI Podcast

Get the AI Podcast through Amazon Music, Apple Podcasts, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, SoundCloud, Spotify, Stitcher and TuneIn.
