These days, just about everyone is a content creator. But can generative AI help people create high-quality films and other content affordably? Find out from Pinar Seyhan Demirdag, cofounder and CEO of Cuebric, in her conversation with NVIDIA AI Podcast host Noah Kravitz.
Cuebric is on a mission to offer new solutions in filmmaking and content creation through immersive, two-and-a-half-dimensional cinematic environments. Its AI-powered application aims to help creators quickly bring their ideas to life, making high-quality production more accessible.
Demirdag discusses how Cuebric uses generative AI to enable the creation of engaging environments affordably. Listen in to find out about the current landscape of content creation, the role of AI in simplifying the creative process, and Cuebric’s participation in NVIDIA’s GTC technology conference.
1:15: Getting to know Pinar Seyhan Demirdag and Cuebric
2:30: The beginnings and goals of Cuebric
4:45: How Cuebric’s AI application works for filmmakers
9:00: Advantages of AI in content creation
13:20: Making high-quality production budget-friendly
17:35: The future of AI in creative endeavors
22:00: Cuebric at NVIDIA GTC
AI could help students work smarter, not harder. Anant Agarwal, founder of edX and chief platform officer at 2U, shares his vision for the future of online education and the impact of AI in revolutionizing the learning experience.
Joe Glover, provost and senior vice president of academic affairs at the University of Florida, discusses the university’s efforts to implement AI across all aspects of higher education, including a public-private partnership with NVIDIA that has helped transform UF into one of the leading AI universities in the country.
Cambridge-1, the U.K.’s most powerful supercomputer, ranks among the world’s top three most energy-efficient supercomputers and was built to help healthcare researchers make new discoveries. Marc Hamilton, vice president of solutions architecture and engineering at NVIDIA, discusses how he remotely oversaw its construction.
Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.
YouTube content creator Ralph Panebianco really, really loves video games.
Since getting an original Nintendo Entertainment System at the age of four, Panebianco, this week’s featured In the NVIDIA Studio creator, has spent much of his free time playing video games. He pursued a career in gaming in his native country of Australia before pivoting to content creation, opening a YouTube channel called Skill Up, where he reviews the latest video games.
“When I wasn’t playing video games, I was reading about them, and now I get to talk about them for a living,” he said.
And calling all art fans: the latest Studio Standouts video features film noir-themed artwork brought to life with dramatic, monochromatic flair.
Video Editing Skillz
Panebianco works with his partner to create in-depth, insightful reviews of the latest video games on his Skill Up YouTube channel, which has garnered nearly 1 million subscribers. Below is a recent video reviewing Pacific Drive, a title available on the NVIDIA GeForce NOW cloud gaming service, powered by GeForce RTX GPUs.
“Creatively, we don’t view game reviews as functional buying guides with a list of pros and cons,” said Panebianco. “We view reviews as a chance to crack a game open and really show the audience what makes it tick. They’re sort of mini-essays on game design, delving deep into why specific game mechanics do or don’t work.”
The content creation process begins with booting up the game on his PC, powered by the recently launched GeForce RTX 4080 SUPER graphics card. This allows the Skill Up team to tap RTX ray tracing for realistic lighting and NVIDIA DLSS, a breakthrough technology that uses AI to create additional frames and improve image quality.
He records video footage primarily using GeForce Experience, a companion to NVIDIA GeForce GPUs that enables users to capture assets, optimize game settings and keep drivers up to date, among other features.
When footage requires high dynamic range, the team uses the open-source OBS Studio software with AV1 hardware encoding, which is on average 40% more efficient than H.264 and delivers higher quality than the encoders on competing GPUs.
“The AV1 encoder is ridiculously efficient in terms of file size,” he said.
NVIDIA GPUs and OBS Studio software work in synergy.
Once the footage is ready, Panebianco writes a video script in Microsoft Word and then records himself, using Audacity. He uses the AI-powered NVIDIA Broadcast app, free for RTX GPU owners, to eliminate background noise and achieve professional studio quality.
Panebianco then hands off the files to his editor for production in Adobe Premiere Pro, where a number of GPU-accelerated, AI-powered features such as Enhance Speech, Auto Reframe and Unsharp Mask help speed the video editing process.
NVIDIA’s GPU-accelerated video decoder (NVDEC) enables smooth playback and scrubbing of high-resolution videos.
Next, Panebianco exports the final files twice as fast thanks to the dual AV1 encoders in his RTX GPU. Lastly, his editor creates a YouTube thumbnail in Adobe Photoshop, and then the video is ready for publishing.
Adobe Photoshop has over 30 GPU-accelerated features that help modify and adjust images smoothly and quickly.
“Almost my entire workflow was enhanced by NVIDIA’s hardware,” Panebianco shared. “It’s not just about the hardware making for efficient encoding or lightning-fast, hardware-enabled rendering — it’s about the end-to-end toolset.”
Panebianco has words of wisdom for aspiring content creators.
“Worry less about the numbers and more about the quality,” he said. “The metrics grind pays little in the way of dividends, but putting out truly excellent content is an almost failure-proof path to growth.”
The spirit of Grace Hopper will live on at NVIDIA GTC.
Accelerated systems using powerful processors — named in honor of the pioneer of software programming — will be on display at the global AI conference running March 18-21, ready to take computing to the next level.
System makers will show more than 500 servers in multiple configurations across 18 racks, all packing NVIDIA GH200 Grace Hopper Superchips. They’ll form the largest display at NVIDIA’s booth in the San Jose Convention Center, filling the MGX Pavilion.
MGX Speeds Time to Market
NVIDIA MGX is a blueprint for building accelerated servers with any combination of GPUs, CPUs and data processing units (DPUs) for a wide range of AI, high-performance computing and NVIDIA Omniverse applications. It’s a modular reference architecture for use across multiple product generations and workloads.
GTC attendees can get an up-close look at MGX models tailored for enterprise, cloud and telco-edge uses, such as generative AI inference, recommenders and data analytics.
The pavilion will showcase accelerated systems packing single and dual GH200 Superchips in 1U and 2U chassis, linked via NVIDIA BlueField-3 DPUs and NVIDIA Quantum-2 400Gb/s InfiniBand networks over LinkX cables and transceivers.
The systems support industry standards for 19- and 21-inch rack enclosures, and many provide E1.S bays for nonvolatile storage.
Grace Hopper in the Spotlight
Here’s a sampler of MGX systems now available:
ASRock RACK’s MECAI, measuring 450 x 445 x 87mm, accelerates AI and 5G services in constrained spaces at the edge of telco networks.
ASUS’s MGX server, the ESC NM2N-E1, slides into a rack that holds up to 32 GH200 processors and supports air- and water-cooled nodes.
Foxconn provides a suite of MGX systems, including a 4U model that accommodates up to eight NVIDIA H100 NVL PCIe Tensor Core GPUs.
GIGABYTE’s XH23-VG0-MGX can accommodate plenty of storage in its six 2.5-inch Gen5 NVMe hot-swappable bays and two M.2 slots.
Inventec’s systems can slot into 19- and 21-inch racks and use three different implementations of liquid cooling.
Lenovo supplies a range of 1U, 2U and 4U MGX servers, including models that support direct liquid cooling.
Pegatron’s air-cooled AS201-1N0 server packs a BlueField-3 DPU for software-defined, hardware-accelerated networking.
QCT can stack 16 of its QuantaGrid D74S-IU systems, each with two GH200 Superchips, into a single QCT QoolRack.
Supermicro’s ARS-111GL-NHR with nine hot-swappable fans is part of a portfolio of air- and liquid-cooled GH200 and NVIDIA Grace CPU systems.
Wiwynn’s SV7200H, a 1U dual GH200 system, supports a BlueField-3 DPU and a liquid-cooling subsystem that can be remotely managed.
Wistron’s MGX servers are 4U GPU systems for AI inference and mixed workloads, supporting up to eight accelerators in one system.
The new servers are in addition to three accelerated systems using MGX announced at COMPUTEX last May — Supermicro’s ARS-221GL-NR using the Grace CPU and QCT’s QuantaGrid S74G-2U and S74GM-2U powered by the GH200.
Grace Hopper Packs Two in One
System builders are adopting the hybrid processor because it packs a punch.
GH200 Superchips combine a high-performance, power-efficient Grace CPU with a muscular NVIDIA H100 GPU. They share hundreds of gigabytes of memory over a fast NVIDIA NVLink-C2C interconnect.
The result is a processor and memory complex well-suited to take on today’s most demanding jobs, such as running large language models. They have the memory and speed needed to link generative AI models to data sources that can improve their accuracy using retrieval-augmented generation, aka RAG.
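To make the RAG idea concrete, here is a minimal sketch of the pattern (not NVIDIA's implementation; a toy bag-of-words scorer stands in for a real embedding model): retrieve the documents most relevant to a query, then prepend them to the prompt so the language model answers from that data.

```python
from collections import Counter
import math

def embed(text):
    """Toy bag-of-words 'embedding'; real systems use a neural embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, docs):
    """Prepend retrieved context so the model answers from source data."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "GH200 Superchips pair a Grace CPU with an H100 GPU over NVLink-C2C.",
    "Stardew Valley is a farming simulation game.",
    "Retrieval-augmented generation grounds model answers in source documents.",
]
prompt = build_prompt("What does a GH200 Superchip combine?", docs)
```

The large shared CPU-GPU memory of a GH200 system matters for exactly this workflow: the embedding index and the language model can sit close together instead of shuttling data across a narrow bus.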
Recommenders Run 4x Faster
In addition, the GH200 Superchip delivers greater efficiency and up to 4x more performance than using the H100 GPU with traditional CPUs for tasks like making recommendations for online shopping or media streaming.
In its debut on the MLPerf industry benchmarks last November, GH200 systems ran all data center inference tests, extending the already leading performance of H100 GPUs.
In all these ways, GH200 systems are taking to new heights a computing revolution their namesake helped start on the first mainframe computers more than seven decades ago.
Register for NVIDIA GTC, the conference for the era of AI, running March 18-21 at the San Jose Convention Center and virtually.
And get the 30,000-foot view from NVIDIA founder and CEO Jensen Huang in his GTC keynote.
Editor’s note: This post is a part of our Meet the Omnivore series, which features individual creators and developers who use OpenUSD to build tools, applications and services for 3D workflows and physically accurate virtual worlds.
A failed furniture-shopping trip turned into a business idea for Steven Gay, cofounder and CEO of company Mode Maison.
Gay grew up in Houston and studied at the University of Texas before working in New York as one of the youngest concept designers at Ralph Lauren. He was inspired to start his own company after a long day of trying — and failing — to pick out a sofa.
The experience illuminated how the luxury home-goods industry has traditionally lagged in adopting digital technologies, especially those for creating immersive, interactive experiences for consumers.
Gay founded Mode Maison in 2018 with the goal of solving this challenge and paving the way for scalability, creativity and a generative future in retail. Using the Universal Scene Description framework, aka OpenUSD, and the NVIDIA Omniverse platform, Gay, along with Mode Maison Chief Technology Officer Jakub Cech and the rest of the Mode Maison team, is helping enhance and digitalize the entire product lifecycle — from design and manufacturing to consumer experiences.
Register for NVIDIA GTC, which takes place March 18-21, to hear how leading companies are using the latest innovations in AI and graphics. And join us for OpenUSD Day to learn how to build generative AI-enabled 3D pipelines and tools using Universal Scene Description.
Gay and his team developed a photometric scanning system called Total Material Appearance Capture (TMAC), which offers an unbiased, physically based approach to digitizing the material world, enabled by embedded real-world sensors.
TMAC captures proprietary data and the composition of any material, then turns it into input that serves as a single source of truth, which can be used for creating a fully digitized retail model. Using the system, along with OpenUSD and NVIDIA Omniverse, Mode Maison customers can create highly accurate digital twins of any material or product.
“By enabling this, we’re effectively collapsing and fostering a complete integration across the entire product lifecycle process — from design and production to manufacturing to consumer experiences and beyond,” said Gay.
Mode Maison developed a photometric scanning system called Total Material Appearance Capture.
Streamlining Workflows and Enhancing Productivity With Digital Twins
Previously, Mode Maison faced significant challenges in creating physically based, highly flexible and scalable digital materials. The limitations were particularly noticeable when rendering complex materials and textures, or integrating digital models into cohesive, multilayered environments.
Using Omniverse helped Gay and his team overcome these challenges by offering advanced rendering capabilities, physics simulations and extensibility for AI training that unlock new possibilities in digital retail.
Before using Omniverse and OpenUSD, Mode Maison used disjointed processes for digital material capture, modeling and rendering, often leading to inconsistencies, the inability to scale and minimal interoperability. After integrating Omniverse, the company experienced a streamlined, coherent workflow where high-fidelity digital twins can be created with greater efficiency and interoperability.
The team primarily uses Autodesk 3ds Max for design, and they import the 3D data using Omniverse Connectors. Gay says OpenUSD is playing an increasingly critical role in its workflows, especially when developing composable, flexible, interoperable capabilities across asset creation.
This enhanced pipeline starts with capturing high-fidelity material data using TMAC. The data is then processed and formatted into OpenUSD for the creation of physically based, scientifically accurate, high-fidelity digital twins.
“OpenUSD allows for an unprecedented level of collaboration and interoperability in creating complex, multi-layered capabilities and advanced digital materials,” Gay said. “Its ability to seamlessly integrate diverse digital assets and maintain their fidelity across various applications is instrumental in creating realistic, interactive digital twins for retail.”
OpenUSD and Omniverse have sped up Mode Maison’s and its clients’ ability to bring products to market, reduced the costs associated with building and modifying digital twins, and enhanced productivity through streamlined creation.
“Our work represents a major step toward a future where digital and physical realities will be seamlessly integrated,” said Gay. “This shift enhances consumer engagement and paves the way for more sustainable business practices by reducing the need for physical prototyping while enabling more precise manufacturing.”
As for emerging technological advancements in digital retail, Gay says AI will play a central role in creating hyper-personalized design, production, sourcing and front-end consumer experiences — all while reducing carbon footprints and paving the way for a more sustainable future in retail.
Learn more about how OpenUSD and NVIDIA Omniverse are transforming industries at NVIDIA GTC, a global AI conference running March 18-21, online and at the San Jose Convention Center.
Join OpenUSD Day at GTC on Tuesday, March 19, to learn more about building generative AI-enabled 3D pipelines and tools using USD.
With generative AI and hybrid work environments becoming the new standard, nearly every professional, whether a content creator, researcher or engineer, needs a powerful, AI-accelerated laptop to help tackle their industry’s toughest challenges — even on the go.
The new NVIDIA RTX 500 and 1000 Ada Generation Laptop GPUs will be available in new, highly portable mobile workstations, expanding the NVIDIA Ada Lovelace architecture-based lineup, which includes the RTX 2000, 3000, 3500, 4000 and 5000 Ada Generation Laptop GPUs.
AI is rapidly being adopted to drive efficiencies across professional design and content creation workflows and everyday productivity applications, underscoring the importance of having powerful local AI acceleration and sufficient processing power in systems.
The next generation of mobile workstations with Ada Generation GPUs, including the RTX 500 and 1000 GPUs, will include both a neural processing unit (NPU), a component of the CPU, and an NVIDIA RTX GPU, which includes Tensor Cores for AI processing. The NPU helps offload light AI tasks, while the GPU provides up to an additional 682 TOPS of AI performance for more demanding day-to-day AI workflows.
The higher level of AI acceleration delivered by the GPU is useful for tackling a wide range of AI-based tasks, such as video conferencing with high-quality AI effects, streaming videos with AI upscaling, or working faster with generative AI and content creation applications.
The new RTX 500 GPU delivers up to 14x the generative AI performance for models like Stable Diffusion, up to 3x faster photo editing with AI and up to 10x the graphics performance for 3D rendering compared with a CPU-only configuration — bringing massive leaps in productivity for traditional and emerging workflows.
Enhancing Professional Workflows Across Industries
The RTX 500 and 1000 GPUs bring AI-elevated workflows to laptop users everywhere in compact designs. Video editors can streamline tasks such as removing background noise with AI. Graphic designers can bring blurry images to life with AI upscaling. And professionals can work on the go while using AI for higher-quality video conferencing and streaming experiences.
For users looking to tap AI for advanced rendering, data science and deep learning workflows, NVIDIA also offers the RTX 2000, 3000, 3500, 4000 and 5000 Ada Generation Laptop GPUs. 3D creators can use AI denoising and deep learning super sampling (DLSS) to visualize photorealistic renders in real time. Businesses can query their internal knowledge base with chatbot-like interfaces using local large language models. And researchers and scientists can experiment with data science, AI model training and tuning, and development projects.
Performance and Portability With NVIDIA RTX
The RTX 500 and 1000 GPUs, based on the NVIDIA Ada Lovelace architecture, bring the latest advancements to thin and light laptops, including:
Third-generation RT Cores: Up to 2x the ray tracing performance of the previous generation for high-fidelity, photorealistic rendering.
Fourth-generation Tensor Cores: Up to 2x the throughput of the previous generation, accelerating deep learning training, inferencing and AI-based creative workloads.
Ada Generation CUDA cores: Up to 30% higher single-precision floating-point (FP32) throughput than the previous generation, delivering significant performance improvements in graphics and compute workloads.
Dedicated GPU memory: 4GB GPU memory with the RTX 500 GPU and 6GB with the RTX 1000 GPU allows users to run demanding 3D and AI-based applications, as well as tackle larger projects, datasets and multi-app workflows.
DLSS 3: Delivers a breakthrough in AI-powered graphics, significantly boosting performance by generating additional high-quality frames.
AV1 encoder: Eighth-generation NVIDIA encoder, aka NVENC, with AV1 support is up to 40% more efficient than H.264, enabling new possibilities for broadcasting, streaming and video calling.
Availability
The new NVIDIA RTX 500 and 1000 Ada Generation Laptop GPUs will be available this spring in mobile workstations from global manufacturing partners including Dell Technologies, HP, Lenovo and MSI.
Editor’s note: This post is part of Into the Omniverse, a series focused on how artists, developers and enterprises can transform their workflows using the latest advances in OpenUSD and NVIDIA Omniverse.
The combination of powerful 3D tools and groundbreaking technologies can transform the way designers bring their visions to life — and Universal Scene Description, or OpenUSD, is helping enable that synergy. It’s the framework at the foundation of the NVIDIA Omniverse platform, which enables the development of OpenUSD-based tools and 3D workflows.
Rhinoceros, commonly known as Rhino, or Rhino 3D, is a powerful computer-aided design (CAD) and 3D modeling software used across industries — from education and jewelry design to architecture and marine modeling. The most recent software release includes support for OpenUSD export, among other updates, establishing it among the many applications embracing the new 3D standard.
Turning Creativity Into CAD Reality
Tanja Langgner, 3D artist and illustrator, grew up in Austria and now lives in a converted pigsty in the English countryside. With a background in industrial design, she’s had the opportunity to work with a slew of design agencies across Europe.
For the past decade, she’s undertaken freelance work in production and visualization, helping clients with tasks ranging from concept design and ideation to CAD and 3D modeling. Often doing industrial design work, Langgner relies on Rhino to construct CAD models, whether for production evaluation or rendering purposes.
When faced with designs requiring intricate surface patterns, “Rhino’s Grasshopper, a parametric modeler, is excellent in creating complex parametric shapes,” she said.
Langgner is no stranger to OpenUSD. She uses it to transfer assets easily from one application to another, allowing her to visualize her work more efficiently. With the new Rhino update, she can now export OpenUSD files from Rhino, further streamlining her design workflow.
Mathew Schwartz, an assistant professor in architecture and design at the New Jersey Institute of Technology, also uses Rhino and OpenUSD in his 3D workflows. Schwartz’s research and design lab, SiBORG, focuses on understanding and improving design workflows, especially with regard to accessibility, human factors and automation.
With OpenUSD, he can combine his research, Python code, 3D environments and renders with his favorite tools in Omniverse.
Schwartz, along with Langgner, recently joined a community livestream, where he shared more details about his research — including navigation graphs that show, in real time, how someone using a wheelchair or crutches can move through a space. Drawing on his industrial design experience, he demonstrated computation in Rhino 3D and the use of generative AI for a seamless design process.
“With OpenUSD and Omniverse, we’ve been able to expand the scope of our research, as we can easily combine data analysis and visualization with the design process,” he said.
Learn more by watching the replay of the community livestream:
Rhino 3D Updates Simplify 3D Collaboration
Rhino 8, available now, brings significant enhancements to the 3D modeling experience. It enables the export of meshes, mesh vertex colors, physically based rendering materials and textures so users can seamlessly share and collaborate on 3D designs with enhanced visual elements.
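To give a sense of what such an export contains, below is a hypothetical, hand-written sketch of a minimal OpenUSD (.usda) file: a single quad mesh bound to a physically based preview material. All prim names and values are illustrative, not what Rhino actually emits.

```usda
#usda 1.0
(
    defaultPrim = "Panel"
)

def Xform "Panel"
{
    # A single quad exported as mesh geometry.
    def Mesh "Geom"
    {
        int[] faceVertexCounts = [4]
        int[] faceVertexIndices = [0, 1, 2, 3]
        point3f[] points = [(-1, -1, 0), (1, -1, 0), (1, 1, 0), (-1, 1, 0)]
        rel material:binding = </Panel/Mat>
    }

    # A physically based preview material with constant shading values.
    def Material "Mat"
    {
        token outputs:surface.connect = </Panel/Mat/Shader.outputs:surface>

        def Shader "Shader"
        {
            uniform token info:id = "UsdPreviewSurface"
            color3f inputs:diffuseColor = (0.65, 0.55, 0.45)
            float inputs:roughness = 0.4
            float inputs:metallic = 0.0
            token outputs:surface
        }
    }
}
```

Because the format is plain text and layered, downstream tools can reference or override these prims without rewriting the file, which is what makes this kind of cross-application collaboration possible.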
Modeling: New features, including PushPull direct editing, a ShrinkWrap function for creating watertight meshes around various geometries, enhanced control over subdivision surfaces with the SubD Crease control tool, and improved functionality for smoother surface fillets.
Drawing and illustration: Precision-boosting enhancements to clipping and sectioning operations, a new feature for creating reflected ceiling plans, major improvements to linetype options and enhanced UV mapping for better texture coordination.
Operating systems: A faster-than-ever experience for Mac users thanks to Apple silicon processors and Apple Metal display technology, along with significantly accelerated rendering with the updated Cycles engine.
Development: New Grasshopper components covering annotations, blocks, materials and user data, accompanied by a new, enhanced script editor.
Future Rhino updates will feature an expansion of export capabilities to include NURBS curves and surfaces, subdivision modeling and the option to import OpenUSD content.
The Rhino team actively seeks user feedback on desired import platforms and applications, continually working to make OpenUSD files widely accessible and adaptable across 3D environments.
Get Plugged Into the World of OpenUSD
Learn more about OpenUSD and meet experts at NVIDIA GTC, the conference for the era of AI, taking place March 18-21 at the San Jose Convention Center. Don’t miss:
Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.
The February NVIDIA Studio Driver, designed specifically to optimize creative apps, is now available for download. Developed in collaboration with app developers, Studio Drivers undergo extensive testing to ensure seamless compatibility with creative apps while enhancing features, automating processes and speeding workflows.
Creators can download the latest driver on the public beta of the new NVIDIA app, the essential companion for creators and gamers with NVIDIA GPUs in their PCs and laptops. The NVIDIA app beta is a first step to modernize and unify the NVIDIA Control Panel, GeForce Experience and RTX Experience apps.
The NVIDIA app offers easy access to the latest Studio Drivers, a suite of AI-powered Studio apps, games and more.
The NVIDIA app simplifies the process of keeping PCs updated with the latest NVIDIA drivers, enables quick discovery and installation of NVIDIA apps like NVIDIA Broadcast and NVIDIA Omniverse, unifies the GPU control center, and introduces a redesigned in-app overlay for convenient access to powerful recording tools. Download the NVIDIA app beta today.
Adobe Premiere Pro’s AI-powered Enhance Speech tool is now available in general release. Accelerated by NVIDIA RTX, the new feature removes unwanted noise and improves the quality of dialogue clips so they sound professionally recorded. It’s 75% faster on a GeForce RTX 4090 laptop GPU compared with an RTX 3080 Ti.
Have a Chat with RTX, the tech demo app that lets GeForce RTX owners personalize a large language model connected to their own content. Results are fast and secure since it runs locally on a Windows RTX PC or workstation. Download Chat with RTX today.
And this week In the NVIDIA Studio, filmmaker James Matthews shares his short film, Dive, which was created with an Adobe Premiere Pro-powered workflow supercharged by his ASUS ZenBook Pro NVIDIA Studio laptop with a GeForce RTX 4070 graphics card.
Going With the Flow
Matthews’ goal with Dive was to create a visual and auditory representation of what it feels like to get swallowed up in the creative editing process.
Talk about a dream content creation location.
“When I’m really deep into an edit, I sometimes feel like I’m fully immersed into the film and the editing process itself,” he said. “It’s almost like a flow state, where time stands still and you are one with your own creativity.”
To capture and visualize that feeling, Matthews used the power of his ASUS ZenBook Pro NVIDIA Studio laptop equipped with a GeForce RTX 4070 graphics card.
He started by brainstorming — listening to music and sketching conceptual images with pencil and paper. Then, Matthews added a song to his Adobe Premiere Pro timeline and created a shot list, complete with cuts and descriptions of focal range, speed, camera movement, lighting and other details.
Next, he planned location and shooting times, paying special attention to lighting conditions.
“I always have my Premiere Pro timeline up so I can really see and feel what I need to create from the images I originally drew while building the concept in my head,” Matthews said. “This helps get the pacing of each shot right, by watching it back and possibly adding it into the timeline for a test.”
Then, Matthews started editing the footage in Premiere Pro, aided by his Studio laptop. The dedicated NVIDIA video decoder (NVDEC) on his GPU enabled buttery-smooth playback and scrubbing of his high-resolution, multi-stream footage, saving countless hours.
Matthews’ RTX GPU accelerated a variety of AI-powered Adobe video editing tools, such as Enhance Speech, Scene Edit Detection and Auto Color, which applies color corrections with just a few clicks.
Finally, Matthews added sound design before exporting the final files twice as fast thanks to NVENC’s dual AV1 encoders.
“The entire edit used GPU acceleration,” he shared. “Effects in Premiere Pro, along with the NVENC video encoders on the GPU, unlocked a seamless workflow and essentially allowed me to get into my flow state faster.”
Top-tier games from publishing partners Bandai Namco Entertainment and Inflexion Games are joining GeForce NOW this week as the cloud streaming service’s fourth-anniversary celebrations continue.
Eleven new titles join the over 1,800 supported games in the GeForce NOW library, including Nightingale from Inflexion Games and Bandai Namco Entertainment’s Tales of Arise, Katamari Damacy REROLL and Klonoa Phantasy Reverie Series.
“Happy fourth anniversary, GeForce NOW!” cheered Jarrett Lee, head of publishing at Inflexion Games. “The platform’s ease of access and seamless performance comprise a winning combination, and we’re excited to see how cloud gaming evolves.”
Bigger Than Your Gaming Backlog
Games galore.
GeForce NOW offers over 1,800 games supported in the cloud, including over 100 free-to-play titles. That’s more than enough to play a different game every day for nearly five years.
The expansive GeForce NOW library supports games from popular digital stores Steam, Xbox — including supported PC Game Pass titles — Epic Games Store, Ubisoft Connect and GOG.com. From indie games to triple-A titles, there’s something for everyone to play.
The city of the future is in the cloud.
Explore sprawling open worlds in the neon-drenched streets of Night City in Cyberpunk 2077, or unearth ancient secrets in the vast landscapes of Assassin’s Creed Valhalla. Test skills against friends in the high-octane action of Apex Legends or Fortnite, strategize on the battlefield in Age of Empires IV or gather the ultimate party in Baldur’s Gate 3.
Build a dream farm in Stardew Valley, explore charming worlds in Hollow Knight or build a thriving metropolis in Cities: Skylines II.
Members can also catch the latest titles in the cloud, including the newly launched dark-fantasy adventure game The Inquisitor and Ubisoft’s Skull and Bones.
Dedicated rows in the GeForce NOW app help members find the perfect game to stream, and tags indicate when sales or downloadable content are available. GeForce NOW even has game-library syncing capabilities for Steam, Xbox and Ubisoft Connect so that supported games automatically sync to members’ cloud gaming libraries for easy access.
Access titles without waiting for them to download or worrying about system specs. Plus, Ultimate members gain exclusive access to gaming servers to get to their games faster.
Let’s Get Crafty
Entering new dimensions in the cloud.
Set out on an adventure into the mysterious and dangerous Fae Realms of Nightingale, the highly anticipated shared-world survival crafting game from Inflexion Games. Become an intrepid Realmwalker and explore, craft, build and fight across a visually stunning magical fantasy world inspired by the Victorian era.
Venture forth alone or with up to six other players in an online, shared world. The game features epic action, a variety of fantastical creatures and a Realm Card system that allows players to travel between realms and reshape landscapes.
Experience the magic of the Fae Realms in stunning resolution with RTX ON.
Roll On Over to the Cloud
Arise to the cloud and battle for the fate of two worlds.
Get ready for more fantasy, a touch of royalty and even some nostalgia with the latest Bandai Namco Entertainment titles coming to the cloud. Tales of Arise, Katamari Damacy REROLL, Klonoa Phantasy Reverie Series, PAC-MAN MUSEUM+ and PAC-MAN WORLD Re-PAC are now available for members to stream.
Embark on a mesmerizing journey in the fantastical world of Tales of Arise and unravel a gripping narrative that transcends the boundaries of imagination. Or roll into the whimsical, charming world of Katamari Damacy REROLL — control a sticky ball and roll up everything in its path to create colorful, celestial bodies.
Indulge in a nostalgic gaming feast with Klonoa Phantasy Reverie Series, PAC-MAN MUSEUM+ and PAC-MAN WORLD Re-PAC. The Klonoa series revitalizes dreamlike adventures, blending fantasy and reality for a captivating experience. Meanwhile, PAC-MAN MUSEUM+ invites players to munch through PAC-MAN’s iconic history, showcasing the timeless charm of the beloved yellow icon. For those seeking a classic world with a modern twist, PAC-MAN WORLD Re-PAC delivers an adventure packed with excitement and familiar ghosts.
Face Your Fate
The fate of mankind is in the cloud.
Dive into an adrenaline-fueled journey with Terminator: Dark Fate – Defiance, where strategic prowess decides the fate of mankind against machines. In a world taken over by machines, the greatest threats remaining may come not from the machines themselves but from other human survivors.
In four weeks, those will be among the most powerful words in your industry. But you won’t be able to use them if you haven’t been here.
NVIDIA’s GTC 2024 transforms the San Jose Convention Center into a crucible of innovation, learning and community from March 18-21, marking a return to in-person gatherings that can’t be missed.
Tech enthusiasts, industry leaders and innovators from around the world are set to present and explore over 900 sessions and close to 300 exhibits.
They’ll dive into the future of AI, computing and beyond, with contributions from some of the brightest minds at companies such as Amazon, Amgen, Character.AI, Ford Motor Co., Genentech, L’Oréal, Lowe’s, Lucasfilm and Industrial Light & Magic, Mercedes-Benz, Pixar, Siemens, Shutterstock, xAI and many more.
Among the most anticipated events is the Transforming AI Panel, featuring the original architects behind the concept that revolutionized the way we approach AI today: Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin.
All eight authors of “Attention Is All You Need,” the seminal 2017 NeurIPS paper that introduced the trailblazing transformer neural network architecture, will appear in person at GTC on a panel hosted by NVIDIA Founder and CEO Jensen Huang.
Located in the vibrant heart of Silicon Valley, GTC stands as a pivotal gathering where the convergence of technology and community shapes the future. This conference offers more than just presentations; it’s a collaborative platform for sharing knowledge and sparking innovation.
Exclusive Insights: Last year, Huang announced a “lightspeed” leap in computing and partnerships with giants like Microsoft to set the stage. This year, anticipate more innovations at the SAP Center, giving attendees a first look at the next transformative breakthroughs.
Networking Opportunities: GTC’s networking events are designed to transform casual encounters into pivotal career opportunities. Connect directly with industry leaders and innovators, making every conversation a potential gateway to your next big role or project.
Cutting-Edge Exhibits: Step into the future with exhibits that showcase the latest in AI and robotics. Beyond mere displays, these exhibits offer hands-on learning experiences, providing attendees with invaluable knowledge to stay ahead.
AI is spilling out in all directions, and GTC is the best way to capture it all. Pictured: The latest installation from AI artist Refik Anadol, whose work will be featured at GTC.
Diversity and Innovation: Begin your day at the Women In Tech breakfast. This, combined with unique experiences like generative AI art installations and street food showcases, feeds creativity and fosters innovation in a relaxed setting.
Learn From the Best: Engage with sessions led by visionaries from organizations such as Disney Research, Google DeepMind, Johnson & Johnson Innovative Medicine, Stanford University and beyond. These aren’t just lectures but opportunities to question, engage and turn insights into actionable knowledge that can shape your career trajectory.
Silicon Valley Experience: Embrace the energy of the world’s foremost tech hub. Inside the conference, GTC connects attendees with the latest technologies and minds. Beyond the show floor, it’s a gateway to building lasting relationships with leaders and thinkers across industries.
Seize the Future Now: Don’t just join a story, write one. Register now for GTC and be part of this transformative moment in AI at the epicenter of technological advancement.
NVIDIA, in collaboration with Google, today launched optimizations across all NVIDIA AI platforms for Gemma — Google’s state-of-the-art new lightweight 2 billion- and 7 billion-parameter open language models that can be run anywhere, reducing costs and speeding innovative work for domain-specific use cases.
Teams from the two companies worked closely to accelerate the performance of Gemma, which is built from the same research and technology used to create the Gemini models, using NVIDIA TensorRT-LLM, an open-source library for optimizing large language model inference. The optimizations apply when running on NVIDIA GPUs in the data center, in the cloud and on PCs with NVIDIA RTX GPUs.
This allows developers to target the installed base of over 100 million NVIDIA RTX GPUs available in high-performance AI PCs globally.
Developers can also run Gemma on NVIDIA GPUs in the cloud, including on Google Cloud’s A3 instances based on the H100 Tensor Core GPU and soon, NVIDIA’s H200 Tensor Core GPUs — featuring 141GB of HBM3e memory at 4.8 terabytes per second — which Google will deploy this year.
Enterprise developers can additionally take advantage of NVIDIA’s rich ecosystem of tools — including NVIDIA AI Enterprise with the NeMo framework and TensorRT-LLM — to fine-tune Gemma and deploy the optimized model in their production application.
Learn more about how TensorRT-LLM is revving up inference for Gemma, along with additional information for developers. This includes several model checkpoints of Gemma and the FP8-quantized version of the model, all optimized with TensorRT-LLM.
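To see why the FP8-quantized checkpoints matter, consider what quantization does to weight storage. The toy sketch below uses plain int8 scale quantization as a stand-in for FP8 (which is a distinct 8-bit floating-point format used by TensorRT-LLM); both trade a small amount of precision for a roughly 4x reduction in memory versus 32-bit weights. Everything here is illustrative, not the actual TensorRT-LLM implementation.

```python
# Toy illustration of why quantizing model weights to 8 bits cuts memory use.
# Real FP8 is a hardware floating-point format; this sketch uses simple int8
# scale quantization as a stand-in to show the same memory/accuracy trade-off.

import struct

def quantize_int8(weights):
    """Map float weights to int8 values plus a per-tensor scale factor."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from the quantized form."""
    return [v * scale for v in q]

weights = [0.52, -1.30, 0.07, 0.98, -0.45]
q, scale = quantize_int8(weights)
restored = dequantize_int8(q, scale)

fp32_bytes = len(weights) * struct.calcsize("f")  # 4 bytes per weight
int8_bytes = len(q)                               # 1 byte per weight
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(f"fp32: {fp32_bytes} B, int8: {int8_bytes} B, max error: {max_err:.4f}")
```

The quantization error is bounded by half the scale factor, which is why quality loss stays small while memory and bandwidth needs drop, a key reason 8-bit checkpoints fit comfortably on consumer RTX GPUs.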
Experience Gemma 2B and Gemma 7B directly from your browser on the NVIDIA AI Playground.
Gemma Coming to Chat With RTX
Adding support for Gemma soon is Chat with RTX, an NVIDIA tech demo that uses retrieval-augmented generation and TensorRT-LLM software to give users generative AI capabilities on their local, RTX-powered Windows PCs.
Chat with RTX lets users personalize a chatbot with their own data by easily connecting local files on a PC to a large language model.
Since the model runs locally, it delivers results fast, and user data stays on the device. Rather than relying on cloud-based LLM services, Chat with RTX lets users process sensitive data on a local PC without sharing it with a third party or requiring an internet connection.
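The retrieval-augmented generation pattern behind Chat with RTX can be sketched in a few lines: retrieve the local documents most relevant to a query, then hand them to a language model as context. The sketch below uses simple word-overlap scoring as the retriever and skips the model call; Chat with RTX's actual retriever and TensorRT-LLM-powered model are far more sophisticated, and the documents and function names here are purely illustrative.

```python
# Minimal sketch of retrieval-augmented generation (RAG): rank local documents
# against a query, then build an augmented prompt for a local LLM.

def retrieve(query, documents, top_k=2):
    """Rank documents by how many query words they share (toy retriever)."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query, documents):
    """Assemble the augmented prompt that would be handed to the local LLM."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical local files, stood in for by short strings.
docs = [
    "The quarterly budget was approved on March 4.",
    "Team offsite is scheduled for the second week of May.",
    "GPU drivers should be updated before the demo.",
]
prompt = build_prompt("When was the budget approved?", docs)
print(prompt)
```

Because retrieval and generation both happen on the device, the only data the model ever sees is what the retriever pulls from local files, which is what keeps sensitive content off third-party servers.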