A New Window in the Cloud: NVIDIA and Microsoft to Bring Top PC Games to GeForce NOW


The cloud just got bigger. NVIDIA and Microsoft announced this week that they’re working to bring top Xbox Game Studios PC games to the GeForce NOW library, including titles from Bethesda and Mojang Studios, as well as Activision titles once Microsoft’s acquisition of the publisher closes.

With six new games joining the cloud this week for members to stream, it’s a jam-packed GFN Thursday.

Plus, Ultimate members can now access cloud-based RTX 4080-class servers in and around Paris, the latest city to light up on the update map. Keep checking GFN Thursday to see which RTX 4080 SuperPOD upgrade is completed next.

Game On

GeForce NOW Ultimate SuperPODs
GeForce NOW beyond fast gaming expands to Xbox PC Games.

NVIDIA and Microsoft’s 10-year deal to bring the Xbox PC game library to GeForce NOW is a major boost for cloud gaming and brings incredible choice to gamers. It’s the perfect bow to wrap up GeForce NOW’s anniversary month, expanding a library of over 1,500 titles available to stream.

Work to bring top Xbox PC game franchises and titles to GeForce NOW, such as Halo, Minecraft and Elder Scrolls, will begin immediately. Games from Activision like Call of Duty and Overwatch are on the horizon once Microsoft’s acquisition of Activision closes. GeForce NOW members will be able to stream these titles across their devices, with the flexibility to easily switch between underpowered PCs, Macs, Chromebooks, smartphones and more.

Xbox Game Studios PC games available on third-party stores, like Steam or Epic Games Store, will be among the first streamed through GeForce NOW. The partnership also marks the first time games from the Windows Store will come to the service, with support beginning soon.

It’s an exciting time for all gamers, as the partnership will give people more choice and higher performance. Stay tuned to GFN Thursdays for news on the latest Microsoft titles coming to GeForce NOW.

Ready, Set, Action!

Sons of the Forest on GeForce NOW
Find a way to survive alone or with a buddy.

A new week means new GFN Thursday games. Sons of the Forest, the highly anticipated sequel to The Forest from Endnight Games, places gamers on a cannibal-infested island after a crash landing. Survive alone or pair up with a buddy online.

Earlier in the week, members started streaming Atomic Heart, the action role-playing game from Mundfish, day-and-date from the cloud. Check out the full list of new titles available to stream this week:

With the wrap-up of GeForce NOW’s #3YearsOfGFN celebrations, members are sharing their winning GeForce NOW moments on Twitter and Facebook for a chance to win an MSI Ultrawide Gaming monitor — the perfect companion to an Ultimate membership. Join the conversation and add your own favorite moments.

Let us know in the comments or on GeForce NOW social channels what you’ll be streaming next.


New NVIDIA Studio Laptops Powered by GeForce RTX 4070, 4060, 4050 Laptop GPUs Boost On-the-Go Content Creation


Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

Laptops equipped with NVIDIA GeForce RTX 4070, 4060 and 4050 GPUs are now available. The new lineup — including NVIDIA Studio-validated laptops from ASUS, GIGABYTE and Samsung — gives creators more options to create from anywhere with lighter, thinner devices that dramatically exceed the performance of the last generation.

These new GeForce RTX Laptop GPUs bring increased efficiency, thanks to the NVIDIA Ada Lovelace GPU architecture and fifth-generation Max-Q technology.

The laptops are fueled by powerful NVIDIA Studio technologies, including hardware acceleration for 3D, video and AI workflows; optimizations for RTX hardware in over 110 popular creative apps; and exclusive NVIDIA Studio apps like Omniverse, Canvas and Broadcast. And when the creating ends to let the gaming begin, DLSS 3 technology doubles frame rates.

Plus, the making of 3D artist Shangyu Wang’s short film, called Most Precious Gift, is highlighted In the NVIDIA Studio this week. The film was staged in NVIDIA Omniverse, a platform for creating and operating metaverse applications.

And don’t forget to sign up for creator and Omniverse sessions, tutorials and more at NVIDIA GTC, a free, global conference for the era of AI and the metaverse running online March 20-23.

A GPU Class of Their Own 

The new Studio laptops, equipped with powerful GeForce RTX 4070, 4060 and 4050 Laptop GPUs and fifth-generation Max-Q technology, revolutionize content creation on the go.

These advancements enable extreme efficiencies that allow creators to get the best of both worlds: small size and high performance. The thinner, lighter, quieter laptops retain extraordinary performance — letting users complete complex creative tasks in a fraction of the time needed before.

GeForce RTX 4070 GPUs unlock advanced video editing and 3D rendering capabilities. Work in 6K RAW high-dynamic range video files with lightning-fast decoding, export in AV1 with the new eighth-generation encoder, and gain a nearly 40% performance boost over the previous generation with GPU-accelerated effects in Blackmagic Design’s DaVinci Resolve. Advanced 3D artists can tackle large projects with ease across essential 3D apps using new third-generation RT Cores.

Laptops with the GeForce RTX 4060 GPU and 8GB of video memory are great for video editors and for artists looking to get started in 3D modeling and animation. In the popular open-source 3D app Blender, render times are a whopping 38% faster than the last generation.

Get started with GPU acceleration for photography, graphic design and video editing workflows using GeForce RTX 4050 GPUs, which provide a massive upgrade from integrated graphics. Access accelerated AI features, including 54% faster performance in Topaz Video for upscaling and deinterlacing footage. And turn home offices into professional-grade studios with NVIDIA’s encoder and the AI-powered NVIDIA Broadcast app for livestreaming.

Freelancers, hobbyists, aspiring artists and others can find a GeForce RTX GPU to fit their needs, now available in the new lineup of NVIDIA Studio laptops.

Potent, Portable, Primed for Creating

Samsung’s Galaxy Book3 Ultra comes with a choice of the GeForce RTX 4070 or 4050 GPU, alongside a vibrant 16-inch, 3K, AMOLED display.

Pick one up at Best Buy or on Samsung.com.

The Samsung Galaxy Book3 Ultra houses the GeForce RTX 4070 or 4050 GPU.

GIGABYTE upgraded its Aero 16 Studio laptop with up to a GeForce RTX 4070 GPU and a 16-inch, thin-bezel, 60Hz, OLED display. The Aero 14 features a GeForce RTX 4050 GPU with a 14-inch, thin-bezel, 90Hz, OLED display.

Purchase the Aero 14 from Amazon, and find both laptops on GIGABYTE.com.

GIGABYTE’s Aero 16 and 14 models with up to a GeForce RTX 4070 GPU are content-creation beasts.

The ASUS ROG FLOW Z13 comes with up to a GeForce RTX 4060 GPU, QHD, 165Hz, 13.4-inch Nebula display, as well as a 170-degree kickstand and detachable full-sized keyboard for portable creating, plus a stylus with NVIDIA Canvas support to turn simple brushstrokes into realistic images powered by AI.

Get one from ASUS.com.

The ASUS ROG FLOW Z13 is equipped with up to a GeForce RTX 4060 GPU.

MSI’s Stealth 17 Studio and Razer’s 16 and 18 models, with up to GeForce RTX 4090 Laptop GPUs, are also available to pick up today.

All Aboard the Creative Ship

Studio laptops power the imaginations of the world’s most creative minds, including this week’s In the NVIDIA Studio artist, Shangyu Wang.

From the moment his movie’s opening credits roll, viewers can expect to be captivated by a spellbinding journey in space and an intricately designed world, complemented by engaging music and voice-overs.

The film, Most Precious Gift, centers on humanity attempting to make peace with another intelligent lifeform holding the key to survival. It’s an extension of Wang’s interests in alien civilizations and their potential conflicts with humankind.

Wang usually jumps directly into 3D modeling, bypassing the concept stage that most artists go through. He sculpts and shapes the models in Autodesk Maya and Autodesk Fusion 360.

Ultra-fine details modeled in Autodesk Maya.

By selecting the default Autodesk Arnold renderer on his GeForce RTX 3080 Ti-powered Studio laptop, Wang was able to use RTX-accelerated ray tracing and AI denoising, which let him tinker with and add details to highly interactive, photorealistic visuals. This was a boon for his efficiency.

Clothing segments combined and applied to the 3D model in Autodesk Maya.

Wang built textures in Adobe Substance 3D Painter and placed extra care on the fine details, noting the app was the “best option for the most realistic, original materials.” RTX-accelerated light and ambient occlusion baking delivered fully baked assets in mere seconds.

Realistic textures applied to 3D models in Adobe Substance 3D Painter.

For final renders, Wang said it was a no-brainer to assemble, simulate and stage his 3D scenes in Omniverse Create. “Because of the powerful path-tracing rendering, I can modify scene lights and materials in real time,” he said.
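For readers curious what that staging step looks like under the hood, here’s a minimal, illustrative sketch of assembling a USD scene of the kind Omniverse Create consumes, using Pixar’s pxr Python API. The file name, prim paths, referenced asset and light values are assumptions for illustration, not Wang’s actual project.

```python
# Minimal sketch: building a USD stage like the ones Omniverse Create works with.
# Prim paths, the referenced asset file and light values are illustrative only.
from pxr import Usd, UsdGeom, UsdLux

stage = Usd.Stage.CreateNew("scene_layout.usda")
UsdGeom.SetStageUpAxis(stage, UsdGeom.Tokens.y)

# Root transform that everything else hangs off.
UsdGeom.Xform.Define(stage, "/World")

# Reference a model exported from another app (e.g. Maya) instead of copying its geometry.
ship = UsdGeom.Xform.Define(stage, "/World/Ship")
ship.GetPrim().GetReferences().AddReference("./assets/ship.usd")  # hypothetical asset path

# A key light whose intensity can then be tweaked interactively in Omniverse Create.
sun = UsdLux.DistantLight.Define(stage, "/World/Sun")
sun.CreateIntensityAttr(3000.0)

stage.GetRootLayer().Save()
```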

 

And when it came to final exports, Wang could use his preferred renderer within the Omniverse Create viewport, which has support for Pixar HD Storm, Chaos V-Ray, Maxon’s Redshift, OTOY OctaneRender, Blender Cycles and more.

Realistic lighting and shadows, manipulated and tinkered with in Omniverse Create.

Wang wrapped up compositing in Nuke, where he adjusted colors and added depth-of-field effects. The artist finally moved to DaVinci Resolve to add sound effects, music and subtitles.

3D artist Shangyu Wang.

Check out more of Wang’s work on ArtStation.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter. Learn more about Omniverse on Instagram, Medium, Twitter and YouTube for additional resources and inspiration. Check out the Omniverse forums, and join our Discord server and Twitch channel to chat with the community.


Survey Reveals How Telcos Plan to Ring in Change Using AI


The telecommunications industry has for decades helped advance revolutionary change, enabling everything from telephones and television to online streaming and self-driving cars. Yet when it comes to its own business, the industry has long been considered an evolutionary, rather than revolutionary, mover.

A recent survey of more than 400 telecommunications industry professionals from around the world found that same cautious tone in how they plan to define and execute on their AI strategies.

To fill in a more complete picture of how the telecommunications industry is using AI, and where it’s headed, NVIDIA’s first “State of AI in Telecommunications” survey asked questions covering a range of AI topics, including infrastructure spending, top use cases, biggest challenges and deployment models.

Survey respondents included C-suite leaders, managers, developers and IT architects from mobile telecoms, fixed and cable companies. The survey was conducted over eight weeks between mid-November 2022 and mid-January 2023.

Dial AI for Motivation

The survey results revealed two consistent themes. First, industry players see AI as a strategic tool: 73% of respondents view it as a way to grow revenue, improve operations and sustainability, or boost customer retention. Amid skepticism about the money-making potential of 5G, telecoms see efficiencies driven by AI as the most likely path to returns on investment.

Second, despite that enthusiasm, 93% of those responding to questions about AI projects at their own companies appear to be substantially underinvesting in AI as a percentage of annual capital spending.

Some 50% of respondents reported spending less than $1 million last year on AI projects; a year earlier, 60% of respondents said they spent less than $1 million on AI. Just 3% of respondents spent over $50 million on AI in 2022.

The reasons cited for such cautious spending? Some 44% of respondents reported an inability to adequately quantify return on investment, which illustrates a mismatch between aspirations and the reality in introducing AI-driven solutions.

Technical challenges — whether from lack of enough skilled personnel or poor infrastructure — are also obstructing AI adoption. Of respondents, 34% cited an insufficient number of data scientists as the second-biggest challenge. Given that data scientists are sought after across industries, the response suggests that the telecoms industry needs to push harder to woo them.

With 33% of respondents also citing a lack of budget for AI projects, the results suggest that AI advocates need to work harder with decision-makers to develop a convincing case for AI adoption.

Likewise, for a technology solution that relies on data, concerns about the availability, handling, privacy and security of data were all critical issues to be addressed, especially in light of data privacy and data residency laws around the globe, such as GDPR.

AI Engagement

Some 95% of telecommunications industry respondents said they were engaged with AI. But only 34% of respondents reported using AI for more than six months, while 23% said they’re still learning about the different options for AI. Eighteen percent reported being in a trial or pilot phase of an AI project.

For respondents at the trial or implementation stage, a clear majority acknowledged that there had been a positive impact on both revenue and cost. About 73% of respondents reported that implementation of AI had led to increased revenue in the last year, with 17% noting revenue gains of more than 10% in specific parts of the business.

Likewise, 80% of respondents reported that their implementation of AI led to reduced annual costs in the last year, with 15% noting that this cost reduction is above 10% — again, in specific parts of their business.

AI, AI Everywhere

The telecommunications industry has a deep and multilayered view on where best to allocate resources to AI: cost reduction, revenue increase, customer experience enhancement and creating operational efficiencies were all cited as key priorities.

In terms of deployment, however, AI focused on improving operational efficiency was a clear winner. This is somewhat expected, as the operational complexity of new telecommunications networks like 5G lends itself to new solutions like AI. The industry is responsible for critical national infrastructure in every country, supports over 5 billion customer endpoints, and is expected to constantly deliver above 99% reliability. Telcos have also discussed AI-enabled solutions for network operations, cell site planning, truck-routing optimization and machine learning data analytics. To improve the customer experience, some are adopting recommendation engines, virtual assistants and digital avatars.

In the near term, the focus appears to be on building more effective telecom infrastructure and unlocking new revenue-generating opportunities, especially together with partners.

The trick will be moving from early testing to widespread adoption.

Download the “State of AI in Telecommunications: 2023 Trends” report for in-depth results and insights.

Learn more about how telcos are leveraging AI to optimize operations and improve customer experiences.


Transportation Generation: See How AI and the Metaverse Are Shaping the Automotive Industry at GTC


Novel AI technologies are generating images, stories and, now, new ways to imagine the automotive future.

At NVIDIA GTC, a global conference for the era of AI and the metaverse running online March 20-23, industry luminaries working on these breakthroughs will come together and share their visions to transform transportation.

This year’s slate of in-depth sessions includes leaders from automotive, robotics, healthcare and other industries, as well as trailblazing AI researchers.

Headlining GTC is NVIDIA founder and CEO Jensen Huang, who will present the latest in AI and NVIDIA Omniverse, a platform for creating and operating metaverse applications, in a keynote address on Tuesday, March 21, at 8 a.m. PT.

Conference attendees will have plenty of opportunities to network and learn from NVIDIA and industry experts about the technologies powering the next generation of automotive.

Here’s what to expect from auto sessions at GTC:

End-to-End Innovation

The entire automotive industry is being transformed by AI and metaverse technologies, whether they’re used for design and engineering, manufacturing, autonomous driving or the customer experience.

Speakers from these areas will share how they’re using the latest innovations to supercharge development:

  • Sacha Vražić, director of autonomous driving R&D at Rimac Technology, discusses how the supercar maker is using AI to teach any driver how to race like a professional on the track.
  • Toru Saito, deputy chief of Subaru Lab at Subaru Corporation, walks through how the automaker is improving camera perception with AI, using large-dataset training on GPUs and in the cloud.
  • Tom Xie, vice president at ZEEKR, explains how the electric vehicle company is rethinking the electronic architecture in EVs to develop a software-defined lineup that is continuously upgradeable.
  • Liz Metcalfe-Williams, senior data scientist, and Otto Fitzke, machine learning engineer at Jaguar Land Rover, cover key learnings from the premium automaker’s research into natural language processing to improve knowledge and systems, and to accelerate the development of high-quality, validated, cutting-edge products.
  • Marco Pavone, director of autonomous vehicle research; Sanja Fidler, vice president of AI research; and Sarah Tariq, vice president of autonomous vehicle software at NVIDIA, show how generative AI and novel, highly integrated system architectures will radically change how AVs are designed and developed.

Develop Your Drive

In addition to sessions from industry leaders, GTC attendees can access talks on the latest NVIDIA DRIVE technologies led by in-house experts.

NVIDIA DRIVE Developer Days consist of a series of deep-dive sessions on building safe and robust autonomous vehicles. Led by the NVIDIA engineering team, these talks will highlight the newest DRIVE features and how to apply them.

Topics include high-definition mapping, AV simulation, synthetic data generation for testing and validation, enhancing AV safety with in-system testing, and multi-task models for AV perception.

Access these virtual sessions and more by registering free to attend and see the technologies generating the intelligent future of transportation.


UK’s Conservation AI Makes Huge Leap Detecting Threats to Endangered Species Across the Globe


The video above represents one of the first times that a pangolin, one of the world’s most critically endangered species, was detected in real time using artificial intelligence.

A U.K.-based nonprofit called Conservation AI made this possible with the help of NVIDIA technology. Such use of AI can help track even the rarest, most reclusive of species in real time, enabling conservationists to protect them from threats, such as poachers and fires, before it’s too late to intervene.

The organization was founded four years ago by researchers at Liverpool John Moores University — Paul Fergus, Carl Chalmers, Serge Wich and Steven Longmore.

In the past year and a half, Conservation AI has deployed 70+ AI-powered cameras across the world. These help conservationists preserve biodiversity through real-time detection of threats using deep learning models trained with transfer learning.

“It’s very simple — if we don’t protect our biodiversity, there won’t be people on this planet,” said Chalmers, who teaches deep learning and applied AI at Liverpool John Moores University. “And without AI, we’re never going to achieve our targets for protecting endangered species.”

The Conservation AI platform — built using NVIDIA Jetson modules for edge AI and the NVIDIA Triton Inference Server — in just four seconds analyzes footage, identifies species of interest and alerts conservationists and other users of potential threats via email.
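As a rough picture of how such a pipeline can be wired together, the sketch below sends one preprocessed camera frame to a Triton Inference Server and emails an alert when a species of interest shows up. It is illustrative only: the model name, tensor names, class labels and mail settings are assumptions, not Conservation AI’s actual configuration.

```python
# Illustrative sketch of a detect-and-alert loop built on Triton Inference Server.
# The model name, tensor names, class labels and SMTP settings are all assumptions.
import smtplib
from email.message import EmailMessage

import numpy as np
import tritonclient.http as httpclient

TRITON_URL = "localhost:8000"                                   # assumed server address
MODEL = "species_detector"                                      # hypothetical model name
LABELS = {0: "pangolin", 1: "rhino", 2: "poacher", 3: "other"}  # hypothetical classes
SPECIES_OF_INTEREST = {"pangolin", "rhino", "poacher"}

def detect(frame: np.ndarray) -> list:
    """Run one preprocessed frame (shape 1x3xHxW, float32) through Triton."""
    client = httpclient.InferenceServerClient(url=TRITON_URL)
    inp = httpclient.InferInput("input", list(frame.shape), "FP32")
    inp.set_data_from_numpy(frame)
    result = client.infer(model_name=MODEL, inputs=[inp])
    class_ids = result.as_numpy("labels")                       # hypothetical output name
    return [LABELS[int(i)] for i in class_ids]

def alert(found: list) -> None:
    """Email conservationists when species of interest are detected."""
    msg = EmailMessage()
    msg["Subject"] = f"Conservation alert: {', '.join(found)} detected"
    msg["From"], msg["To"] = "alerts@example.org", "rangers@example.org"
    msg.set_content("A camera detected species of interest; please review the footage.")
    with smtplib.SMTP("mail.example.org") as smtp:              # assumed mail relay
        smtp.send_message(msg)

# Usage (frame capture and preprocessing are assumed to happen upstream):
# hits = [s for s in detect(frame) if s in SPECIES_OF_INTEREST]
# if hits:
#     alert(hits)
```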

It can also rapidly model trends in biodiversity and habitat health using a huge database of images and other metadata that would otherwise take years to analyze. The platform now enables conservationists to identify these trends and species activities in real time.

Conservation AI works with 150 organizations across the globe, including conservation societies, safaris and game reserves. To date, the platform has processed over 2 million images, about half of which were from the past three months.

Saving Time to Save Species

Threats to biodiversity have long been monitored using camera traps — networks of cameras equipped with infrared sensors that are placed in the wild. But camera traps can produce data that is hard to manage, as there’s often much variability in images of the animals and their environments.

“A typical camera trap study can take three years to analyze, so by the time you get the insights, it’s too late to do anything about the threat to those species,” said Fergus, a professor of machine learning at Liverpool John Moores University. “Conservation AI can analyze the same amount of data and send results to conservation teams so that interventions can happen in real time, all enabled by NVIDIA technology.”

Many endangered species occupy remote areas without access to human communication systems. The team uses NVIDIA Jetson AGX Xavier modules to analyze drone footage from such areas streamed to a smart controller that can count species population or alert conservationists when species of interest are detected.

Energy-efficient edge AI provided by the Jetson modules, which are equipped with Triton Inference Server, has sped up deep learning inference by 4x compared to the organization’s previous methods, according to Chalmers.

“We chose Triton because of the elasticity of the framework and the many types of models it supports,” he added. “Being able to train the models on the NVIDIA accelerated computing stack means we can make huge improvements on the models very, very quickly.”

Conservation AI trains and inferences its deep learning models with NVIDIA RTX 8000, T4 and A100 Tensor Core GPUs — along with the NVIDIA CUDA toolkit. Fergus called NVIDIA GPUs “game changers in the world of applied AI and conservation, where there are big-data challenges.”

In addition, the team’s species-detection pipeline is built on the NVIDIA DeepStream software development kit for vision AI applications, which enables real-time video inference in the field.

“Without this technology, helicopters would normally be sent up to observe the animals, which is hugely expensive and bad for the environment as it emits huge amounts of carbon dioxide,” Chalmers said. “Conservation AI technology helps reduce this problem and detects threats to animals before it’s too late to intervene.”

Detecting Pangolins, Rhinos and More

The Conservation AI platform has been deployed by Chester Zoo, a renowned conservation society based in the U.K., to detect poachers in real time, including those hunting pangolins in Uganda.

Since many endangered species, like pangolins, are so elusive, obtaining enough imagery of them to train AI models can be difficult. So, the Conservation AI team is working with NVIDIA to explore the use of synthetic data for model training.

The platform is also deployed at a game reserve in Limpopo, South Africa, where the AI keeps an eye on wildlife in the region, including black and white rhinos.

“Pound for pound, rhino horn is worth more than diamond,” Chalmers said. “We’ve basically created a geofence around these rhinos, so the reserve can intervene as soon as a poacher or another type of threat is detected.”

The organization’s long-term goal, Fergus said, is to create a toolkit that supports conservationists with many types of efforts, including wildlife monitoring through satellite imagery, as well as using deep learning models that analyze audio — like animal cries or the sounds of a forest fire.

“The loss of biodiversity is really a ticking time bomb, and the beauty of NVIDIA AI is that it makes every second count,” Chalmers said. “Without the NVIDIA accelerated computing stack, we just wouldn’t be able to do this — we wouldn’t be able to tackle climate change and reverse biodiversity loss, which is the ultimate dream.”

Read more about how NVIDIA technology helps to boost conservation and prevent poaching.

Featured imagery courtesy of Chester Zoo.


Rise to the Cloud: ‘Monster Hunter Rise’ and ‘Sunbreak’ Expansion Coming Soon to GeForce NOW


Fellow Hunters, get ready! This GFN Thursday welcomes Capcom’s Monster Hunter Rise and the expansion Sunbreak to the cloud, arriving soon for members.

Settle down for the weekend with 10 new games supported in the GeForce NOW library, including The Settlers: New Allies.

Plus, Amsterdam and Ashburn are next to light up on the RTX 4080 server map, giving nearby Ultimate members the power of an RTX 4080 gaming rig in the cloud. Keep checking the weekly GFN Thursday to see where the RTX 4080 SuperPOD upgrade rolls out next.

Palicos, Palamutes and Wyverns, Oh My

The hunt is on! Monster Hunter Rise, the popular action role-playing game from Capcom, is joining GeForce NOW soon. Protect the bustling Kamura Village from ferocious monsters; take on hunting quests with a variety of weapons and new hunting actions with the Wirebug; and work alongside a colorful cast of villagers to defend their home from the Rampage — a catastrophic event that nearly destroyed the village 50 years prior.

Members can expand the hunt with Monster Hunter Rise: Sunbreak, which adds new quests, monsters, locales, gear and more. And regular updates keep Hunters on the job, like February’s Free Title Update 4, which marks the return of the Elder Dragon Velkhana, the lord of the tundra that freezes all in its path.

Monster Hunter Rise Sunbreak on GeForce NOW
Carve out more time for monster hunting by playing in the cloud.

Whether playing solo or with a buddy, GeForce NOW members can take on dangerous new monsters anytime, anywhere. Ultimate members can protect Kamura Village at up to 4K at 120 frames per second — or immerse themselves in the most epic monster battles at ultrawide resolutions and 120 fps. Members won’t need to wait for downloads or worry about storage space, and can take the action with them across nearly all of their devices.

Rise to the challenge by upgrading today and get ready for Monster Hunter Rise to hit GeForce NOW soon.

New Week, New Games

The Settlers New Allies on GeForce NOW
Onward! There’s much to explore in the Forgotten Plains.

Kick off the weekend with 10 new titles, including The Settlers: New Allies. Choose among three unique factions and explore this whole new world powered by state-of-the-art graphics. Your settlement has never looked so lively.

Check out the full list of this week’s additions:

  • Labyrinth of Galleria: The Moon Society (New release on Steam)
  • Wanted: Dead (New release on Steam and Epic)
  • Elderand (New release on Steam, Feb. 16)
  • Wild West Dynasty (New release on Steam, Feb. 16)
  • The Settlers: New Allies (New release on Ubisoft, Feb. 17)
  • Across the Obelisk (Steam)
  • Captain of Industry (Steam)
  • Cartel Tycoon (Steam)
  • SimRail — The Railway Simulator (Steam)
  • Warpips (Epic Games Store)

The monthlong #3YearsOfGFN celebration continues on our Twitter and Facebook channels. Members shared the most beautiful place they’ve visited in-game on GFN.

And make sure to check out the question we have this week for GeForce NOW’s third anniversary celebration!

 


Redefining Workstations: NVIDIA, Intel Unlock Full Potential of Creativity and Productivity for Professionals


AI-augmented applications, photorealistic rendering, simulation and other technologies are helping professionals achieve business-critical results from multi-app workflows faster than ever.

Running these data-intensive, complex workflows, as well as sharing data and collaborating across geographically dispersed teams, requires workstations with high-end CPUs, GPUs and advanced networking.

To help meet these demands, Intel and NVIDIA are powering new platforms with the latest Intel Xeon W and Intel Xeon Scalable processors, paired with NVIDIA RTX 6000 Ada generation GPUs, as well as NVIDIA ConnectX-6 SmartNICs.

These new workstations bring together the highest levels of AI computing, rendering and simulation horsepower to tackle demanding workloads across data science, manufacturing, broadcast, media and entertainment, healthcare and more.

“Professionals require advanced power and performance to run the most intensive workflows, like using AI, rendering in real time or running multiple applications simultaneously,” said Bob Pette, vice president of professional visualization at NVIDIA. “The new Intel- and NVIDIA-Ada powered workstations deliver unprecedented speed, power and efficiency, enabling professionals everywhere to take on the most complex workflows across all industries.”

“The latest Intel Xeon W processors — featuring a breakthrough new compute architecture — are uniquely designed to help professional users tackle the most challenging current and future workloads,” said Roger Chandler, vice president and general manager of Creator and Workstation Solutions in the Client Computing Group at Intel. “Combining our new Intel Xeon workstation processors with the latest NVIDIA GPUs will unleash the innovation and creativity of professional creators, artists, engineers, designers, data scientists and power users across the world.”

Serving New Workloads 

Metaverse applications and the rise of generative AI require a new level of computing power from the underlying hardware. Creating digital twins in a simulated photorealistic environment that obeys the laws of physics and planning factories are just two examples of workflows made possible by NVIDIA Omniverse Enterprise, a platform for creating and operating metaverse applications.

BMW Group, for example, is using NVIDIA Omniverse Enterprise to design an end-to-end digital twin of an entire factory. This involves collaboration with thousands of planners, product engineers and facility managers in a single virtual environment to design, plan, simulate and optimize highly complex manufacturing systems before a factory is actually built or a new product is integrated into the real world.

The need for accelerated computing power is growing exponentially due to the explosion of AI-augmented workflows, from traditional R&D and data science workloads to edge devices on factory floors or in security offices, to generative AI solutions for text conversations and text-to-image applications.

Extended reality (XR) solutions for collaborative work also require significant computing resources. Examples of XR applications include design reviews, product design validation, maintenance and support training, rehearsals, interactive digital twins and location-based entertainment. All of these demand high-resolution, photoreal images to create the most intuitive and compelling immersive experiences, whether available locally or streamed to wireless devices.

Next-Generation Platform Features 

With a breakthrough new compute architecture for faster individual CPU cores and new embedded multi-die interconnect bridge packaging, the Xeon W-3400 and Xeon W-2400 series of processors enable unprecedented scalability for increased workload performance. Available with up to 56 cores in a single socket, the top-end Intel Xeon w9-3495X processor features a redesigned memory controller and larger L3 cache, delivering up to 28% more single-threaded(1) and 120% more multi-threaded(2) performance over previous-generation Xeon W processors.

Based on the NVIDIA Ada Lovelace GPU architecture, the latest NVIDIA RTX 6000 brings incredible power efficiency and performance to the new workstations. It features 142 third-generation RT Cores, 568 fourth-generation Tensor Cores and 18,176 latest-generation CUDA cores combined with 48GB of high-performance graphics memory to provide up to 2x ray-tracing, AI, graphics and compute performance over the previous generation.

NVIDIA ConnectX-6 Dx SmartNICs enable professionals to handle demanding, high-bandwidth 3D rendering and computer-aided design tasks, as well as traditional office work with line-speed network connectivity support based on two 25Gbps ports and GPUDirect technology for increasing GPU bandwidth by 10x over standard NICs. The high-speed, low-latency networking and streaming capabilities enable teams to move and ingest large datasets or to allow remote individuals to collaborate across applications for design and visualization.

Availability 

The new generation of workstations powered by the latest Intel Xeon W and Intel Xeon Scalable processors and NVIDIA RTX Ada generation GPUs will be available for preorder beginning today from BOXX and HP, with more coming soon from other workstation system integrators.

To learn more, tune into the launch event.

 

(1) Based on SPEC CPU 2017_Int (1-copy) using Intel validation platform comparing Intel Xeon w9-3495X (56c) versus previous generation Intel Xeon W-3275 (28c).
(2) Based on SPEC CPU 2017_Int (n-copy) using Intel validation platform comparing Intel Xeon w9-3495X (56c) versus previous generation Intel Xeon W-3275 (28c).


Blender Alpha Release Comes to Omniverse, Introducing Scene Optimization Tools, Improved AI-Powered Character Animation


Whether creating realistic digital humans that can express emotion or building immersive virtual worlds, 3D artists can reach new heights with NVIDIA Omniverse, a platform for creating and operating metaverse applications.

A new Blender alpha release, now available in the Omniverse Launcher, lets users of the 3D graphics software optimize scenes and streamline workflows with AI-powered character animations.

Save Time, Effort With New Blender Add-Ons

The new scene optimization add-on in the Blender release enables creators to fix bad geometry and generate automatic UVs, or 2D maps of 3D objects. It also reduces the number of polygons that need to be rendered to increase the scene’s overall performance, which significantly brings down file size, as well as CPU and GPU memory usage.
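To make the idea concrete, here’s a small sketch, written against Blender’s standard Python API, of the kind of cleanup the add-on automates: decimating meshes and generating automatic UVs. It isn’t the add-on’s actual code, and the decimation ratio is an arbitrary choice for illustration.

```python
# Rough sketch of automated scene cleanup in plain Blender Python (bpy):
# reduce polygon counts and generate automatic UVs for every mesh in the scene.
import bpy

DECIMATE_RATIO = 0.5  # keep roughly half the polygons; arbitrary for illustration

for obj in bpy.context.scene.objects:
    if obj.type != 'MESH':
        continue

    # Reduce polygon count with a Decimate modifier, then apply it.
    mod = obj.modifiers.new(name="AutoDecimate", type='DECIMATE')
    mod.ratio = DECIMATE_RATIO
    bpy.context.view_layer.objects.active = obj
    bpy.ops.object.modifier_apply(modifier=mod.name)

    # Generate automatic UVs with Smart UV Project.
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_all(action='SELECT')
    bpy.ops.uv.smart_project()
    bpy.ops.object.mode_set(mode='OBJECT')
```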

Plus, a new Audio2Face add-on lets anyone accomplish what used to require a technical rigger or animator.

A panel in the add-on makes it easier to use Blender characters in Audio2Face, an AI-enabled tool that automatically generates realistic facial expressions from an audio file.

This new functionality eases the process of bringing generated face shapes back onto rigs — that is, digital skeletons. Shapes exported through the Universal Scene Description (USD) framework can be applied to a character even if it is fully rigged, meaning its whole body already has a working digital skeleton. The integration of the facial shapes doesn’t alter the rigs, so Audio2Face shapes and animation can be applied to characters — whether for games, shows and films, or simulations — at any point in the artist’s workflow.
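Conceptually, the hand-off works like applying blend-shape weights to a character’s face mesh as ordinary shape keys while the armature is left untouched. The sketch below illustrates that idea in plain Blender Python; the object name, shape names and per-frame weights are hypothetical, and this is not the add-on’s actual implementation.

```python
# Sketch of the idea: key Audio2Face-style blend-shape weights onto an already rigged
# character as shape keys, leaving the rig (armature) alone. Names and values are hypothetical.
import bpy

face = bpy.data.objects["CharacterHead"]            # hypothetical mesh object

# Per-frame weights keyed by blend-shape name (e.g. decoded from an exported cache).
frames = [
    {"jawOpen": 0.8, "mouthSmile": 0.1},
    {"jawOpen": 0.2, "mouthSmile": 0.6},
]

# Make sure every referenced shape key exists (a Basis key is required first).
if face.data.shape_keys is None:
    face.shape_key_add(name="Basis")
existing = face.data.shape_keys.key_blocks
for name in {n for f in frames for n in f}:
    if name not in existing:
        face.shape_key_add(name=name)

# Keyframe the weights; the body rig is never modified.
for frame_number, weights in enumerate(frames, start=1):
    for name, value in weights.items():
        key = face.data.shape_keys.key_blocks[name]
        key.value = value
        key.keyframe_insert(data_path="value", frame=frame_number)
```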

Realistic Character Animation Made Easy

Audio2Face puts AI-powered facial animation in the hands of every Blender user who works with Omniverse.

Using the new Blender add-on for Audio2Face, animator and popular YouTuber Marko Matosevic, aka Markom 3D, rigged and animated a Battletoads-inspired character using just an audio file.

Australia-based Matosevic joined Dave Tyner, a technical evangelist at NVIDIA, on a livestream to showcase their live collaboration across time zones, connecting 3D applications in a real-time Omniverse jam session. The two used the new Blender alpha release with Omniverse to make progress on one of Matosevic’s short animations.

The new Blender release was also on display last month at CES in The Artists’ Metaverse, a demo featuring seven artists, across time zones, who used Omniverse Nucleus Cloud, Autodesk, SideFX, Unreal Engine and more to create a short cinematic in real time.

Creators can save time and simplify processes with the add-ons available in Omniverse’s Blender build.

NVIDIA principal artist Zhelong Xu, for example, used Blender and Omniverse to visualize an NVIDIA-themed “Year of the Rabbit” zodiac.

“I got the desired effect very quickly and tested a variety of lighting effects,” said Xu, an award-winning 3D artist who’s previously worked at top game studio Tencent and made key contributions to an animated show on Netflix.

Get Plugged Into the Omniverse 

Learn more about Blender and Omniverse integrations by watching a community livestream on Wednesday, Feb. 15, at 11 a.m. PT via Twitch and YouTube.

And the session catalog for NVIDIA GTC, a global AI conference running online March 20-23, features hundreds of curated talks and workshops for 3D creators and developers. Register free to hear from NVIDIA experts and industry luminaries on the future of technology.

Creators and developers can download NVIDIA Omniverse free. Enterprises can try Omniverse Enterprise free on NVIDIA LaunchPad. Follow NVIDIA Omniverse on Instagram, Medium, Twitter and YouTube for additional resources and inspiration. Check out the Omniverse forums, and join our Discord server and Twitch channel to chat with the community.


Making a Splash: AI Can Help Protect Ocean Goers From Deadly Rips


Surfers, swimmers and beachgoers face a hidden danger in the ocean: rip currents. These narrow channels of water can flow away from the shore at speeds up to 2.5 meters per second, making them one of the biggest safety risks for those enjoying the ocean.

To help keep beachgoers safe, Christo Rautenbach, a coastal and estuarine physical processes scientist, has teamed up with the National Institute of Water and Atmospheric Research in New Zealand to develop a real-time rip current identification tool using deep learning.

On this episode of the NVIDIA AI Podcast, host Noah Kravitz interviews Rautenbach about how AI can be used to identify rip currents and the potential for the tool to be used globally to help reduce the number of fatalities caused by rip currents.

Developed in collaboration with Surf Life Saving New Zealand, the rip current identification tool has achieved a detection rate of roughly 90% in trials. Rautenbach also shares the research behind the technology, which was published in November 2022 in the journal Remote Sensing.
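The paper’s model and weights aren’t reproduced here, but the general shape of such a system — running an object detector over frames from a fixed beach camera and flagging high-confidence regions — can be sketched with a stock detector standing in for the researchers’ trained network:

```python
# Generic illustration of frame-by-frame detection on beach-camera footage.
# This is NOT the published rip-current model; a stock torchvision detector is used
# as a stand-in to show the shape of the pipeline.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

@torch.no_grad()
def detect_regions(frame_rgb, score_threshold=0.9):
    """Return bounding boxes whose confidence exceeds the threshold for one frame."""
    tensor = to_tensor(frame_rgb)            # HxWx3 uint8 image -> 3xHxW float tensor
    output = model([tensor])[0]
    keep = output["scores"] > score_threshold
    return output["boxes"][keep].tolist()
```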

You Might Also Like

Art(ificial) Intelligence: Pindar Van Arman Builds Robots That Paint
Pindar Van Arman, an American artist and roboticist, designs painting robots that explore the differences between human and computational creativity. Since his first system in 2005, he has built multiple artificially creative robots. The most famous, Cloud Painter, was awarded first place at Robotart 2018.

Real or Not Real? Attorney Steven Frank Uses Deep Learning to Authenticate Art
Steven Frank is a partner at the law firm Morgan Lewis, specializing in intellectual property and commercial technology law. He’s also half of the husband-wife team that used convolutional neural networks to authenticate artistic masterpieces, including da Vinci’s Salvator Mundi, with AI’s help.

GANTheftAuto: Harrison Kinsley on AI-Generated Gaming Environments
Humans playing games against machines is nothing new, but now computers can develop games for people to play. Programming enthusiast and social media influencer Harrison Kinsley created GANTheftAuto, an AI-based neural network that generates a playable chunk of the classic video game Grand Theft Auto V.

Subscribe to the AI Podcast on Your Favorite Platform

You can now listen to the AI Podcast through Amazon Music, Apple Music, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Featured image credit: T. Caulfield


3D Creators Share Art From the Heart This Week ‘In the NVIDIA Studio’


Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

Love and creativity are in the air this Valentine’s Day In the NVIDIA Studio, as 3D artist Molly Brady presents Birth of Venus (Redux), a parody scene inspired by Sandro Botticelli’s iconic painting The Birth of Venus.

Plus, join the #ShareYourHeART challenge by sharing what Valentine’s Day means to you in a scene built with NVIDIA Omniverse, a platform for creating and operating metaverse applications. Use the hashtag to post artwork — whether heartened by love, chocolate, teddy bears or anything else Valentine’s-themed — for a chance to be featured across NVIDIA social media channels.

3D artist Tanja Langgner’s delectable scene with chocolate hearts, featured below, is just one example.

Also, get a chance to win a GeForce RTX 3090 Ti GPU in the NVIDIA Instant NeRF VR sweepstakes. Named by TIME Magazine as one of the best inventions of 2022, NVIDIA Instant NeRF enables creators to rapidly create 3D models from 2D images and use them in virtual scenes. The tool provides a glimpse into the future of photography, 3D graphics and virtual worlds. Enter the sweepstakes by creating your own NeRF scene, and look to influencer Paul Trillo’s Instagram for inspiration.

New NVIDIA Studio laptops powered by GeForce RTX 40 Series Laptop GPUs are now available, including MSI’s Stealth 17 Studio and Razer’s 16 and 18 models — with more on the way. Learn why PC Gamer said the “RTX 4090 pushes laptops to blistering new frontiers: Yes, it’s fast, but also much more.”

Download the latest NVIDIA Studio Driver to enhance existing app features and reduce repetitive tasks. ON1 NoNoise AI, an app that quickly removes image noise while preserving and enhancing photo details, released an update speeding this process by an average of 50% on GeForce RTX 40 Series GPUs.

And NVIDIA GTC, a global conference for the era of AI and the metaverse, is running online March 20-23, with a slew of creator sessions, Omniverse tutorials and more — all free with registration. Learn more below.

A Satirical Valentine

Molly Brady is a big fan of caricatures.

“I love parody,” she gleefully admitted. “Nothing pleases me more than taking the air out of something serious and stoic.”

Botticelli’s The Birth of Venus painting, often referenced and revered, presented Brady with an opportunity for humor through her signature visual style.

According to Brady, “3D allows you to mix stylistic artwork with real-world limitations,” which is why the touchable, cinematic look of stop-motion animation heavily inspires her work.

“Stop-motion reforms found items into set pieces for fantastical worlds, giving them a new life and that brings me immense joy,” she said.

Brady’s portfolio features colorful, vibrant visuals with a touch of whimsical humor.

Brady composited Birth of Venus (Redux) with placeholder meshes, focusing on the central creature figure, before confirming the composition and scale were to her liking. She then sculpted finer details in the flexible 3D modeling app Foundry Modo, assisted by RTX acceleration in OTOY OctaneRender, made possible by her GeForce RTX 4090 GPU.

Advanced sculpting was completed in Modo.

She then applied materials and staged lighting with precision, picking up speed with the RTX-accelerated ray-tracing renderer. Brady can deploy OctaneRender, her preferred 3D renderer, in over 20 3D applications, including Autodesk 3ds Max, Blender and Maxon’s Cinema 4D.

After rendering the image, Brady deployed several post-processing features in Adobe Photoshop to help ensure the colors popped, as well as to add grain to compensate for any compression when posted on social media. Her RTX GPU affords over 30 GPU-accelerated features, such as blur gallery, object selection, liquify and smart sharpen.

“Art has been highly therapeutic for me, not just as an outlet to express emotion but to reflect how I see myself and what I value,” Brady said. “Whenever I feel overwhelmed by the pressure of expectation, whether internal or external, I redirect my efforts and instead create something that brings me joy.”

3D artist Molly Brady.

View more of Brady’s artwork on Instagram.

Valen(time) to Join the #ShareYourHeART Challenge

The photorealistic, chocolate heart plate beside a rose-themed mug and napkins, featured below, is 3D artist and illustrator Tanja Langgner’s stunning #ShareYourHeART challenge entry.

Hungry?

Langgner gathered assets and sculpted the heart shape using McNeel Rhino and Maxon ZBrush. Next, she assembled the pieces in Blender and added textures using Adobe Substance 3D Painter. The scene was then exported from Blender as a USD file and brought into Omniverse Create, where the artist added lighting and virtual cameras to capture the sweets with the perfect illuminations and angles.
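The Blender-to-Omniverse handoff in that workflow boils down to a USD export. Here’s a minimal sketch using Blender’s built-in exporter; the output path is illustrative, and the exported file is then opened in Omniverse Create for lighting and camera work.

```python
# Minimal sketch of exporting an assembled Blender scene to USD for Omniverse Create.
# The file path is illustrative only.
import bpy

bpy.ops.wm.usd_export(
    filepath="/tmp/chocolate_hearts.usd",
    export_materials=True,        # keep the painted materials assigned to the meshes
    selected_objects_only=False,  # export the whole scene, not just the selection
)
```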

“The main reason I started using Omniverse was its capability to link all my favorite apps,” Langgner said. “Saving time on exporting, importing and recreating materials in each app is a dream come true.”

Learn more about Langgner’s creative journey at the upcoming Community Spotlight livestream on the Omniverse Twitch channel and YouTube on Wednesday, Feb. 22, from 11 a.m. to 12 p.m. PT.

Join the #ShareYourHeART challenge by posting your own Valentine’s-themed Omniverse scene on social media using the hashtag. Entries could be featured on the NVIDIA Omniverse Twitter, LinkedIn and Instagram accounts.

Creative Boosts at GTC 

Experience this spring’s GTC for more inspiring content, expert-led sessions and a must-see keynote to accelerate your life’s creative work.

Catch these sessions live or watch on demand:

  • 3D Art Goes Multiplayer: Behind the Scenes of Adobe Substance’s “End of Summer” Project With Omniverse [S51239]
  • 3D and Beyond: How 3D Artists Can Build a Side Hustle in the Metaverse [SE52117]
  • NVIDIA Omniverse User Group [SE52047]
  • Accelerate the Virtual Production Pipeline to Produce an Award-Winning Sci-Fi Short Film [S51496]
  • 3D by AI: How Generative AI Will Make Building Virtual Worlds Easier [S52163]
  • Custom World Building With AI Avatars: The Little Martians Sci-Fi Project [S51360]
  • AI-Powered, Real-Time, Markerless: The New Era of Motion Capture [S51845]

Search the GTC session catalog or check out the Media and Entertainment and Omniverse topics for additional creator-focused talks.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter. Learn more about Omniverse on Instagram, Medium, Twitter and YouTube for additional resources and inspiration. Check out the Omniverse forums, and join our Discord server and Twitch channel to chat with the community.
