What is Extended Reality?

Advances in extended reality have already changed the way we work, live and play, and it’s just getting started.

Extended reality, or XR, is an umbrella category that covers a spectrum of newer, immersive technologies, including virtual reality, augmented reality and mixed reality.

From gaming to virtual production to product design, XR has enabled people to create, collaborate and explore in computer-generated environments like never before.

What Is Extended Reality?

Virtual, augmented and mixed reality are all elements of XR technology.

Virtual reality puts users inside a virtual environment. VR users typically wear a headset that transports them into a virtual world — one moment they’re standing in a physical room, and the next they’re immersed in a simulated environment.

The latest VR technologies push these boundaries, making these environments look and behave more like the real world. They’re also adding support for additional senses, including touch, sound and smell.

With VR, gamers can become fully immersed in a video game, designers and customers can review building projects to finalize details prior to construction, and retailers can test virtual displays before committing to a physical one.

Augmented reality is when a rendered image is overlaid onto the real world. The mobile game Pokémon GO famously brought AR to the mainstream by showing computer-rendered monsters standing on lawns and sidewalks as players roam their neighborhoods.

AR graphics are visible through cell phones, tablets and other devices, bringing a new kind of interactive experience to users. Navigating directions, for example, can be improved with AR. Rather than following a 2D map, a windshield can superimpose directions over one’s view of the road, with simulated arrows directing the driver exactly where to turn.

Mixed reality is a seamless integration of the real world and rendered graphics, which creates an environment in which users can directly interact with the digital and physical worlds together.

With MR, real and virtual objects blend, and are presented together within a single display. Users can experience MR environments through a headset, phone or tablet, and can interact with digital objects by moving them around or placing them in the physical world.

There are two types of MR:

  • Mixing virtual objects into the real world — for instance, where a user sees the real world through cameras in a VR headset with virtual objects seamlessly mixed into the view.
  • Mixing real-world objects into virtual worlds — for example, a camera view of a VR participant mixed into the virtual world, like watching a VR gamer playing in a virtual world.

The History of XR

To understand how far XR has come, consider its origins in VR.

VR began in the federal sector, where it was used to train people in flight simulators. The energy and automotive design industries were also early adopters. These simulation and visualization use cases required large supercomputers, as well as dedicated spaces: powerwalls, which are ultra-high-resolution displays, and VR CAVEs, which are empty rooms with the VR environment projected on every surface, from the walls to the ceiling.

For decades, VR remained unaffordable for most users, and the small VR ecosystem was mainly composed of large institutions and academic researchers.

But in the mid-2010s, several key component technologies reached a tipping point, precipitating the launch of the HTC Vive and Oculus Rift head-mounted displays (HMDs), along with the SteamVR runtime.

Individuals could now purchase personal HMDs to experience great immersive content. And they could drive those HMDs and experiences from an individual PC or workstation with a powerful GPU.

Suddenly, VR was accessible to millions of individuals, and a large ecosystem quickly sprang up, filled with innovation and enthusiasm.

In recent years, a new wave of VR innovation started with the launch of all-in-one (AIO) headsets. Previously, fully immersive VR experiences required a physical connection to a powerful PC; the HMD couldn’t operate as a self-contained device, as it had no onboard operating system and no ability to render images itself.

But with AIO headsets, users gained access to a dedicated device with a simple setup that could deliver fully tracked VR anywhere, anytime. Coupled with the innovation of VR streaming technology, users could now experience powerful VR environments, even while on the go.

Latest Trends in XR

High-quality XR is becoming increasingly accessible. Consumers worldwide are purchasing AIOs to experience XR, from immersive gaming to remote learning to virtual training. Large enterprises are adding XR to their workflows and design processes, where pairing XR with a digital twin drastically improves design review and implementation.

Image courtesy of Innoactive.

And one of today’s biggest trends is streaming XR experiences through 5G from the cloud. This removes the need to be tethered to workstations or limit experiences to a single space.

By streaming over 5G from the cloud, people can use XR devices and get the computational power to run XR experiences from a data center, regardless of location and time. Advanced solutions like NVIDIA CloudXR are making immersive streaming more accessible, so more XR users can experience high-fidelity environments from anywhere.

AR is also becoming more common. After Pokémon GO became a household name, AR emerged in a number of additional consumer-focused areas. Many social media platforms added filters that users could overlay on their faces. Retail organizations incorporated AR to showcase photorealistic rendered 3D products, enabling customers to place these products in a room and visualize them in any space.

Plus, enterprises in various industries like architecture, manufacturing, healthcare and more are using the technology to vastly improve workflows and create unique, interactive experiences. For example, architects and design teams are integrating AR for construction project monitoring, so they can see onsite progress and compare it to digital designs.

And though it’s still fairly new, MR is developing quickly in the XR space, as shown by the emergence of new headsets built for MR, including the Varjo XR-3. With MR headsets, professionals in engineering, design, simulation and research can develop and interact with their 3D models in the real world.

Varjo XR-3 headset. Image courtesy of Varjo.

The Future of XR

As XR technology advances, another technology is propelling users into a new era: artificial intelligence.

AI will play a major role in the XR space, from virtual assistants helping designers in VR to intelligent AR overlays that can walk individuals through do-it-yourself projects.

For example, imagine wearing a headset and directing content through natural speech and gestures. With hands-free, speech-driven virtual agents at the ready, even non-experts will be able to create amazing designs, complete exceedingly complex projects and harness the capabilities of powerful applications.

Platforms like NVIDIA Omniverse have already changed how users create 3D simulations and virtual worlds. Omniverse allows users from across the globe to develop and operate digital twin simulations. The platform provides users with the flexibility to portal into the physically accurate, fully ray-traced virtual world through 2D monitors, or their preferred XR experience, so they can experience vast virtual worlds immersively.

Entering the next evolution of XR, the possibilities are virtually limitless.

Learn more and see how organizations can integrate XR with NVIDIA technologies.

Featured blog image includes KPF and Lenovo.

The post What is Extended Reality? appeared first on NVIDIA Blog.

From Cloud to Car: How NIO Develops Intelligent Vehicles on NVIDIA HGX

Building next-generation intelligent vehicles requires an AI infrastructure that pushes the cutting edge.

Electric vehicle maker NIO is using NVIDIA HGX to build a comprehensive data center infrastructure for developing AI-powered, software-defined vehicles. With high-performance compute, the automaker can continuously iterate on sophisticated deep learning models, creating robust autonomous driving algorithms in a closed-loop environment.

“The complex scenarios faced by mass-produced cars and the massive amount of data these fleets generate are the cornerstones of NIO’s autonomous driving capabilities,” said Bai Yuli, head of AI Platforms at NIO. “By using NVIDIA high-performance compute solutions, NIO can accelerate the path to autonomous driving.”

NIO has already launched intelligent vehicles developed on this infrastructure, such as its fully electric, intelligent flagship sedan, the ET7. Its mid-size performance sedan, the ET5, is scheduled to debut in September.

In addition to high-performance data center development, both models are built on the Adam supercomputer, powered by four NVIDIA DRIVE Orin systems-on-a-chip. These vehicles feature autonomous driving and intelligent cockpit capabilities that are continuously iterated upon and improved in the data center for a revolutionary customer experience.

Building a High-Performance AI Infrastructure With NVIDIA GPUs and Networking

The role of the data center is to ingest, curate and label massive amounts of data for AI model training at scale.

Data collection fleets generate hundreds of petabytes of data and billions of images each year. This data is then used to optimize the deep neural networks (DNNs) that will run in the vehicles.

NIO’s scalable AI infrastructure is powered by NVIDIA HGX with eight A100 Tensor Core GPUs and NVIDIA ConnectX-6 InfiniBand adapters. The supercomputing cluster consists of NVMe SSD servers interconnected through the high-speed NVIDIA Quantum InfiniBand networking platform.

This powerful infrastructure allows large amounts of deep learning training data to be transferred to supercomputer memory or NVIDIA A100 GPU memory at speeds of up to 200 Gbps.
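For a rough sense of what that bandwidth means in practice, here is a back-of-the-envelope calculation (illustrative only, not a figure from NIO) of how long moving a petabyte of training data takes at that line rate:

```python
# Back-of-the-envelope: time to move training data over a 200 Gbps link.
# Illustrative only; real throughput is lower due to protocol and storage overheads.

def transfer_time_seconds(data_bytes: float, link_gbps: float) -> float:
    """Ideal transfer time: data in bits divided by the link rate in bits per second."""
    bits = data_bytes * 8
    return bits / (link_gbps * 1e9)

one_petabyte = 1e15  # bytes
secs = transfer_time_seconds(one_petabyte, 200)
print(f"1 PB at 200 Gbps: {secs / 3600:.1f} hours")  # 1 PB at 200 Gbps: 11.1 hours
```

Even under ideal conditions, a single petabyte occupies the link for roughly half a day, which is why interconnect speed matters at the scale of data NIO’s fleets generate.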

NVIDIA HGX A100 is a high-performance server platform designed for AI scenarios, including big datasets and complicated models like those that power autonomous vehicles. It incorporates a fully optimized NVIDIA AI software stack in NGC.

The platform sets a new compute density benchmark, condensing 5 petaflops of AI performance and replacing siloed infrastructures with a single platform for a wide range of complex AI applications.

With the HGX A100, NIO is able to flexibly develop and deploy scalable AI systems. It also enables the company to increase model development efficiency by up to 20x, allowing it to launch autonomous vehicles sooner and evolve to newer, faster architectures.

Forging Ahead

NIO is already off to the races with its software-defined lineup, announcing plans to double the annual capacity of its plant in Hefei, China, to 240,000 vehicles, with the facility ultimately capable of producing up to 300,000.

As NIO scales production capabilities and continues its expansion into global markets, NVIDIA HGX is scaling with it, enabling the deployment of one of the most advanced AI platforms in the automotive industry.

‘Fortnite’ Arrives This GFN Thursday With GeForce Performance You Can Touch

Fortnite on GeForce NOW with touch controls on mobile is now available to all members, streaming through the Safari web browser on iOS and the GeForce NOW Android app.

The full launch — including the removal of the waitlist — follows a successful beta period in which more than 500,000 participants streamed over 4 million sessions to hundreds of mobile device types.

The closed beta gave the GeForce NOW team the opportunity to test and learn, resulting in optimized on-screen touch controls and game menus, and gameplay that feels intuitive.

We’re thanking the beta participants, whose millions of streamed sessions made these improvements possible, with a three-day Priority membership — or three-day extension for existing RTX 3080 and Priority members. See below for details.

Streaming ‘Fortnite’ on GeForce NOW

GeForce NOW provides millions of gamers the opportunity to stream their favorite PC games — like Fortnite — to nearly any device. Games render on PCs in the cloud with high-performance NVIDIA GPUs and deliver gameplay back to those devices in a fraction of a second.

Getting started is easy. Visit the GeForce NOW membership page and choose from one of three levels of performance, including a Free membership option. RTX 3080 and Priority members get an upgraded experience with higher quality graphics, faster access to servers and more features. After signing up, download native GeForce NOW game streaming apps for PC, Mac, Android or TV, or start streaming right away from play.geforcenow.com.

Members can link their Epic Games account from the Settings > Connections menu to automatically sign in to Fortnite upon game launch.

Touch on GeForce NOW
Get in touch with your inner Battle Royale champion playing Fortnite across mobile devices.

With the addition of touch controls, Fortnite mobile players get GeForce performance they can touch. RTX 3080 members stream improved graphics that render at up to 120 frames per second on select 120Hz Android devices and can play with millions of other Fortnite players around the world.
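The arithmetic behind high-frame-rate cloud streaming is worth spelling out (a generic illustration, not GeForce NOW’s actual pipeline budget): at 120 frames per second, rendering, encoding, transmission and decoding must together fit inside roughly 8.3 milliseconds per frame.

```python
# Per-frame time budget for streamed gameplay at a given frame rate.
# Generic arithmetic, not GeForce NOW's actual pipeline breakdown.

def frame_budget_ms(fps: int) -> float:
    """Milliseconds available per frame at the given frame rate."""
    return 1000.0 / fps

for fps in (60, 120):
    print(f"{fps} fps -> {frame_budget_ms(fps):.2f} ms per frame")
# 60 fps -> 16.67 ms per frame
# 120 fps -> 8.33 ms per frame
```

Halving the budget from 60 to 120 fps is what makes end-to-end latency, from cloud GPU to 120Hz display, the hard part of the problem.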

GeForce NOW manages patches and game updates for members. As new seasons arrive or a new mode drops, members are always game ready. When no-build Battle Royale arrived in Fortnite with the launch of Fortnite Zero Build, GeForce NOW members were able to play right away.

For tips on gameplay mechanics or a refresher on playing Fortnite with touch controls, check out Fortnite’s Getting Started page.

A Three-Day Thank You to Beta Participants

As a thank you to members who helped improve Fortnite mobile touch controls during the closed beta period, participants can redeem a three-day Priority membership to GeForce NOW. Priority members get higher performance, access to premium servers and extended six-hour session lengths, for an upgraded gaming experience.

Members redeem the free promotional membership — or extension of their existing Founders, Priority or RTX 3080 membership — by logging into the redemption portal and following the instructions to activate. Beta participants will also receive an email in the coming days with additional details.

Get to Gaming

Vampire The Masquerade Swansong on GeForce NOW
Sink your teeth into Vampire: The Masquerade Swansong, a heart-pounding vampire story.

GFN Thursday always means more games. Members can find these and more streaming on the cloud this week:

Sign up for an RTX 3080 or Priority membership to play these and any of the 1,300 games available on the service.

With the power to take great gaming on the go with you, we’ve got a question this weekend. Let us know your answer on Twitter or in the comments below.

Mission Made Possible: Real-Time Rendering Helps Studio Create Cinematic Battle Between Characters From ‘Diablo Immortal’

Real-time rendering is helping one studio take virtual production to impossible heights.

In their latest project, the creators at Los Angeles-based company Impossible Objects were tasked with depicting an epic battle between characters from the upcoming video game, Diablo Immortal. But the showdown had to take place on the surface of a Google Pixel phone, set in the living room of a live actor.

The team at Impossible Objects brought this vision to life using accelerated virtual production workflows to blend visual effects with live action. Using Epic Games’ Unreal Engine and NVIDIA RTX A6000-powered Dell Precision 7920 workstations, the team created all the stunning cinematics and graphics, from high-fidelity textures and reflections to realistic camera movement and lighting.

These advanced technologies helped the artists make instant creative decisions as they could view high-quality virtual imagery rendered in real time.

“We can build larger, photorealistic worlds and not worry about relying on outdated creative workflows,” said Joe Sill, founder of Impossible Objects. “With Unreal Engine and NVIDIA RTX-powered Dell Precision workstations, we brought these Diablo Immortal characters to life.”

Real-Time Technologies Deliver Impossible Results

Previously, to tackle a project like this, the Impossible Objects team would look at concept art and storyboards to get an idea of what the visuals were supposed to look like. But with virtual production, the creators can work in a nonlinear way, bridging the gap between imagination and the final high-resolution images faster than before.

For the Diablo Immortal project, Impossible Objects used Unreal Engine for previsualization, where the artists were able to make creative, intentional decisions because they were seeing high-fidelity images in real time. Moreover, the previsualization happened simultaneously with the virtual art department and layout phases.

The team used NVIDIA RTX A6000-powered Dell Precision 7920 workstations, a combination that allowed the artists to enhance their virtual production and creative workflows. The RTX A6000 delivers 48 gigabytes of GPU memory, a crucial spec for offline rendering in Unreal Engine. With more GPU memory, the team had room for more geometry and higher-resolution textures.

“Rendering would not have been possible without the A6000 — we maxed out on its 48 gigs of memory, using all that room for textures, environments and geometry,” said Luc Delamare, head of Technology at Impossible Objects. “We could throw anything at the GPU, and we’d still have plenty of performance for real-time workflows.”

Typically, this project would have taken up to six months to complete. But the nonlinear approach enabled by the real-time pipeline allowed Impossible Objects to cut the production time in half.

The video game characters in the commercial were prebuilt and provided by Blizzard. Impossible Objects used Autodesk Maya to up-res the characters and scale them to perform better in a cinematic setting.

The team often toggled between compositing software, Autodesk Maya and Unreal Engine as they ported animation back and forth between the applications. And as the project started to get bigger, Impossible Objects turned to another solution: NVIDIA Deep Learning Super Sampling, an AI rendering technology that uses a neural network to boost frame rates and produce sharp images.

“NVIDIA DLSS was incredibly important, as we were able to use it in the real-time workflow, even with characters that had high polygon counts,” said Delamare. “This solution became really helpful, especially as the project started to get denser and denser.”

At the animation stage, Unreal Engine and NVIDIA RTX allowed the team to update cinematography and lighting simultaneously in real time. The result was fewer department handoffs, saving time and making creative communication more efficient.

With all of these advanced technologies combined, Impossible Objects had the power to create a more efficient, iterative process — one that allowed the team to ditch linear pipelines and instead take on a much more creative, collaborative workflow.

To learn more about the project, watch the video below:

Learn more about NVIDIA RTX technology in media and entertainment.

 

AI on the Ball: Startup Shoots Computer Vision to the Soccer Pitch

Eyal Ben-Ari just took his first shot on a goal of bringing professional-class analytics to amateur soccer players.

The CEO of startup Track160, in Tel Aviv, has seen his company’s AI-powered sports analytics software tested and used in the big leagues. Now he’s turning his attention to underserved amateurs in the clubs and community teams he says make up “the bigger opportunity” among the world’s 250 million soccer players.

“Almost everyone in professional sports uses data analytics today. Now we’re trying to enable any team at any level to capture their own data and analytics, and the only way to do it is leveraging AI,” he said.

A Kickoff Down Under

In April, the company launched its Coach160 software in Australia, where it’s getting kudos from amateur soccer clubs in Victoria and Queensland. It uses computer vision to let teams automatically generate rich reports and annotated videos with an off-the-shelf camera and a connection to the cloud.

“The analysis and data provided by Track160 will prove a wonderful resource for our coaches and players,” said Vaughn Coveny, a retired pro soccer player now working with multiple youth teams in the region.

Startup With an AI Heritage

Miky Tamir, a serial entrepreneur in sports tech, co-founded Track160 in 2017. The company’s investors include the Deutsche Fussball Liga, Germany’s national soccer league, which contributed annotated datasets from several of its seasons.

“That helped set a baseline, then we applied transfer learning and developed an ever-growing internal database,” said Tamir Anavi, Track160’s CTO.

Using video from a single camera, the company’s software identifies and tracks players as 3D skeletons, then tags events and actions as they move.

“We use deep learning in every step to understand where the camera is, where the pitch is and where the players are on it,” Anavi said.
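One building block in this kind of pipeline is mapping image-space detections onto the pitch plane once the camera has been located. The sketch below is a generic illustration of that step, not Track160’s actual code; the function name and homography values are hypothetical.

```python
# Illustrative sketch: projecting a detected player's image position onto
# pitch coordinates with a 3x3 planar homography H (row-major nested lists).
# Not Track160's code; a generic computer-vision building block.

def project_to_pitch(H, point):
    """Apply homography H to a 2D image point, returning pitch-plane coordinates."""
    x, y = point
    xp = H[0][0] * x + H[0][1] * y + H[0][2]
    yp = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return (xp / w, yp / w)  # perspective divide

# With the identity homography, a point maps to itself.
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(project_to_pitch(identity, (640, 360)))  # (640.0, 360.0)
```

In practice the homography would be estimated from known pitch landmarks (for example with OpenCV’s findHomography) and applied to each detected player’s foot point, frame by frame.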

With that information, the software delivers detailed analytics and more. It constructs a 3D model so players and coaches can view any part of the game from any perspective, providing what Ben-Ari calls “a metaverse experience.”

Software Certified by the Pros

The Coach160 software got high scores for speed and accuracy in a benchmark for electronic tracking systems created by FIFA, soccer’s international governing body, which comprises more than 200 member associations. “We delivered the same performance as others who used six times more cameras,” said Anavi.

One pro league uses the code to get real-time data on game days. It processes 4K video streams with four NVIDIA GPUs and libraries that accelerate the work.

When it comes to AI, Track160 relies on NVIDIA TensorRT to make its models lean so they run fast.

“We couldn’t do inference without it. The work went from being impossible to running smoothly and that got our system from a prototype to production,” said Anavi.

Track160 recently signed on as a member of NVIDIA Metropolis, a program for companies in intelligent video analytics. Ben-Ari says he’ll tap the program’s early access to technology and expertise to accelerate his company’s growth.

Looking Beyond Oz

Australia was a natural first target given its penchant for new technology and large number of amateur soccer players and clubs, said Ben-Ari, who is already planning a launch in the U.S.

Long term, the company plans to train models for other sports, too.

“We see a kind of viral effect where everyone will want to have this,” he said.

“As a dad, I want to know what’s happening when my daughter plays, and even if they’re not pros, people want to know their performance,” said Ben-Ari, who likes to pore over his stats from triathlons.

Concept Artist Pablo Muñoz Gómez Enlivens Fantasy Creatures ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology accelerates creative workflows. 

Concept artist Pablo Muñoz Gómez dives In the NVIDIA Studio this week, showcasing artwork that depicts a fantastical myth.

Gómez, a creator based in Australia, is equally passionate about helping digital artists, teaching 3D classes and running the ZBrushGuides website with his creative specialties: concept and character artistry.

“For me, everything starts with a story,” Muñoz Gómez said.

His 3D Forest Creature contains a fascinating myth. “The story of the forest creature is rather simple … a very small fantasy character that lives in the forest and spends his life balancing rocks. The larger the stones he manages to balance and stack on top of each other, the larger he’ll grow and the more invisible he’ll become. Eventually, he’ll reach a colossal size and disappear.”

3D Forest Creature sketch by Pablo Muñoz Gómez.

Gómez begins his journey in a 2D app, Krita, with a preliminary sketch. The idea is to figure out how many 3D assets will be needed while adding a little bit of color as reference for the palette later on.

Next, Gómez moves to ZBrush, where he uses custom brushes to sculpt basic models for the creature, rocks and plants. It’s the first of multiple leaps in his 2D to 3D workflow, detailed in this two-part 3D Forest Creature tutorial.

 

Gómez then turns to Adobe Substance 3D Painter to apply various colors and materials directly to his 3D models. Here, the benefits of NVIDIA RTX acceleration shine. NVIDIA Iray technology in the viewport enables Gómez to edit in real time and use ray-traced baking for faster rendering speeds — all accelerated by his GeForce RTX 3090 GPU.

Building and applying custom photorealistic textures in Adobe Substance 3D Sampler.

Seeking further customization for his background, Gómez downloads and imports a grass asset from the Substance 3D asset library into Substance 3D Sampler, adjusting a few sliders to create a photorealistic material. RTX-exclusive interactive ray tracing lets Gómez apply realistic wear-and-tear effects in real time, powered by his GPU.

3D workflows can be incredibly demanding. As Gómez notes, the right GPU allows him to focus on content creation. “Since I switched to the GeForce RTX 3090, I’m simply able to spend more time in the ‘creative stages’ and testing things to refine my concept when I don’t have to wait for a render or worry about optimizing a scene so I can see it in real time,” he said.

Getting close to exporting final renders.

Gómez sets up his scene in Marmoset Toolbag 4, where he switches the denoiser from CPU to GPU, a change that unlocks real-time ray tracing and smooth visuals in the viewport while he works. The setting is found under Lighting, then Ray Tracing, in the main menu.

With the scene in a good place after some edits, Gómez generates his renders.

 

He performs final compositing, lighting and color correction in Adobe Photoshop. With the addition of a new background, the scene is complete.

Thankfully, the 3D Forest Creature hasn’t disappeared … yet!

More 3D to Explore

Gómez has created several tutorials demonstrating 3D content creation techniques to aspiring artists. Check out this one on how to build a 3D scene from scratch.

Part one of the Studio Session, Creating Stunning 3D Crystals, offers an inside look at sketching and concepting in Krita and modeling in ZBrush, while part two focuses on baking in Adobe Substance 3D Painter and texturing in Marmoset Toolbag 4.

Generally, low-polygon models are great for 3D workflows on hardware that can’t handle high poly counts. Gómez’s Studio Session, Creating a 3D Low-Poly Floating Island, demonstrates how to build low-poly models like his Floating Island in ZBrush and touch them up in Adobe Photoshop.

However, with the graphics horsepower and AI benefits of NVIDIA RTX and GeForce RTX GPUs, 3D artists can work with high-polygon models quickly and easily.

Learning how to create in 3D takes ingenuity, notes Gómez: “You become more resourceful making your tools work for you in the way you want, even if that means finding a better tool to solve a particular process.” But with enough practice, as seen from the variety of Gómez’s portfolio, the results can be stunning.

Concept artist Pablo Muñoz Gómez.

Gómez is the founder of ZBrushGuides and the 3DConceptArtist academy. View his courses, tutorials, projects and more on his website.

Follow NVIDIA Studio on Facebook, Twitter and Instagram. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the NVIDIA Studio newsletter.

Broom, Broom: WeRide Revs Up Self-Driving Street Sweepers Powered by NVIDIA

When it comes to safety, efficiency and sustainability, autonomous vehicles are delivering a clean sweep.

Autonomous vehicle company and NVIDIA Inception member WeRide this month began a public road pilot of its Robo Street Sweepers. The vehicles, designed to perform round-the-clock cleaning services, are built on the high-performance, energy-efficient compute of NVIDIA.

The fleet of 50 vehicles is sweeping, sprinkling and spraying disinfectant in Guangzhou, China, all without a human driver at the wheel. The robo-sweepers run on a cloud-based fleet management platform that automatically schedules and dispatches vehicles using real-time information on daily traffic and routes.

Street sweeping is a critical municipal service. In addition to keeping dirt and debris off the road, it helps ensure trash and hazardous materials don’t flow into storm drains and pollute local waterways.

As cities grow, applying autonomous driving technology to street cleaning vehicles enables these fleets to run more efficiently and maintain cleaner and healthier public spaces.

Sweep Smarts

While street sweepers typically operate at lower speeds and in more constrained environments than robotaxis, trucks or other autonomous vehicles, they still require robust AI compute to safely operate.

Street cleaning vehicles must be able to drive in dense urban traffic, as well as in low-visibility conditions, such as nighttime and early morning. In addition, they have to detect and classify objects in the road as they clean.

To do so without a human at the wheel, these vehicles must process massive amounts of data from onboard sensors in real time. Redundant and diverse deep neural networks (DNNs) must work together to accurately perceive relevant information from this sensor data.

As a high-performance, software-defined AI compute platform, NVIDIA’s solution is designed to handle the large number of applications and DNNs that run simultaneously in autonomous vehicles, while achieving systemic safety standards.

A Model Lineup

The WeRide Robo Street Sweepers are the latest in the company’s stable of autonomous vehicles and its second purpose-built and mass-produced self-driving vehicle model.

WeRide has been developing autonomous technology on NVIDIA since 2017, building robotaxis, mini robobuses and robovans with the goal of accelerating intelligent urban transportation.

Its robotaxis have already provided more than 350,000 rides for 180,000 passengers since 2019, while its mini robobuses began public pilot operations in January.

The company is currently building its next-generation self-driving solutions on NVIDIA DRIVE Orin, using the high-performance AI compute platform to commercialize its autonomous lineup.

And with the addition of these latest vehicles, WeRide’s fleets are set to make a clean sweep.

The post Broom, Broom: WeRide Revs Up Self-Driving Street Sweepers Powered by NVIDIA appeared first on NVIDIA Blog.

Urban Jungle: AI-Generated Endangered Species Mix With Times Square’s Nightlife

Bengal tigers, red pandas and mountain gorillas are among the world’s most familiar endangered species, but tens of thousands of others — like the Karpathos frog, the Perote deer mouse or the Mekong giant catfish — are largely unknown.

Typically perceived as lacking star quality, these species are now roaming massive billboards in one of the world’s busiest destinations: an AI-powered initiative is spotlighting lesser-known endangered creatures across nearly 100 Times Square screens this month, nightly in the few minutes before midnight.

The project, dubbed Critically Extant, uses AI to illustrate the limited public data available on critically endangered flora and fauna. It’s the first deep learning art display in the Times Square Arts program’s decade-long history.

“A neural network can only create images based on what it’s seen in training data, and there’s very little information online about some of these critically endangered species,” said artist Sofia Crespo, who created the work with support from Meta Open Arts, using NVIDIA GPUs for AI training and inference. “This project is ultimately about representation — for us to recognize that we are biased towards some species versus others.”

Artwork courtesy of Sofia Crespo

These biases in representation have implications on the effort and funding given to save different species. Research has shown that a small subset of endangered species that are considered charismatic, cute or marketable receive more funding than they need, while most others receive little to no support.

When endangered species of any size — such as insects, fungi or plants — are left without conservation resources, they’re more vulnerable to extinction, contributing to a severe loss of biodiversity that makes ecosystems and food webs less resilient.

Intentionally Imperfect Portraits

The AI model, created by Crespo and collaborator Feileacan McCormick, was trained on a paired dataset of nearly 3 million nature images and text describing around 10,000 species. But this still wasn’t enough data to create true-to-life portraits of the less popular endangered species.

So the deep learning model, a generative adversarial network, does the best it can, guessing the features of a given endangered species based on related species. Due to the limited source data, many of the AI-generated creatures have a different color or body shape than their real-life counterparts — and that’s the point.
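The fallback behavior described above can be illustrated with a toy sketch; this is not the artists' actual GAN, and every species name, image count and feature vector below is hypothetical. The point it shows: when a species is barely represented in the training data, the model can only condition on the features of a better-documented relative, which is why the portraits drift in color and shape:

```python
import math

# Toy dataset: species name -> (images available, feature vector).
species = {
    "bengal tiger":   (5000, [0.9, 0.1]),
    "house cat":      (9000, [0.8, 0.2]),
    "karpathos frog": (3,    [0.1, 0.9]),
    "common frog":    (4000, [0.2, 0.8]),
}

def conditioning_vector(name, min_images=50):
    """Use the species' own features if it is well represented;
    otherwise borrow from the nearest well-documented relative."""
    count, vec = species[name]
    if count >= min_images:
        return vec
    candidates = [(c, v) for n, (c, v) in species.items()
                  if n != name and c >= min_images]
    # Nearest neighbor in feature space among well-documented species.
    return min(candidates, key=lambda cv: math.dist(cv[1], vec))[1]

borrowed = conditioning_vector("karpathos frog")
```

Here the Karpathos frog, with only 3 toy images, ends up rendered with the common frog's features rather than its own.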

“Part of the project was relying on the open-source data that’s available right now,” said Crespo. “If that’s all the data we have, and species go extinct, what kind of knowledge and imagination do we have about the world that was lost?”

Critically Extant features more than 30 species, including amphibians, birds, fish, flowering plants, fungi and insects. After feeding species names to the generative AI model, Crespo animated and processed the synthetic images further to create the final moving portraits.

The AI model behind this project was trained using a cluster of NVIDIA Tensor Core GPUs. Crespo used a desktop NVIDIA RTX A6000 GPU for what she called “lightning-quick” inference.

AI in the Public Square

Critically Extant’s Times Square display premiered on May 1 and will be shown nightly through the end of the month.

Image by Michael Hull/Times Square Arts

The three-minute display features all 30+ specimens in a randomized arrangement that shifts every 30 seconds or so. Crespo said that using the NVIDIA RTX A6000 GPU was essential to generate the high-resolution images needed to span dozens of digital billboards.
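As a rough sketch of the display logic described above, assuming placeholder species names and a screen count of 96 for "nearly 100 screens":

```python
import random

def build_schedule(specimens, n_screens, n_shuffles, seed=0):
    """Assign a random specimen to each screen, re-randomizing the
    whole arrangement for each ~30-second interval of the display."""
    rng = random.Random(seed)  # seeded for reproducibility
    return [[rng.choice(specimens) for _ in range(n_screens)]
            for _ in range(n_shuffles)]

schedule = build_schedule(
    specimens=[f"species_{i}" for i in range(30)],
    n_screens=96,
    n_shuffles=6,  # 3-minute display / ~30 s per arrangement
)
```

The result is six randomized arrangements, one per interval, each covering every screen.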

Crespo and McCormick, who run an ecology and AI-focused studio, also enhanced the art display with an AI-generated soundtrack trained on a diverse range of animal sounds.

“The idea is to show diversity with many creatures, and overwhelm the audience with creatures that look very different from one another,” Crespo said.

The project began as an exhibition on Instagram, with the goal of adding representation of critically endangered species to social media conversations. At Times Square, the work will reach an audience of hundreds of thousands more.

“Crespo’s work brings the natural world directly into the center of the very urban environment at odds with these at-risk species, and nods to the human changes that will be required to save them,” reads the Times Square Arts post.

Crespo and McCormick have showcased their work at NVIDIA GTC, most recently an AI-generated fragment of coral reef titled Beneath the Neural Waves.

Learn more about AI artwork by Crespo and McCormick on the NVIDIA AI Art Gallery, and catch Critically Extant in Times Square through May 31.

Times Square images courtesy of Times Square Arts, photographed by Michael Hull. Artwork by Sofia Crespo. 

The post Urban Jungle: AI-Generated Endangered Species Mix With Times Square’s Nightlife appeared first on NVIDIA Blog.

GFN Thursday Gets Groovy As ‘Evil Dead: The Game’ Marks 1,300 Games on GeForce NOW

Good. Bad. You’re the Guy With the Gun this GFN Thursday.

Get ready for some horrifyingly good fun with Evil Dead: The Game, streaming on GeForce NOW at release tomorrow. It’s the 1,300th game to join GeForce NOW, arriving, fittingly, on Friday the 13th.

And it’s one of eight games joining the GeForce NOW library this week.

Hail to the King, Baby

Step into the shoes of Ash Williams and friends from the iconic Evil Dead franchise in Evil Dead: The Game (Epic Games Store), streaming on GeForce NOW at release tomorrow.

Work together in a game loaded with over-the-top co-op and PvP action across nearly all your devices. Grab your boomsticks, chainsaws and cleavers to fight against the armies of darkness, even on a Mac. Or, from a mobile phone, take control of the Kandarian Demon to hunt the heroes by possessing Deadites, the environment and even the survivors themselves.

For RTX 3080 members, the horror comes to life with realistic visuals and a physics-based gore system, enhanced by NVIDIA DLSS – the groundbreaking AI rendering technology that increases graphics performance by boosting frame rates and generating beautiful, sharp images.

Plus, everything is better in 4K. Whether you’re tearing a Deadite in two with Ash’s chainsaw hand or flying through the map as the Kandarian Demon, RTX 3080 members playing from the PC and Mac apps can bring the bloodshed in all its glory, streaming at up to 4K resolution and 60 frames per second.

There’s No Time Like Playtime

Brigandine The Legend of Runersia on GeForce NOW
Become a ruler, command knights and monsters, and outplay your enemies in Brigandine The Legend of Runersia.

Not a spooky fan? That’s okay. There’s fun for everyone with eight new games streaming this week:

  • Achilles: Legends Untold (New release on Steam)
  • Brigandine The Legend of Runersia (New release on Steam)
  • Neptunia x SENRAN KAGURA: Ninja Wars (New release on Steam)
  • Songs of Conquest (New release on Steam and Epic Games Store)
  • Cepheus Protocol Anthology (New release on Steam, May 13)
  • Evil Dead: The Game (New release on Epic Games Store, May 13)
  • Pogostuck: Rage With Your Friends (Steam)
  • Yet Another Zombie Defense HD (Steam)

With the armies of darkness upon us this weekend, we’ve got a question for you: how are your chances looking? Let us know on Twitter or in the comments below.

The post GFN Thursday Gets Groovy As ‘Evil Dead: The Game’ Marks 1,300 Games on GeForce NOW appeared first on NVIDIA Blog.

Creator Karen X. Cheng Brings Keen AI for Design ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology accelerates creative workflows.

The future of content creation is in AI. This week In the NVIDIA Studio, discover how AI-assisted painting is bringing a new level of inspiration to the next generation of artists.

San Francisco-based creator Karen X. Cheng is at the forefront of using AI to design amazing visuals. Her innovative work brings eye-catching effects to social media videos for brands like Adobe, Beats by Dre and Instagram.

Cheng’s work bridges the gap between emerging technologies and creative imagery, and her inspiration can come from anywhere. “I usually get ideas when I’m observing things — whether that’s taking a walk or scrolling in my feed and seeing something cool,” she said. “Then, I’ll start jotting down ideas and sketching them out. I’ve got a messy notebook full of ideas.”

When inspiration hits, it’s important to have the right tools. Cheng’s ASUS Zenbook Pro Duo — an NVIDIA Studio laptop that comes equipped with up to a GeForce RTX 3080 GPU — gives her the power she needs to create anywhere.

Paired with the NVIDIA Canvas app, a free download available to anyone with an NVIDIA RTX or GeForce RTX GPU, Cheng can easily create and share photorealistic imagery. Canvas is powered by the GauGAN2 AI model and accelerated by Tensor Cores found exclusively on RTX GPUs.

“I never had much drawing skill before, so I feel like I have art superpowers.”

The app uses AI to interpret basic lines and shapes, translating them into realistic landscape images and textures. Artists of all skill levels can use this advanced AI to quickly turn simple brushstrokes into realistic images, speeding up concept exploration and allowing for increased iteration, while freeing up valuable time to visualize ideas.
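Conceptually, a GauGAN-style app treats the painting as a segmentation map of material labels and conditions image synthesis on that map. The toy renderer below maps labels to flat colors as a stand-in for the generator; it is not NVIDIA Canvas’ actual model, and the materials and colors are illustrative:

```python
# Toy palette: material label -> RGB color. A real model synthesizes
# realistic texture per region instead of a flat fill.
MATERIALS = {
    "sky":      (135, 206, 235),
    "mountain": (120, 110, 100),
    "water":    (30, 90, 160),
}

def paint(canvas, material, rows):
    """Fill the given rows of the segmentation map with one material,
    like dragging a labeled brush across the canvas."""
    for r in rows:
        canvas[r] = [material] * len(canvas[r])

def render(canvas):
    """Stand-in for the generator: turn the label map into an image."""
    return [[MATERIALS[label] for label in row] for row in canvas]

canvas = [["sky"] * 4 for _ in range(3)]  # 3x4 map, all sky
paint(canvas, "water", rows=[2])          # brush water along the bottom
image = render(canvas)
```

The key design idea is that the artist edits labels, not pixels: the model, not the painter, is responsible for making each region look photorealistic.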

“I’m excited to use NVIDIA Canvas to be able to sketch out the exact landscapes I’m looking for,” said Cheng. “This is the perfect sketch to communicate your vision to an art director or location scout. I never had much drawing skill before, so I feel like I have art superpowers with this thing.”

Powered by GauGAN2, Canvas turns Cheng’s scribbles into gorgeous landscapes.

Cheng plans to put these superpowers to the test in an Instagram live stream on Thursday, May 12, where she and her AI Sketchpad collaborator Don Allen Stevenson III will race to paint viewer challenges using Canvas.

The free Canvas app is updated regularly, adding new materials, styles and more.

Tune in to contribute, and download NVIDIA Canvas to see how easy it is to paint by AI.

With AI, Anything Is Possible

Turning scribbles into van Gogh-worthy paintings is just one of the ways that NVIDIA Studio is transforming creative technology through AI.

NVIDIA Broadcast uses AI running on RTX GPUs to improve audio and video for broadcasters and live streamers. The newest version can run multiple neural networks to apply background removal, blur and auto-frame for webcams, and remove noise from incoming and outgoing sound.

3D artists can take advantage of AI denoising in Autodesk Maya and Blender software, refine color detail across high-resolution RAW images with Lightroom’s Enhance Details tool, enable smooth slow motion with retained b-frames using DaVinci Resolve’s SpeedWarp and more.

NVIDIA AI researchers are working on new models and methods to fuel the next generation of creativity. At GTC this year, NVIDIA debuted Instant NeRF technology, which uses AI models to transform 2D images into high-resolution 3D scenes, nearly instantly.

Instant NeRF is an emerging AI technology that Cheng already plans to implement. She and her collaborators have started experimenting with bringing 2D scenes to 3D life.

More AI Tools In the NVIDIA Studio

AI is being used to tackle complex and incredibly challenging problems. Creators can benefit from the same AI technology that’s applied to healthcare, automotive, robotics and countless other fields.

The NVIDIA Studio YouTube channel offers a wide range of tips and tricks, tutorials and sessions for beginning to advanced users.

CGMatter hosts Studio speedhack tutorials for beginners, showing how to use AI viewport denoising and AI render denoising in Blender.

Many of the most popular creative applications from Adobe have AI-powered features to speed up and improve the creative process.

Neural Filters in Photoshop, Auto Reframe and Scene Edit Detection in Premiere Pro, and Image to Material in Substance 3D all make creating quicker and easier through the power of AI.

Follow NVIDIA Studio on Instagram, Twitter and Facebook; access tutorials on the Studio YouTube channel; and get updates directly in your inbox by signing up for the NVIDIA Studio newsletter.

The post Creator Karen X. Cheng Brings Keen AI for Design ‘In the NVIDIA Studio’ appeared first on NVIDIA Blog.
