Do Pass Go, Do Collect More Games: Xbox Game Pass Coming to GeForce NOW

Xbox Game Pass support is coming to GeForce NOW.

Members will soon be able to play supported PC games from the Xbox Game Pass catalog through NVIDIA’s cloud gaming servers. Learn more about how support for Game Pass and Microsoft Store will roll out in the coming months.

Plus, Age of Empires IV: Anniversary Edition is the first title from the world’s most popular real-time strategy franchise to arrive on GeForce NOW.

A Game Pass-tic Partnership

As announced over the weekend, Game Pass members will soon be able to play supported PC games from the Game Pass catalog with GeForce NOW.

We’re working closely with Microsoft to enable members to play select PC titles from Microsoft Store, just as they can today on GeForce NOW with their Steam, Epic Games Store, Ubisoft Connect and GOG.com accounts. Members who are subscribed to PC Game Pass or Xbox Game Pass Ultimate will be able to stream these select PC titles from the Game Pass library — without downloads or additional purchases for instant gaming from the cloud.

With hundreds of PC titles available in the Game Pass catalog, Xbox and PC gamers together can look forward to future GFN Thursdays to see what’s next. PC games from Xbox Game Studios and Bethesda on Steam and Epic Games Store will continue to be released, giving members more ways to play their favorite Xbox titles.

And with the ability for GeForce NOW members to stream at high performance across devices, including PCs, Macs, mobile devices, smart TVs, gaming handheld devices and more, gamers everywhere will be able to take their Xbox PC games wherever they go, along with the over 1,600 titles in the GeForce NOW library.

For an even better experience, upgrade to an Ultimate or Priority membership to skip the wait ahead of free members and get into gaming even faster.

Build Your Empire — and Library

Age of Empires IV on GeForce NOW
Siege the moment!

Conquer the lands in Microsoft’s award-winning Age of Empires franchise this week.

Age of Empires IV: Anniversary Edition takes the world’s most popular real-time strategy game to the next level with familiar and new ways for players to expand their empire. The Anniversary Edition brings all the latest updates, including new civilizations — the Ottomans and Malians — maps, languages, challenges and more. Choose the path to greatness and become a part of history through Campaign Story Mode with a tutorial designed for first-time players, or challenge the world in competitive or cooperative online matches that include ranked seasons.

Ultimate members can rule the kingdom in stunning 4K or ultrawide resolutions, and settle in with up to eight-hour streaming sessions.

What to Play This Week

Dordogne on GeForce NOW
Hand-painted nostalgia in the cloud this summer.

Take a look at the two new games available to stream this week:

  • Dordogne (New release on Steam)
  • Age of Empires IV: Anniversary Edition (Steam)

Before the weekend arrives, check out our question of the week. Let us know your answer on Twitter or in the comments below.

Forged in Flames: Startup Fuses Generative AI, Computer Vision to Fight Wildfires

When California skies turned orange in the wake of devastating wildfires, a startup fused computer vision and generative AI to fight back.

“With the 2020 wildfires, it became very personal, so we asked fire officials how we could help,” said Emrah Gultekin, the Turkish-born CEO of Chooch, a Silicon Valley-based leader in computer vision.

California utilities and fire services, they learned, were swamped with as many as 2,000 false positives a week from an existing wildfire detection system. The false alarms were triggered by fog, rain and smudges on the lenses of the network of cameras they used.

So, in a pilot project, Chooch linked its fire detection software to the camera network. It analyzed snapshots every 15 minutes, seeking signs of smoke or fire.

Generative AI Sharpens Computer Vision

Then, the team led by Hakan Gultekin — Emrah’s brother, a software wiz and Chooch’s CTO — had an idea.

They built a generative AI tool that automatically created descriptions of each image, helping reviewers discern whether smoke was present. False positives dropped from 2,000 a week to eight.
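Chooch hasn’t published its code, but the general pattern is easy to sketch with open-source tools: run each camera snapshot through an image-captioning model and surface the description alongside the alert. Below is a minimal illustration using a public BLIP model from Hugging Face; the model choice and file name are assumptions, not Chooch’s actual stack.

```python
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Open-source captioning model used purely for illustration;
# Chooch's production system is proprietary and likely differs.
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

# A snapshot pulled from one of the fixed wildfire cameras (hypothetical file name)
frame = Image.open("camera_042_snapshot.jpg").convert("RGB")

inputs = processor(images=frame, return_tensors="pt")
caption_ids = model.generate(**inputs, max_new_tokens=30)
caption = processor.decode(caption_ids[0], skip_special_tokens=True)

# A reviewer (or a simple keyword filter) can now check the description for smoke
print(caption)
if "smoke" in caption.lower() or "fire" in caption.lower():
    print("Possible wildfire - escalate for human review")
```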

Startup Chooch uses generative AI and computer vision to detect wildfires.
Chooch detects smoke and fire despite bad weather or dirty camera lenses.

“Fire chiefs were excited about launching the technology in their monitoring centers and what it could achieve,” said Michael Liou, the president of Chooch, who detailed the project in a recent webinar.

Chooch’s generative AI tool gives firefighters in California’s Kern County a dashboard on their smartphones and PCs, populated in real time with alerts, so they can detect wildfires fast.

In 2020, California experienced 9,900 wildfires that burned 4.3 million acres of forest and caused $19 billion in losses. Stopping one fire from spreading out of control would pay for the wildfire detection system for 50 years, the company estimates.

A Vision for Gen AI

Chooch’s CEO says it’s also the shape of things to come.

Emrah Gultekin, CEO of Chooch
Emrah Gultekin

“The fusion of large language models and computer vision will bring about even more powerful and accurate products that are easier to deploy,” said Gultekin.

For example, utilities can connect the software to drones and fixed cameras to detect corrosion on capacitors or vegetation encroaching on power lines.

The technology could see further validation as Chooch enters an $11 million Xprize challenge on detecting and fighting wildfires. Sponsors include PG&E and Lockheed Martin, which is building an AI lab to predict and respond to wildfires in a separate collaboration with NVIDIA.

Startup Chooch delivers real-time alerts to smartphone and desktop PC dashboards for firefighters
Dashboards for PCs and smartphones can update firefighters with real-time alerts from Chooch’s software.

Chooch applies its technology to a host of challenges in manufacturing, retail and security.

For example, one manufacturer uses Chooch’s models to detect defects before products ship. Eliminating just 20% of the faults will pay for the system several times over.

Inception of a Partnership

Back in 2019, a potential customer in the U.S. government asked for support with edge deployments it planned on NVIDIA GPUs. Chooch joined NVIDIA Inception, a free program that nurtures cutting-edge startups.

Using NGC, NVIDIA’s hub for accelerated software, Hakan was able to port Chooch’s code to NVIDIA GPUs over a weekend. Now its products run on NVIDIA Jetson modules and “have been tested in the wild with full-motion video and multispectral data,” Emrah said.

Since then, the company has rolled out support for GPUs in data centers and beyond. For example, the wildfire use case runs on NVIDIA A100 Tensor Core GPUs in the cloud.

Along the way, Chooch embraced software like Triton Inference Server and the NVIDIA DeepStream software development kit.

“The combination of DeepStream and Triton increased our capacity 8x to run more video streams on more AI models — that’s a huge win,” Emrah said.
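As a rough illustration of what serving a model behind Triton looks like from the application side, here’s a minimal Python client sketch using NVIDIA’s tritonclient library. The model name, tensor names and shapes are placeholders, not details of Chooch’s deployment.

```python
import numpy as np
import tritonclient.http as httpclient

# Model and tensor names below are hypothetical placeholders
client = httpclient.InferenceServerClient(url="localhost:8000")

# One normalized RGB frame batched for inference
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)

inputs = [httpclient.InferInput("input", list(frame.shape), "FP32")]
inputs[0].set_data_from_numpy(frame)
outputs = [httpclient.InferRequestedOutput("scores")]

response = client.infer(model_name="smoke_detector", inputs=inputs, outputs=outputs)
scores = response.as_numpy("scores")
print("smoke probability:", float(scores.max()))
```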

A Wide Horizon

Now Chooch is expanding its horizons.

The company is a member of the partner ecosystems for NVIDIA Metropolis for intelligent video analytics and NVIDIA Clara Guardian, edge AI software for smart hospitals. Chooch also works with NVIDIA’s retail and telco teams.

The software is opening new doors and expanding the use cases it can address.

“It’s hard work because there’s so much uncharted territory, but that’s also what makes it exciting,” Emrah said.

Learn more about generative AI for enterprises, and explore NVIDIA’s solutions for power grid modernization.

Filmmaker Sara Dietschy Talks AI This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

With over 900,000 subscribers on her YouTube channel, editor and filmmaker Sara Dietschy creates docuseries, reviews and vlogs that explore the intersection of technology and creativity. The Los Angeles-based creator shares her AI-powered workflow this week In the NVIDIA Studio, and it’s just peachy — a word that happens to rhyme with her last name.

Dietschy explained in a recent video how five AI tools helped save over 100 hours of work, powered by NVIDIA Studio technology.

“If you do any kind of 3D rendering on the go, a dedicated NVIDIA RTX GPU is nonnegotiable.” — Sara Dietschy

She takes a practical approach, showing how these tools, running on laptops powered by GeForce RTX 40 Series GPUs, tackle the otherwise manual work that can make nonlinear editing tedious. Using tools like AI Relighting, Video Text Editing and more in DaVinci Resolve software, Dietschy saves time on every project — and for creators, time is money, she said.

The NVIDIA Studio team spoke with Dietschy about how she uses AI, how technology can simplify artists’ processes, and how the NVIDIA Studio platform supercharged her creativity and video-editing workflows.

Dietschy takes a break to sit down with the Studio team.

Studio team: What AI features do you use most commonly?

Dietschy: In DaVinci Resolve, there’s neural engine text-based editing, automatic subtitles, Magic Mask and Detect Scene Cuts — all AI-powered features I use daily. And the relighting feature in DaVinci Resolve is crazy good.

In addition, ChatGPT and Notion AI sped up copywriting for my website and social media posts, so I could focus on video editing.

Now you see Dietschy, now you don’t — Magic Mask in DaVinci Resolve.

Studio team: How do you use Adobe Premiere Pro? 

Dietschy: In the beta version, my entire video can be transcribed quickly, and Premiere Pro can even detect silence. Just click on the three dots in the text, hit delete and boom — AI conveniently edits out that awkward pause. No need for me to hop back and forth.

Plus, Auto Reframe and Unsharp Mask are popular AI features in Premiere Pro that are worth looking into.

AI detects pauses and jump cuts.

Studio team: What prompted the regular use of AI-powered tools and features? 

Dietschy: My biggest pet peeve is when a program offers really cool features but requires uploading everything to a web app or starting a completely new workflow. Once these features were made available directly in the apps I already use, things became so much more efficient, which is why I now use them on the daily.

Access numerous AI-powered features in DaVinci Resolve.

Studio team: For the non-technical people out there, why does GPU acceleration in creative apps matter?

Dietschy: For video editors, GPU acceleration — which is basically a graphics card making the features and effects in creative apps faster — especially in DaVinci Resolve, is everything. It scrubs through footage and playback, and crushes export times. This ASUS Zenbook Pro 14 OLED Studio laptop exported a recent hour-plus-long 4K video in less than 14 minutes. If you release new content every week, like me, time saved is gold.

NVIDIA GeForce RTX 4070 GPU-accelerated encode (NVENC) speeds up video exporting up to 5x.

Studio team: Would you recommend GeForce RTX GPUs to other video editors?

Dietschy: Absolutely. A big unlock for me was getting a desktop computer with a nice processor and an NVIDIA GPU. I was just amazed at how much smoother things went.

The ASUS Zenbook Pro 14 OLED NVIDIA Studio laptop.

Studio team: If you could go back to the beginning of your creative journey, what advice would you give yourself?

Dietschy: Don’t focus so much on quantity. Instead, take the time to add structure to your process, because being a “messy creative” only seems cool at first. Organization is already paying crazy dividends in better sleep and mental health.

For more AI insights, watch Dietschy’s video on the dozen-plus AI tools creators should use:

Find more on Sara Dietschy’s YouTube channel.

Influencer and creator Sara Dietschy.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.

Rendered.ai Integrates NVIDIA Omniverse for Synthetic Data Generation

Rendered.ai is easing AI training for developers, data scientists and others with its platform-as-a-service for synthetic data generation, or SDG.

Training computer vision AI models requires massive, high-quality, diverse and unbiased datasets. These can be challenging and costly to obtain, especially as demands both on and for AI keep increasing.

The Rendered.ai platform-as-a-service helps to solve this issue by generating physically accurate synthetic data — data that’s created from 3D simulations — to train computer vision models.

“Real-world data often can’t capture all of the possible scenarios and edge cases necessary to generalize an AI model, which is where SDG becomes key for AI and machine learning engineers,” said Nathan Kundtz, founder and CEO of Rendered.ai, which is based in Bellevue, Wash., a Seattle suburb.

A member of the NVIDIA Inception program for cutting-edge startups, Rendered.ai has now integrated NVIDIA Omniverse Replicator, a core extension of the Omniverse platform for developing and operating industrial metaverse applications, into its own platform.

Omniverse Replicator enables developers to generate labeled synthetic data for many such applications, including visual inspection, robotics and autonomous driving. It’s built on open standards for 3D workflows, including Universal Scene Description (“OpenUSD”), Material Definition Language (MDL) and PhysX.

Synthetic images generated with Rendered.ai have been used to model landscapes and vegetation for virtual worlds, detect objects in satellite imagery, and even test the viability of human oocytes, or egg cells.

Synthetic imagery generated using Omniverse Replicator. Image courtesy of Rendered.ai.

With Rendered.ai tapping into the RTX-accelerated functionalities of Omniverse Replicator — such as ray tracing, domain randomization and multi-sensor simulation — computer vision engineers, data scientists and other users can quickly and easily generate synthetic data through a simple web interface in the cloud.
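For a sense of what such an SDG script looks like under the hood, below is a minimal sketch modeled on the public Omniverse Replicator getting-started example: it builds a simple scene, randomizes object poses each frame and writes RGB images with bounding-box labels. It’s meant to run inside an Omniverse Kit app, and exact module and parameter names can vary between Replicator releases.

```python
import omni.replicator.core as rep

with rep.new_layer():
    # Simple scene: a camera, a light and a few semantically labeled props
    camera = rep.create.camera(position=(0, 0, 1000))
    render_product = rep.create.render_product(camera, (1024, 1024))
    rep.create.light(light_type="dome")

    torus = rep.create.torus(semantics=[("class", "torus")], position=(0, -200, 100))
    sphere = rep.create.sphere(semantics=[("class", "sphere")], position=(100, -200, 100))

    # Domain randomization: jitter pose and scale on every rendered frame
    with rep.trigger.on_frame(num_frames=100):
        with rep.create.group([torus, sphere]):
            rep.modify.pose(
                position=rep.distribution.uniform((-300, -300, 0), (300, 300, 300)),
                scale=rep.distribution.uniform(0.5, 2.0),
            )

    # Write RGB images plus 2D bounding-box labels for training
    writer = rep.WriterRegistry.get("BasicWriter")
    writer.initialize(output_dir="_out_sdg", rgb=True, bounding_box_2d_tight=True)
    writer.attach([render_product])
```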

“The data that we have to train AI is really the dominant factor on the AI’s performance,” Kundtz said. “Integrating Omniverse Replicator into Rendered.ai will enable new levels of ease and efficiency for users tapping synthetic data to train bigger, better AI models for applications across industries.”

Rendered.ai will demonstrate its platform integration with Omniverse Replicator at the Conference on Computer Vision and Pattern Recognition (CVPR), running June 18-22 in Vancouver, Canada.

Synthetic Data Generation in the Cloud

Rendered.ai, now available through AWS Marketplace, brings to the cloud a collaborative web interface for developers and teams to design SDG applications that can be easily configured by computer vision engineers and data scientists.

It’s a one-stop shop for people to share workspaces containing SDG datasets, tasks, graphs and more — all through a web browser.

A view of the Rendered.ai platform-as-a-service, available on a web browser. Image courtesy of Rendered.ai.

Omniverse Replicator tutorials, examples and other 3D assets can now be easily tapped in a Rendered.ai channel running on AWS infrastructure. This gives developers a starting point for building their own SDG capabilities in the cloud. And the process will be especially seamless for Omniverse Replicator users who are already familiar with these tools.

Rendered.ai focuses on offering physically accurate synthetic data through the platform, as this enables introducing new information to AI systems based on how physical processes work in the real world, according to Kundtz.

“In the future, every company using AI is going to have a synthetic data engineering team for physics-based modeling and domain-specific AI-model building,” he added. “It’s often not good enough to simulate something once, as complex AI tasks require training on large batches of data covering diverse scenarios — this is where synthetic data becomes key.”

Hear more from Kundtz on the NVIDIA AI Podcast.

Learn more about Omniverse Replicator, and visit Rendered.ai at CVPR booth 1125 to see a demo of this platform integration.

Explore the new cloud capability on the Rendered.ai site and sign up for a 30-day trial of the Rendered.ai Developer account using the content code “OVDEMO” — which will enable access to the new Rendered.ai Channel for Omniverse.

Featured image courtesy of Rendered.ai.

NVIDIA and Hexagon Deliver Suite of Solutions for Accelerating Industrial Digitalization

For industrial businesses to reach the next level of digitalization, they need to create accurate, virtual representations of their physical systems.

NVIDIA is working with Hexagon, the Stockholm-based global leader in digital reality solutions combining sensor, software and autonomous technologies, to equip enterprises with the tools and solutions they need to build physically accurate, perfectly synchronized, AI-enabled digital twins that can be used to transform their organizations.

Hexagon is building integrations from its HxDR reality-capture and Nexus manufacturing platforms to NVIDIA Omniverse, an open platform for developing and operating industrial metaverse applications, via Universal Scene Description (“OpenUSD”) plug-ins. The connected platforms, powered by NVIDIA AI technologies, will provide benefits across Hexagon’s major ecosystems, including agriculture, autonomous mobility, buildings, cities, defense, infrastructure, manufacturing and mining.

Together, these solutions deliver seamless, collaborative planning through a unified view, so industrial customers can better optimize workflows and improve efficiencies at scale. Professionals and developers will be able to use advanced capabilities in reality capture, digital twins, AI, simulation and visualization to enhance the most complex graphics workflows — from virtual prototyping to digital factories.

Fusing Physical and Digital Worlds Into One Reality

The $46 trillion manufacturing industry encompasses millions of factories worldwide designing and developing new products. Digitalization allows manufacturers to tackle the most complex engineering problems in more efficient, productive ways. It also brings industrial businesses one step closer to automating their workflows and becoming software-defined, which means improving operational efficiency and transforming their services with software.

At the HxGN LIVE Global event, Hexagon and NVIDIA showcased how their integrated offering can help teams accelerate their digitalization journeys. Watch the demo below to see how designers, engineers and others can use the Omniverse platform to quickly aggregate and simulate ultra-complex data from Hexagon’s HxDR and Nexus platforms.

Hexagon is also developing an AI-enabled web application, based on Omniverse, which will allow teams to see real-time comparisons of digital twins and their physical counterparts, so they can accelerate decision-making while optimizing planning and operations. This solution will help enterprises unlock more collaborative workflows and achieve rapid iteration across their teams, wherever they’re located.

With this announcement, the Omniverse ecosystem will benefit from Hexagon’s digital reality expertise, including geospatial reality capture, sensors, software and autonomous technologies. Enterprises will be able to build, simulate, operate and optimize virtual worlds faster, more accurately and more easily than ever before.

Learn more about NVIDIA Omniverse. Read Hexagon’s latest announcement, and see the latest demos and exhibits at HxGN LIVE Global 2023. 

Meet the Maker: Software Engineer Ramps Up NVIDIA Jetson to Build Self-Driving Skate Park

Kirk Kaiser

Kirk Kaiser grew up a fan of the video game Paperboy, where players act as cyclists delivering newspapers while encountering various obstacles, like ramps that appear in the middle of the street.

This was the inspiration behind the software developer’s latest project using the NVIDIA Jetson platform for edge AI and robotics — a self-driving skate ramp.

“I wanted the absurdity and fun of Paperboy to be a part of my life,” said Kaiser, an avid skateboarder based in Naples, Fla. “I was boarding one day with my dog Benji running beside me and I was like, ‘What if I had ramps that came with me?’”

He’s now building just that — technology that could lead to a portable, autonomous skate park.

So far, he’s developed an electric platform that can elevate a ramp and make it level with the ground. It’s steerable using a PS4 controller linked via Bluetooth to an NVIDIA Jetson Nano Developer Kit.
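Reading a Bluetooth-paired PS4 controller on a Jetson running Linux takes only a few lines of Python. The sketch below uses pygame’s joystick API; the drive() helper is a hypothetical stand-in for whatever motor-controller interface the platform actually uses.

```python
import os
import pygame

# Run without an attached display (the Jetson may be headless)
os.environ.setdefault("SDL_VIDEODRIVER", "dummy")
pygame.init()
pygame.joystick.init()

# A Bluetooth-paired PS4 controller appears as the first joystick device
pad = pygame.joystick.Joystick(0)
pad.init()

def drive(steer: float, throttle: float) -> None:
    """Hypothetical helper; real code would send commands to the platform's motor controller."""
    print(f"steer={steer:+.2f} throttle={throttle:+.2f}")

while True:
    pygame.event.pump()          # refresh joystick state
    steer = pad.get_axis(0)      # left stick, left/right, in [-1, 1]
    throttle = -pad.get_axis(1)  # left stick, up/down (inverted so up = forward)
    drive(steer, throttle)
    pygame.time.wait(50)         # ~20 Hz control loop
```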

Now, he’s collecting data to train AI models that’ll enable the platform to recognize streets and obstacles — and eventually become fully autonomous — with the help of the new NVIDIA Jetson Orin Nano Developer Kit.

It’s a project for when he isn’t engrossed in his work as the head of developer relations at Gitpod, a startup that provides cloud development environments for software makers.

About the Maker

Kaiser learned software engineering at a young age and received a scholarship to a prestigious high school specializing in tech. There, he honed his programming skills before taking time in his early adulthood to see and experience the world in completely different ways.

At 18 years old, he packed a bag and lived for a year in a wildlife refuge in Costa Rica, where he worked on a permaculture farm, growing food and collecting rainwater to drink. Relocating to Vermont, Kaiser then spent a year farming with a Zen Buddhist before hiking 1,000 miles of the Appalachian Trail, passing through four states.

Upon leaving the trail, Kaiser launched a travel website, got his first software job at a cosmetics company, and worked in R&D for a lighting company before rekindling his passion for software engineering as a way to provide for his family — including his now four-year-old son.

His Inspiration

Before all of this, skateboarding was Kaiser’s greatest love. “I just wanted to skateboard as a kid,” he said. “I wanted to maximize the amount of time I could spend skateboarding.”

He built his own skate parks growing up, which made him familiar with the mechanics of building a wooden ramp — knowledge that came in handy when building the foundation of his latest Jetson-powered project.

And to inspire others to embark on inventive projects with technology, Kaiser authored Make Art With Python, a step-by-step introduction to programming for creative people.

He was spurred to write the book while talking to high school students at a biohacker bootcamp in New York.

“What the high schoolers said blew my mind — they basically thought that software engineering was for overachievers,” he said. “So I wanted to write a book that would convince younger people that programming is fundamentally a platform for creating worlds, and it can be for anyone, which is a really exciting thing.”

His Favorite Jetson Projects

Kaiser kicked off his self-driving skate park project 18 months ago, intending to start with a ramp about the size of a golf cart. The electrical components needed to steer it were prohibitively expensive, however, and getting such a large platform to break along two axes of rotation was incredibly challenging, he said.

Rescaling the project to the size of a skateboard itself, Kaiser bought a welder and a metal brake, learned how to use both tools for the first time, and built a platform that can raise and lower, as well as accept any kind of ramp.

It’s fully steerable along both axes thanks to the edge capabilities of NVIDIA Jetson. And the developer’s now training the platform’s self-driving features using Robot Operating System repositories available through the NVIDIA Isaac platform for accelerated, AI-powered robotics.

“In the machine learning space, NVIDIA is really the only show in town,” he said. “The Jetson platform is the industry standard for edge AI, and its compatibility with other development platforms and the onboard GPU are huge pluses.”

Kaiser dives deeper into the technical aspects of his skate ramp project on his blog.

The developer’s other favorite projects using the NVIDIA Jetson platform include training an AI model for turning lights off and on using a dab and T-pose, as well as creating an AI-powered camera for bird-watching.

“The acceleration of smaller-scale robotics is becoming more accessible to everyone,” Kaiser said, “which is really exciting because I think robotics is so damn cool.”

Go along for the ride by keeping up with Kaiser’s work, and learn more about the NVIDIA Jetson platform.

Eye in the Sky With AI: UCSB Initiative Aims to Pulverize Space Threats Using NVIDIA RTX

When meteor showers occur every few months, viewers get to watch a dazzling scene of shooting stars and light streaks scattering across the night sky.

Normally, meteors are just small pieces of rock and dust from space that quickly burn up upon entering Earth’s atmosphere. But the story would take a darker turn if a comet or asteroid were a little too large and headed directly toward Earth’s surface with minimal warning time.

Such a scenario is what physics professor Philip Lubin and some of his undergraduates at the University of California, Santa Barbara, are striving to counteract.

The team recently received phase II funding from NASA to explore a new, more practical approach to planetary defense — one that would allow them to detect and mitigate any threats much faster and more efficiently. Their initiative is called PI-Terminal Planetary Defense, with the PI standing for “Pulverize It.”

To help the team train and speed up the AI and machine learning algorithms they’re developing to detect threats that are on a collision course with Earth, NVIDIA, as part of its Applied Research Accelerator Program, has given the group an NVIDIA RTX A6000 graphics card.

Taking AI to the Sky

Every day, approximately 100 tons of small debris rain down on Earth, but most of it quickly disintegrates in the atmosphere, with very little surviving to reach the surface. Larger asteroids, however, like those responsible for the craters visible on the moon’s surface, pose a real danger to life on Earth.

On average, about every 60 years, an asteroid that’s larger than 65 feet in diameter will appear, similar to the one that exploded over Chelyabinsk, Russia, in 2013, with the energy equivalent of about 440,000 tons of TNT, according to NASA.

The PI-Terminal Planetary Defense initiative aims to detect relevant threats sooner, and then use an array of hypervelocity kinetic penetrators to pulverize and disassemble an asteroid or small comet to greatly minimize the threat.

The traditional approach to planetary defense has been to deflect threats, but Pulverize It instead breaks the asteroid or comet into much smaller fragments, which then burn up in Earth’s atmosphere at high altitudes, causing little ground damage. This allows much more rapid mitigation.

Recognizing threats is the first critical step — this is where Lubin and his students tapped into the power of AI.

Many modern surveys collect massive amounts of astrophysical data, but images are gathered faster than they can be processed and analyzed. Lubin’s group is designing a much larger survey specifically for planetary defense that would generate even larger amounts of data requiring rapid processing.

Through machine learning, the group trained a neural network called You Only Look Once (YOLO) Darknet, a near real-time object detection system that processes an image in less than 25 milliseconds. The group pretrained the network on a large dataset of labeled images, allowing the model to extract low-level geometric features like lines, edges and circles, and, in particular, threats such as asteroids and comets.
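The group’s trained network and weights aren’t public, but running a Darknet-format YOLO model on a GPU follows a standard pattern. Here’s a sketch using OpenCV’s DNN module with CUDA acceleration; the config, weight and image file names are placeholders.

```python
import cv2
import numpy as np

# Hypothetical file names; the group's trained network and weights are not public
net = cv2.dnn.readNetFromDarknet("detector.cfg", "detector.weights")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)   # run inference on the GPU
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)

image = cv2.imread("survey_frame.png")
blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (608, 608), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

# Each detection row is [cx, cy, w, h, objectness, class scores...]
h, w = image.shape[:2]
for det in np.vstack(outputs):
    if det[4] > 0.5:
        cx, cy = det[0] * w, det[1] * h
        print(f"candidate object near ({cx:.0f}, {cy:.0f})")
```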

Early results showed that the source extraction through machine learning was up to 10x faster and nearly 3x more accurate than traditional methods.

Lubin and his group accelerated their image analysis process by approximately 100x, with the help of the NVIDIA RTX A6000 GPU, as well as the CUDA parallel computing platform and programming model.

“Initially, our pipeline — which aims for real-time image processing — took 10 seconds for our subtraction step,” said Lubin. “By implementing the NVIDIA RTX A6000, we immediately cut this processing time to 0.15 seconds.”

Combining this new computational power with the expanded 48GB of VRAM enabled the team to implement new CuPy-based algorithms, which greatly reduced their subtraction and identification time, allowing the entire pipeline to run in just six seconds.
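The team’s exact pipeline isn’t published, but the core idea of GPU-accelerated difference imaging is straightforward: move the frames into GPU memory, subtract a reference image and flag pixels that stand out from the noise. Here’s a minimal CuPy sketch of that step; the array sizes and 5-sigma threshold are illustrative assumptions.

```python
import cupy as cp
import numpy as np

def find_transients(science: np.ndarray, reference: np.ndarray, n_sigma: float = 5.0) -> np.ndarray:
    """Return pixel coordinates that brightened significantly between two aligned frames."""
    sci = cp.asarray(science, dtype=cp.float32)   # copy frames into GPU memory
    ref = cp.asarray(reference, dtype=cp.float32)

    diff = sci - ref                              # difference image
    sigma = cp.std(diff)                          # rough noise estimate
    candidates = cp.argwhere(diff > n_sigma * sigma)

    return cp.asnumpy(candidates)                 # bring the (small) result back to the CPU

# Example with small synthetic frames; the real survey images are ~100 megapixels each
ref = np.random.normal(100.0, 5.0, (4_000, 4_000)).astype(np.float32)
sci = ref.copy()
sci[1_234, 2_345] += 500.0                        # inject a fake transient
print(find_transients(sci, ref))
```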

NVIDIA RTX Brings Meteor Memory

One of the group’s biggest technical challenges has been meeting the GPU memory requirement, as well as decreasing the run-time of the training processes. As the project grows, Lubin and his students accumulate increasingly large amounts of data for training. But as the datasets expanded, they needed a GPU that could handle the massive file sizes.

The RTX A6000’s 48GB of memory allows teams to handle the most complex graphics and datasets without compromising performance.

“Each image will be about 100 megapixels, and we’re putting many images inside the memory of the RTX GPU,” said Lubin. “It helps mitigate the bottleneck of getting data in and out.”

The group works on simulations that demonstrate various phases of the project, including the ground effects from shock waves, as well as the optical light pulses from each fragment that burns up in Earth’s atmosphere. These simulations are run locally on custom-developed codes written in multithreaded, multiprocessor C++ and Python.

The image processing pipeline for rapid threat detection runs on custom C++, Python and CUDA codes using multiple Intel Xeon processors and the NVIDIA RTX A6000 GPU.

Other simulations, like one that features the hypervelocity intercept of the threat fragments, are accomplished using the NASA Advanced Supercomputing (NAS) facility at the NASA Ames Research Center. The facility is constantly upgraded and offers over 13 petaflops of computing performance. These visualizations run on the NAS supercomputers equipped with Intel Xeon CPUs and NVIDIA RTX A6000 GPUs.

Check out some of these simulations on the UCSB Group’s Deepspace YouTube channel.

Learn more about the PI-Terminal Planetary Defense project and NVIDIA RTX.

Link-credible: Get in the Game Faster With Steam, Epic Games Store and Ubisoft Account Linking on GeForce NOW

Get into your favorite games faster by linking GeForce NOW to Steam, Epic Games Store and Ubisoft accounts.

And get a peek at more games coming to GeForce NOW later this year by tuning in to Ubisoft Forward on Monday, June 12, when the game publisher will reveal its latest news and announcements.

Plus, two new games are available to stream from the cloud this week, as well as the newest season for Tom Clancy’s The Division 2 from Ubisoft.

Linked In

GeForce NOW makes gaming convenient and easy for members by enabling them to link their accounts from Steam, Epic and, most recently, Ubisoft, directly to the service. Instead of signing into their accounts for each play session, members can be automatically signed in across their devices after linking them up just once.

Account Linking on GeForce NOW
Automatic, supersonic.

Starting today, launching Ubisoft Connect games requires members to link their Ubisoft accounts in the app. Once that’s completed, members can effortlessly play hit Ubisoft games, including Rainbow Six Siege, Far Cry 6 and The Division 2. 

Members also have the benefit of library account syncing, which automatically syncs supported GeForce NOW games from Ubisoft Connect and Steam libraries — helping members find their Ubisoft games instantly.

For an even more streamlined experience, upgrade to an Ultimate or Priority membership to skip the wait ahead of free members and get into gaming faster.

The Mission: More Games

The Division 2 on GeForce NOW
Get caught up on the newest season of The Division 2. 

“Season 1: Broken Wings” is the newest season for Tom Clancy’s The Division 2, kicking off Year Five for the hit game from Ubisoft. It introduces a new game mode — Descent — a rogue-lite for teams of up to four players. Begin each match without any gear, perks or specializations and unlock them through game progression to work up through the ranks. The rest of the year will bring more seasons, each with their own manhunts, events and leagues. Stream “Broken Wings” on GeForce NOW today.

And take a look at the two new games available to stream this week:

  • Amnesia: The Bunker (New release on Steam)
  • Harmony: The Fall of Reverie (New release on Steam, June 8)

Before the weekend arrives, let’s take things back with our question of the week. Let us know your answer on Twitter or in the comments below.

Taking AI to School: A Conversation With MIT’s Anant Agarwal

In the latest episode of NVIDIA’s AI Podcast, Anant Agarwal, founder of edX and chief platform officer at 2U, shared his vision for the future of online education and how AI is revolutionizing the learning experience.

Agarwal, a strong advocate for massive open online courses, or MOOCs, discussed the importance of accessibility and quality in education. The MIT professor and renowned edtech pioneer also highlighted the implementation of AI-powered features in the edX platform, including the ChatGPT plug-in and edX Xpert, an AI-powered learning assistant.

You Might Also Like

Jules Anh Tuan Nguyen Explains How AI Lets Amputee Control Prosthetic Hand, Video Games

A postdoctoral researcher at the University of Minnesota discusses his efforts to allow amputees to control their prosthetic limb — right down to the finger motions — with their minds.

Overjet’s Dr. Wardah Inam on Bringing AI to Dentistry

Overjet, a member of NVIDIA Inception, is moving fast to bring AI to dentists’ offices. Dr. Wardah Inam, CEO of the company, discusses using AI to improve patient care.

Immunai CTO and Co-Founder Luis Voloch on Using Deep Learning to Develop New Drugs

Luis Voloch, co-founder and chief technology officer of Immunai, talks about tackling the challenges of the immune system with a machine learning and data science mindset.

Subscribe to the AI Podcast: Now Available on Amazon Music

The AI Podcast is now available through Amazon Music.

In addition, get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better. Have a few minutes to spare? Fill out this listener survey.

What Is Photogrammetry?

Thanks to “street views,” modern mapping tools can be used to scope out a restaurant before deciding to go there, better navigate directions by viewing landmarks in the area or simulate the experience of being on the road.

The technique for creating these 3D views is called photogrammetry — the process of capturing images and stitching them together to create a digital model of the physical world.

It’s almost like a jigsaw puzzle, where pieces are collected and then put together to create the bigger picture. In photogrammetry, each puzzle piece is an image. And the more images that are captured and collected, the more realistic and detailed the 3D model will be.

How Photogrammetry Works

Photogrammetry techniques are used across many industries, including architecture and archaeology. An early example dates to 1849, when French officer Aimé Laussedat used terrestrial photographs to create his first perspective architectural survey at the Hôtel des Invalides in Paris.

By capturing as many photos of an area or environment as possible, teams can build digital models of a site that they can view and analyze.

Unlike 3D scanning, which uses structured laser light to measure the locations of points in a scene, photogrammetry uses actual images to capture an object and turn it into a 3D model. This means good photogrammetry requires a good dataset. It’s also important to take photos in the right pattern, so that every area of a site, monument or artifact is covered.

Types of Photogrammetry Methods

Those looking to stitch together a scene today take multiple pictures of a subject from varying angles, and then run them through a specialized application, which allows them to combine and extract the overlapping data to create a 3D model.
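As a rough illustration of what happens inside such an application, the sketch below uses OpenCV to match features between two overlapping photos and recover the relative camera pose, which is the first step before triangulating 3D points. The file names and camera intrinsics are placeholder assumptions; real pipelines repeat this across hundreds of calibrated images.

```python
import cv2
import numpy as np

# Two overlapping photos of the same subject (placeholder file names)
img1 = cv2.imread("view_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_02.jpg", cv2.IMREAD_GRAYSCALE)

# Detect and describe keypoints with ORB
orb = cv2.ORB_create(nfeatures=5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors and keep the strongest correspondences
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Assumed pinhole intrinsics; a real pipeline uses calibrated values or EXIF data
h, w = img1.shape
K = np.array([[1.2 * w, 0, w / 2],
              [0, 1.2 * w, h / 2],
              [0, 0, 1]], dtype=np.float64)

# Estimate the essential matrix and recover the relative camera rotation and translation
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
print("relative rotation:\n", R, "\ntranslation direction:\n", t.ravel())
```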

Image courtesy of 3ds-scan.de.

There are two types of photogrammetry: aerial and terrestrial.

Aerial photogrammetry stations the camera in the air to take photos from above. This is generally used on larger sites or in areas that are difficult to access. Aerial photogrammetry is one of the most widely used methods for creating geographic databases in forestry and natural resource management.

Terrestrial photogrammetry, aka close-range photogrammetry, is more object-focused and usually relies on images taken by a camera that’s handheld or on a tripod. It enables speedy onsite data collection and more detailed image captures.

Accelerating Photogrammetry Workflows With GPUs

For the most accurate photogrammetry results, teams need a massive, high-fidelity dataset. More photos will result in greater accuracy and precision. However, large datasets can take longer to process, and teams need more computational power to handle the files.

The latest advancements in GPUs help teams address this. Using advanced GPUs like NVIDIA RTX cards allows users to speed up processing and maintain higher-fidelity models, all while inputting larger datasets.

For example, construction teams often rely on photogrammetry techniques to show progress on construction sites. Some companies capture images of a site to create a virtual walkthrough. But an underpowered system can result in a choppy visual experience, which detracts from a working session with clients or project teams.

With the large memory of RTX professional GPUs, architects, engineers and designers can easily manage massive datasets to create and handle photogrammetry models faster.

Archaeologist Daria Dabal uses NVIDIA RTX to expand her skills in photogrammetry, creating and rendering high-quality models of artifacts and sites.

Photogrammetry uses GPU power to assist in vectorization of the photo, which accelerates stitching thousands of images together. And with the real-time rendering and AI capabilities of RTX professional GPUs, teams can accelerate 3D workflows, create photorealistic renderings and keep 3D models up to date.

History and Future of Photogrammetry

The idea of photogrammetry dates to the late 1400s, nearly four centuries before the invention of photography. Leonardo da Vinci developed the principles of perspective and projective geometry, which are foundational pillars of photogrammetry.

Geometric perspective is a method that enables illustrating a 3D object in a 2D field by creating points that showcase depth. On top of this foundation, aspects such as geometry, shading and lighting are the building blocks of realistic renderings.

Photogrammetry advancements now allow users to achieve new levels of immersiveness in 3D visualizations. The technique has also paved the way for other groundbreaking tools like reality-capture technology, which collects data on real-world conditions to give users reliable, accurate information about physical objects and environments.

NVIDIA Research is also developing AI techniques that rapidly generate 3D scenes from a small set of images.

Instant NeRF and Neuralangelo, for example, use neural networks to render complete 3D scenes from just a few dozen still photos or 2D video clips. Instant NeRF could be a powerful tool to help preserve and share cultural artifacts through online libraries, museums, virtual-reality experiences and heritage-conservation projects. Many artists are already creating beautiful scenes from different perspectives with Instant NeRF.


Learn More About Photogrammetry

Objects, locations and even industrial digital twins can be rendered volumetrically — in real time — to be shared and preserved, thanks to advances in photogrammetric technology. Photogrammetry applications are expanding across industries and becoming increasingly accessible.

Museums can provide tours of items or sites they otherwise wouldn’t have had room to display. Buyers can use augmented-reality experiences to see how a product might fit in a space before purchasing it. And sports fans can choose seats with the best view.

Learn more about NVIDIA RTX professional GPUs and photogrammetry by joining an upcoming NVIDIA webinar, Getting Started With Photogrammetry for AECO Reality Capture, on Thursday, June 22, at 10 a.m. PT.
