Crossing Continents: XPENG G9 SUV and P7 Sedan Set Course for Scandinavia, the Netherlands

Electric automaker XPENG’s flagship G9 SUV and P7 sports sedan are now available for order in Sweden, Denmark, Norway and the Netherlands — an expansion revealed last week at the eCar Expo in Stockholm.

The intelligent electric vehicles are built on the high-performance NVIDIA DRIVE Orin centralized compute architecture and deliver AI capabilities that are continuously upgradable through over-the-air software updates.

“This announcement represents a significant milestone as we build our presence in Europe,” said Brian Gu, vice chair and president of XPENG. “We believe both vehicles deliver a new level of sophistication and a people-first mobility experience, and will be the electric vehicles of choice for many European customers.”

Safety Never Takes a Back Seat 

The XPENG G9 and P7 come equipped with XPENG’s proprietary XPILOT Advanced Driver Assistance System, which offers safety, driving and parking support through a variety of smart functions.

The system is supported by 29 sensors, including high-definition radars, ultrasonic sensors, and surround-view and high-perception cameras, enabling the vehicles to safely tackle diverse driving scenarios.

The EVs are engineered to meet the European New Car Assessment Programme’s five-star safety standards, along with the European Union’s stringent whole vehicle type approval certification.

Leading Charge for the Long Haul

The rear-wheel-drive (RWD), long-range version of the XPENG G9 can travel up to 354 miles on a single charge and features a new powertrain system for ultrafast charging, going from 10% to 80% in just 20 minutes. The P7 RWD, long-range model also has optimized charging power to reach 80% in 29 minutes, while offering up to 358 miles on a single charge.

To ensure an easy, fast charging experience, XPENG customers can access more than 400,000 charging stations in Europe through the automaker’s collaborations with major third-party charging operators and mobility service providers.

The XPENG G9 features faster charging and longer range, up to 354 miles on a single charge.

Beauty Backed by Brains and Brawn

With the high compute power of DRIVE Orin, the advanced driving systems of the XPENG G9 and P7 boast superior performance, while the vehicles sport sleek, elegant designs, quality craftsmanship and comfort to meet the most discerning of tastes.

The upgraded in-car Xmart operating system (OS) features a new 3D user interface that offers support in English and other European languages, depending on the market. The OS comes with an improved voice assistant that can distinguish voice commands from four zones in the cabin. It also features wide infotainment screens and a library of in-car apps to assist and entertain both the driver and passengers.

The G9 and P7 are available in all-wheel drive (AWD) or RWD configurations. XPENG reports that the G9’s AWD version delivers up to 551 horsepower and can accelerate from 0 to 100 kilometers per hour in 3.9 seconds, while the upgraded P7 AWD model can do the same in 4.1 seconds.

The XPENG P7 features an immersive cabin experience.

Deliveries of the P7 will begin in June, while G9 deliveries are expected to start in September. To support demand in key European markets, XPENG plans to open delivery and service centers in Lørenskog, Norway, this month — as well as in Badhoevedorp, the Netherlands; Stäket, Sweden; and Hillerød, Denmark in Q2 2023.

The EV maker expects to open additional authorized service locations in other European countries by the end of the year.

Featured image: Next-gen XPENG P7 sports sedan, powered by NVIDIA DRIVE Orin.

3D Artist Brings Ride and Joy to Automotive Designs With Real-Time Renders Using NVIDIA RTX

Designing automotive visualizations can be incredibly time consuming. To make the renders look as realistic as possible, artists need to consider material textures, paints, realistic lighting and reflections, and more.

For 3D artist David Baylis, it’s important to include these details and still create high-resolution renders in a short amount of time. That’s why he uses the NVIDIA RTX A6000 GPU, which allows him to use features like real-time ray tracing so he can quickly get the highest-fidelity image.

The RTX A6000 also enables Baylis to handle massive amounts of data with its 48GB of VRAM. In computer graphics, the higher the resolution of the image, the more memory is used. With the RTX A6000, Baylis can work with more data without worrying about memory limits slowing him down.

Bringing Realistic Details to Life With RTX

To create his automotive visualizations, Baylis starts with 3D modeling in Autodesk 3ds Max software. He’ll set up the scene and work on the car model before importing it to Unreal Engine, where he works on lighting and shading for the final render.

In Unreal Engine, Baylis can experiment with details such as different car paints to see what works best on the 3D model. Seeing all the changes in real time enables Baylis to iterate and experiment with design choices, so he can quickly achieve the look and feel he’s aiming for.

In one of his latest projects, Baylis created a scene with an astounding polycount of more than 50 million triangles. Using the RTX A6000, he could easily move around the scene to see the car from different angles. Even in path-traced mode, the A6000 allows Baylis to maintain high frame rates while switching from one angle to the next.

Rendering at a higher resolution is important to create photorealistic visuals. In the example below, Baylis shows a car model rendered at 4K resolution. But when zoomed in, the graphics start to appear blurry.

When the car is rendered at 12K resolution, the details on the car become sharper. By rendering at higher resolutions, the artist can include extra details to make the car look even more realistic. With the RTX A6000, Baylis said the 12K render took under 10 minutes to complete.

It’s not just the real-time ray tracing and path tracing that help Baylis enhance his designs. There’s another component he said he never thought would make an impact on creative workflows — GPU memory.

The RTX A6000 GPU is equipped with 48GB of VRAM, which allows Baylis to load incredibly high-resolution textures and high-polygon assets. The VRAM is especially helpful for automotive renders because the datasets behind them can be massive.

The large memory of the RTX A6000 allows him to easily manage the data.

“If we throw more polygons into the scene, or if we include more scanned assets, it tends to use a lot of VRAM, but the RTX A6000 can handle all of it,” explained Baylis. “It’s great not having to think about optimizing all those assets in the scene. Instead, we can just scan the data in, even if the assets are 8K, 16K or even 24K resolution.”

When Baylis rendered one still frame at 8K resolution, he saw it only took up 24GB of VRAM. So he pushed the resolution higher to 12K, using almost 35GB of VRAM — with plenty of headroom to spare.
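
Those two data points roughly fit a simple mental model: render buffers scale with pixel count, while scene assets stay fixed. The split below is assumed purely for illustration (Baylis hasn't published a breakdown), but the arithmetic shows why the jump from 8K to 12K costs less than the 2.25x increase in pixel count might suggest.

```python
# Back-of-the-envelope VRAM estimate for the 8K -> 12K jump.
# The 16GB "fixed" scene share is an assumption for illustration,
# not a published figure.
p_8k = 7680 * 4320     # ~33.2 megapixels
p_12k = 11520 * 6480   # ~74.6 megapixels, exactly 2.25x the pixels

scene_gb = 16.0               # assumed fixed cost: textures, geometry
res_gb_8k = 24.0 - scene_gb   # resolution-dependent share at 8K

# Scale only the resolution-dependent share by the pixel ratio.
est_12k_gb = scene_gb + res_gb_8k * (p_12k / p_8k)
print(f"12K estimate: {est_12k_gb:.0f} GB")  # ~34GB, near the ~35GB observed
```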

“This is an important feature to highlight, because when people look at new GPUs, they immediately look at benchmarks and how fast it can render things,” said Baylis. “And it’s good if you can render graphics a minute or two faster, but if you really want to take projects to the finish line, you need more VRAM.”

Using NVLink, Baylis can bridge two NVIDIA RTX A6000 GPUs to scale memory and performance. With one GPU, it takes about a minute to render a path-traced image of the car. Dual RTX A6000 GPUs connected with NVLink cut that render time almost in half. NVLink also combines GPU memory, providing 96GB of VRAM in total. This makes Baylis’ animation workflows much faster and easier to manage.

Check out more of Baylis’ work in the video below, and learn more about NVIDIA RTX. And join us at NVIDIA GTC, which takes place March 20-23, to learn more about the latest technologies shaping the future of design and visualization.

Gather Your Party: GFN Thursday Brings ‘Baldur’s Gate 3’ to the Cloud

Venture to the Forgotten Realms this GFN Thursday in Baldur’s Gate 3, streaming on GeForce NOW.

Celebrations for the cloud gaming service’s third anniversary continue with a Dying Light 2 reward that’s to die for. It’s the cherry on top of three new titles joining the GeForce NOW library this week.

Roll for Initiative

Mysterious abilities are awakening inside you. Embrace corruption or fight against darkness itself in Baldur’s Gate 3 (Steam), a next-generation role-playing game set in the world of Dungeons & Dragons.

Choose from a wide selection of D&D races and classes, or play as an origin character with a handcrafted background. Adventure, loot, battle and romance as you journey through the Forgotten Realms and beyond, streaming from the cloud even on underpowered PCs, Macs and mobile devices. Play alone and select companions carefully, or as a party of up to four in multiplayer.

Level up to the GeForce NOW Ultimate membership to experience the power of an RTX 4080 in the cloud and all of its benefits, including up to 4K 120 frames per second gameplay on PC and Mac, and ultrawide resolution support for a truly immersive experience.

Dying 2 Celebrate This Anniversary

To celebrate the third anniversary of GeForce NOW, members can now check their accounts to make sure they received the gift of free Dying Light 2 rewards.

Dying Light 2 GeForce NOW Anniversary Reward
You’re all set to survive the post-apocalyptic wasteland with this loadout.

Claim a new in-game outfit dubbed “Post-Apo,” complete with a Rough Duster, Bleak Pants, Well-Worn Boots, Tattered Leather Gauntlets, Dystopian Mask and Spiked Bracers to scavenge around and parkour in. Members who upgrade to Ultimate and Priority memberships can claim extra loot with this haul, including the Patchy Paraglider and Scrap Slicer weapon.

Visit the GeForce NOW Rewards portal to start receiving special offers and in-game goodies.

Welcome to the Weekend

Recipe for Disaster on GeForce NOW
Uh… maybe we should order takeout.

Buckle up for three more games supported in the GeForce NOW library this week.

  • Recipe for Disaster (Free on Epic Games, Feb. 9-16)
  • Baldur’s Gate 3 (Steam)
  • Inside the Backrooms (Steam)

Members continue to celebrate #3YearsOfGFN on our social channels, sharing their favorite cloud gaming devices.

Follow #3YearsOfGFN on Twitter and Facebook all month long and check out this week’s question.

New NVIDIA Studio Laptops Powered by GeForce RTX 4090, 4080 Laptop GPUs Unleash Creativity

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

The first NVIDIA Studio laptops powered by GeForce RTX 40 Series Laptop GPUs are now available, starting with systems from MSI and Razer — with many more to come.

Featuring GeForce RTX 4090 and 4080 Laptop GPUs, the new Studio laptops use the NVIDIA Ada Lovelace architecture and fifth-generation Max-Q technologies for maximum performance and efficiency. They’re fueled by powerful NVIDIA RTX technology like DLSS 3, which routinely increases frame rates by 2x or more.

Backed by the NVIDIA Studio platform, these laptops give creators exclusive access to tools and apps — including NVIDIA Omniverse, Canvas and Broadcast — and deliver breathtaking visuals with full ray tracing and time-saving AI features.

They come preinstalled with regularly updated NVIDIA Studio Drivers. This month’s driver is available for download starting today.

And when creating turns to gaming, the laptops enable playing at previously impossible levels of detail and speed.

Plus, In the NVIDIA Studio this week highlights the making of The Artists’ Metaverse, a video showcasing the journey of 3D collaboration between seven creators, across several time zones, using multiple creative apps simultaneously — all powered by NVIDIA Omniverse.

The Future of Content Creation, Anywhere

NVIDIA Studio laptops, powered by new GeForce RTX 40 Series Laptop GPUs, deliver the largest-ever generational leap in portable performance and are the world’s fastest laptops for creating and gaming.

These creative powerhouses run up to 3x more efficiently than the previous generation, enabling users to power through creative workloads in a fraction of the time, all using thin, light laptops — with 14-inch designs coming soon for the first time.

MSI’s Stealth 17 Studio comes with up to a GeForce RTX 4090 Laptop GPU.

MSI’s Stealth 17 Studio comes with up to a GeForce RTX 4090 Laptop GPU and an optional 17-inch, Mini-LED, 4K, 144Hz, DisplayHDR 1000 display with 1,000 nits of brightness — perfect for creators of all types. It’s available in various configurations at Amazon, Best Buy, B&H and Newegg.

New Razer Blade Studio Laptops come preinstalled with NVIDIA Broadcast.

Razer is upgrading its Blade laptops with up to a GeForce RTX 4090 Laptop GPU. Available with a 16- or 18-inch HDR-capable, dual-mode, Mini-LED display, they feature a Creator mode that enables sharp, ultra-high-definition+ native resolution at 120Hz. They’re available at Razer, Amazon, Best Buy, B&H and Newegg.

The MSI Stealth 17 Studio and Razer Blade 16 and 18 come preinstalled with NVIDIA Broadcast. The app’s recent update to version 1.4 added an Eye Contact feature, ideal for content creators who want to record themselves while reading notes, without having to stare directly at the camera. The feature also lets video conference presenters appear as if they’re looking at their audience, improving engagement.

Designed for gamers, new units from ASUS, GIGABYTE and Lenovo are also available today and deliver great performance in creator applications with access to NVIDIA Studio benefits.

Groundbreaking Performance

The new Studio laptops have been put through rigorous testing, and many reviewers are detailing the new levels of performance and AI-powered creativity that GeForce RTX 4090 and 4080 Laptop GPUs make possible. Here’s what some are saying:

“NVIDIA’s GeForce RTX 4090 pushes laptops to blistering new frontiers: Yes, it’s fast, but also much more.” — PC World

“GeForce RTX 4090 Laptops can also find the favor of content creators thanks to NVIDIA Studio as well as AV1 support and the double NVENC encoder.” — HDBLOG.IT

“With its GeForce RTX 4090… and bright, beautiful dual-mode display, the Razer Blade 16 can rip through games with aplomb, while being equally adept at taxing content creation workloads.” — Hot Hardware

“The Nvidia GeForce RTX 4090 mobile GPU is a step up in performance, as we’d expect from the hottest graphics chip.” — PC Magazine

“Another important point – particularly in the laptop domain – is the presence of enhanced AV1 support and dual hardware encoders. That’s really useful for streamers or video editors using a machine like this.” – KitGuru

Pick up the latest Studio systems or configure a custom system today.

Revisiting ‘The Artists’ Metaverse’

Seven talented artists join us In the NVIDIA Studio this week to discuss building The Artists’ Metaverse — a spotlight demo from last month’s CES. The group reflected on how easy it was to collaborate in real time from different parts of the world.

It started in NVIDIA Omniverse, a hub that interconnects 3D workflows, replacing linear pipelines with live-sync creation. The artists connected to the platform via Omniverse Cloud.

“Setting up the Omniverse Cloud collaboration demo was a super easy process,” said award-winning 3D creator Rafi Nizam. “It was cool to see avatars appearing as people popped in, and the user interface makes it really clear when you’re working in a live state.”

Assets were exported into Omniverse with ease, thanks to the Universal Scene Description format.

Filmmaker Jae Solina, aka JSFILMZ, animated characters in Omniverse using Xsens and Unreal Engine.

“Prior to Omniverse, creating animations was such a hassle, let alone getting photorealistic animations,” Solina said. “Instead of having to reformat and upload files individually, everything is done in Omniverse in real time, leading to serious time saved.”

Jeremy Lightcap reflected on the incredible visual quality of the virtual scene, highlighting the seamless movement within the viewport.

The balloon 3D model was sculpted by hand in Gravity Sketch and imported into Omniverse.

“We had three Houdini simulations, a storm cloud stored as a volume database (VDB) file, three different characters with motion capture and a very dense Western town set with about 100 materials,” Lightcap said. “I’m not sure how many other programs could handle that and still give you instant, path-traced lighting results.”

For Ashley Goldstein, an NVIDIA 3D artist and tutorialist, the demo highlighted the versatility of Omniverse. “I could update the scene and save it as a new USD layer, so when someone else opened it up, they had all of my updates immediately,” she said. “Or, if they were working on the scene at the same time, they’d be instantly notified of the updates and could fetch new content.”
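
For readers unfamiliar with USD layering, a minimal sketch of the workflow Goldstein describes, using Pixar's USD Python API, looks something like this; the file paths and prim name are hypothetical.

```python
# Sketch of authoring edits into a separate USD layer, so
# collaborators pick them up when the layers are composed.
# Paths and names here are hypothetical.
from pxr import Usd, Sdf, UsdGeom

edits = Sdf.Layer.CreateNew("ashley_updates.usda")   # the new layer
stage = Usd.Stage.Open("shared_scene.usda")          # the shared scene

# Compose the new layer over the scene and direct edits into it.
stage.GetRootLayer().subLayerPaths.insert(0, edits.identifier)
stage.SetEditTarget(Usd.EditTarget(edits))

# This change lands in ashley_updates.usda, not the shared file.
UsdGeom.Xform.Define(stage, "/World/BalloonTweaks")
edits.Save()
```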

Applying colors and textures to the balloon in Adobe Substance 3D Painter.

Edward McEvenue, aka edstudios, reflected on the immense value Omniverse on RTX hardware provides, displaying fully ray-traced graphics with instant feedback. “3D production is a very iterative process, where you have to make hundreds if not thousands of small decisions along the way before finalizing a scene,” he said. “Using GPU acceleration with RTX path tracing in the viewport makes that process so much easier, as you get near-instant feedback on the changes you’re making, with all of the full-quality lighting, shadows, reflections, materials and post-production effects directly in the working viewport.”

Edits to the 3D model in Blender are reflected in real time with photorealistic detail in Omniverse.

3D artist Shangyu Wang noted Omniverse is his preferred 3D collaborative content-creation platform. “Autodesk’s Unreal Live Link for Maya gave me a ray-traced, photorealistic preview of the scene in real time, no waiting to see the final render result,” he said.

Fellow 3D artist Pekka Varis mentioned Omniverse’s positive trajectory. “New features are coming in faster than I can keep up!” he said. “It can become the main standard of the metaverse.”

Omniverse transcends location, time and apps, creating a space where collaboration, communication and creativity reign supreme.

Download Omniverse today, free for all NVIDIA and GeForce RTX GPU owners — including those with new GeForce RTX 40 Series laptops.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter. Learn more about Omniverse on Instagram, Medium, Twitter and YouTube for additional resources and inspiration. Check out the Omniverse forums, and join our Discord server and Twitch channel to chat with the community.

Vietnam’s VinBrain Deploys Healthcare AI Models to 100+ Hospitals

Doctors rarely make diagnoses based on a single factor — they look at a mix of data types, such as a patient’s symptoms, laboratory and radiology reports, and medical history.

VinBrain, a Vietnam-based health-tech startup, is ensuring that AI diagnostics can take a similarly holistic view across vital signs, blood tests, medical images and more.

“Multimodal data is key to delivering precision care that can improve patient outcomes,” said Steven Truong, CEO of VinBrain. “Our medical imaging models, for instance, can analyze chest X-rays and make automated observations about abnormal findings in a patient’s heart, lungs and bones.”

If a medical-imaging AI model reports that a patient’s scan shows lung consolidation, Truong explained, doctors could combine the X-ray analysis with a large language model that reads health records to learn the patient has a fever — helping clinicians more quickly determine a more specific diagnosis of pneumonia.

Funded by Vingroup — one of Vietnam’s largest public companies — VinBrain is the creator of DrAid, which is the only AI software for automated X-ray diagnostics in Southeast Asia, and among the first AI platforms to be cleared by the FDA to detect features suggestive of collapsed lungs from chest X-rays.

Trained on a dataset of more than 2.5 million images, DrAid is deployed in more than 100 hospitals in Vietnam, Myanmar, New Zealand and the U.S. The software applies AI analysis to medical images for more than 120,000 patients each month. VinBrain is also building a host of other AI applications, including a telehealth product that analyzes lab test results, medical reports and other electronic health records.

The company is part of NVIDIA Inception, a global program designed to offer cutting-edge startups expertise, technology and go-to-market support. The VinBrain team has also collaborated with Microsoft and with academic researchers at Stanford University, Harvard University, the University of Toronto and the University of California, San Diego to develop its core AI technology and submit research publications to top conferences.

Many Models, Easy Deployment

The VinBrain team has developed more than 300 AI models that process speech, text, video and images — including X-ray, CT and MRI data.

“Healthcare is complex, so the pipeline requires hundreds of models for each step, such as preprocessing, segmentation, object detection and post-processing,” Truong said. “We aim to package these models together so everything runs on GPU servers at the hospital — like a refrigerator or household appliance.”

VinBrain recently launched DrAid Appliance, an on-premises, NVIDIA GPU-powered device for automatic screening of medical imaging studies that could improve doctors’ productivity by up to 80%, the team estimates.

The company also offers a hybrid solution, where images are preprocessed at the edge with DrAid Appliance, then sent to NVIDIA GPUs in the cloud for more demanding computational workloads.

Another way to access VinBrain’s DrAid software is through Ferrum Health, an NVIDIA Inception company that has developed a secure platform to help healthcare organizations deploy AI applications across therapeutic areas.

Accelerating AI Training and Inference

VinBrain trains its AI models — which include medical imaging, intelligent video analytics, automatic speech recognition, natural language processing and text-to-speech — using NVIDIA DGX SuperPOD. Adopting DGX SuperPOD enabled VinBrain to achieve near-linear scaling for model training, with models training 100x faster than on CPUs alone, significantly shortening the turnaround time for model development.

The team is using software from NVIDIA AI Enterprise, an end-to-end solution for production AI, which includes the NVIDIA Clara platform, the MONAI open-source framework for medical imaging development and the NVIDIA NeMo conversational AI toolkit for its transcription model.

“To develop good AI models, you can’t just train once and be done,” said Truong. “It’s an evolving process to refine the neural networks.”

VinBrain has set up an early validation pipeline for its AI projects: The company tests its early-stage models across a couple dozen hospitals in Vietnam to collect performance data, gather feedback and fine-tune its neural networks.

In addition to using NVIDIA DGX SuperPOD for AI training, the company has adopted NVIDIA GPUs to improve runtime efficiency and deployment. It uses NVIDIA Triton Inference Server and NVIDIA TensorRT to streamline inference for hundreds of AI models on cloud-based NVIDIA Tensor Core GPUs.
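
As a rough illustration of what serving a model through Triton looks like from the client side, here is a minimal Python sketch. The model name, tensor names and input shape are hypothetical stand-ins, not VinBrain's actual configuration.

```python
# Hedged sketch of a Triton HTTP client call. Everything about the
# model (name, tensor names, shape) is invented for illustration.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

image = np.random.rand(1, 1, 224, 224).astype(np.float32)  # stand-in X-ray
inp = httpclient.InferInput("INPUT__0", list(image.shape), "FP32")
inp.set_data_from_numpy(image)

result = client.infer(model_name="chest_xray_classifier", inputs=[inp])
print(result.as_numpy("OUTPUT__0"))  # per-finding scores
```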

“We shifted to NVIDIA GPUs for inference because of the higher throughput, faster response time and, most importantly, the cost ratio,” Truong said.

After switching from CPUs to NVIDIA Tensor Core GPUs, the team was able to accelerate inference for medical imaging AI by more than 3x, and video streaming by more than 30x.

“In the coming years, we want to become the top company solving the problem of multimodality in healthcare data,” said Truong. “Using AI and edge computing, we aim to improve the quality and accessibility of healthcare, making intelligent insights accessible to patients and doctors across countries.”

Register for NVIDIA GTC, taking place online March 20-23, to learn more about AI in healthcare.

AI Joins Hunt for ET: Study Finds 8 Potential Alien Signals

Artificial intelligence is now a part of the quest to find extraterrestrial life.

Researchers have developed an AI system that outperforms traditional methods in the search for alien signals. And early results were intriguing enough to send scientists back to their radio telescopes for a second look.

The study, published last week in Nature Astronomy, highlights the crucial role that AI techniques will play in the ongoing search for extraterrestrial intelligence.

The team behind the paper trained an AI to recognize signals that natural astrophysical processes couldn’t produce. They then fed it more than 150 terabytes of data collected by the Green Bank Telescope, one of the world’s largest radio telescopes, located in West Virginia.

The AI flagged more than 20,000 signals of interest, with eight showing the tell-tale characteristics of what scientists call “technosignatures”: radio signals that could tip researchers off to the existence of another civilization.

In the face of a growing deluge of data from radio telescopes, it’s critical to have a fast and effective means of sorting through it all.

That’s where the AI system shines.

The system was created by Peter Ma, an undergraduate student at the University of Toronto and lead author of the paper, which was co-authored by a constellation of experts affiliated with the University of Toronto, UC Berkeley and Breakthrough Listen, an international effort launched in 2015 to search for signs of alien civilizations.

Ma, who taught himself how to code, first became interested in computer science in high school. He started working on a project where he aimed to use open-source data and tackle big data problems with unanswered questions, particularly in the area of machine learning.

“I wanted a big science problem with open source data and big, unanswered questions,” Ma says. “And finding aliens is big.”

Despite initially facing confusion and disbelief from his teachers, Ma kept working on the project throughout high school and into his first year of college, when he found support from researchers at the University of Toronto, UC Berkeley and Breakthrough Listen for his effort to identify signals from extraterrestrial civilizations.

The paper describes a two-step AI method to classify signals as either radio interference or a potential technosignature.

The first step uses an autoencoder to identify salient features in the data. This system, built using the TensorFlow API, was accelerated by four NVIDIA TITAN X GPUs at UC Berkeley.

The second step feeds those features to a random forest classifier, which decides whether a signal is noteworthy or just interference.
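
As a rough sketch of that two-step design, not the paper's actual architecture, the pipeline can be outlined in a few lines of TensorFlow and scikit-learn; every shape and hyperparameter below is illustrative.

```python
# Illustrative two-step pipeline: an autoencoder learns compact
# features from spectrogram snippets, then a random forest labels
# them as interference vs. candidate. Shapes are made up.
import numpy as np
import tensorflow as tf
from sklearn.ensemble import RandomForestClassifier

x = np.random.rand(256, 16, 128).astype("float32")  # toy snippets
y = np.random.randint(0, 2, 256)                    # toy labels

encoder = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(16, 128)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(8),                       # bottleneck features
])
decoder = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(16 * 128),
    tf.keras.layers.Reshape((16, 128)),
])
autoencoder = tf.keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x, x, epochs=5, verbose=0)          # step 1: learn features

features = encoder.predict(x, verbose=0)
clf = RandomForestClassifier(n_estimators=100)      # step 2: classify
clf.fit(features, y)
```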

The AI system is particularly adept at identifying narrowband signals with a non-zero drift rate. These signals are much more focused and specific than natural phenomena and suggest that they may be coming from a distant source.

Additionally, the signals only appear in observations of some regions of the sky, further evidence of a celestial origin.

To train the AI system, Ma inserted simulated signals into actual data, allowing the autoencoder to learn what to look for. Then the researchers fed the AI more than 150 terabytes of data from 480 observing hours at the Green Bank Telescope.
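
A toy version of that injection step, with every parameter invented for illustration, might look like the following: a synthetic narrowband tone with a non-zero drift rate is added to noise, so the labeled example shows exactly the signature described above.

```python
# Toy signal injection: a narrowband tone drifting across frequency
# channels over time, added on top of noise. All values are made up.
import numpy as np

n_time, n_freq = 16, 256
background = np.random.rayleigh(scale=1.0, size=(n_time, n_freq))

start_bin = 60   # starting frequency channel
drift = 2.5      # non-zero drift rate: channels per timestep
strength = 8.0   # injected power above the noise floor

injected = background.copy()
for t in range(n_time):
    f = int(round(start_bin + drift * t))  # tone slides across channels
    if 0 <= f < n_freq:
        injected[t, f] += strength

# 'injected' is a labeled positive example; 'background' without the
# tone serves as a matching negative example.
```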

The AI identified 20,515 signals of interest, which the researchers had to inspect manually. Of those, eight had the characteristics of technosignatures and couldn’t be attributed to radio interference.

The researchers then returned to the telescope to look at systems from which all eight signals originated but couldn’t re-detect them.

“Eight signals looked very suspicious, but after we took another look at the targets with our telescopes, we didn’t see them again,” Ma says. “It’s been almost five to six years since we took the data, but we still haven’t seen the signal again. Make of that what you will.”

To be sure, because they don’t have real signals from an extraterrestrial civilization, the researchers had to rely on simulated signals to train their models. They note that this could lead to the AI system learning artifacts of the simulations rather than the characteristics of genuine signals.

Still, Cherry Ng, one of the paper’s co-authors, points out the team has a good idea of what to look for.

“A classic example of human-generated technology from space that we have detected is the Voyager,” said Ng, who studies fast radio bursts and pulsars, and is currently affiliated with the French National Centre for Scientific Research, known as CNRS.

“Peter’s machine learning algorithm is able to generate these signals that the aliens may or may not have sent,” she said.

And while aliens haven’t been found yet, the study shows the potential of AI in SETI research and the importance of analyzing vast quantities of data.

“We’re hoping to extend this search capacity and algorithm to other kinds of telescope setups,” Ma said, connecting the efforts to advancements made in a broad array of fields thanks to AI.

There will be plenty of opportunities to see what AI can do.

Despite efforts dating back to the ‘60s, only a tiny fraction of stars in the Milky Way have been monitored, Ng says. However, with advances in technology, astronomers are now able to conduct more observations in parallel and maximize their scientific output.

Even the data that has been collected, such as the Green Bank data, has yet to be fully searched, Ng explains.

And with the next-generation radio telescopes, including MeerKAT, the Very Large Array (VLA), Square Kilometre Array, and the next-generation VLA (ngVLA) gathering vast amounts of data in the search for extraterrestrial intelligence, implementing AI will become increasingly important to overcome the challenges posed by the sheer volume of data.

So will we find anything?

“I’m skeptical about the idea that we are alone in the universe,” Ma said, pointing to breakthroughs over the past decade showing our planet is not as unique as we once thought it was. “Whether we will find anything is up to science and luck to verify, but I believe it is very naive to believe we are alone.”

Image Credit: NASA, JPL-Caltech, Susan Stolovy (SSC/Caltech) et al.

Three Cheers: GFN Thursday Celebrates Third Anniversary With 25 New Games

Cheers to another year of cloud gaming! GeForce NOW celebrates its third anniversary with a look at how far cloud gaming has come, a community celebration and 25 new games supported in February.

Members can celebrate all month long, starting with a sweet Dying Light 2 reward and support for nine more games this week, including Deliver Us Mars with RTX ON.

It’s Sweet to Be Three

Three years ago, GeForce NOW launched out of a beta period to let anyone sign up to experience PC gaming from the cloud. Since then, members have streamed more than 700 million hours from the cloud, bringing home the victory on devices that could never stand up to the action on their own.

Gamers have experienced the unparalleled cinematic quality of RTX ON, with more than 50 titles taking advantage of real-time ray tracing and NVIDIA DLSS. And with 1,500+ games supported in the GeForce NOW library, the action never has to stop.

GeForce NOW Third Anniversary
Three years of cloud gaming by the numbers.

Members across the globe have the gaming power they need, with GeForce NOW technology in more than 30 global data centers, including in regions powered by GeForce NOW Alliance partners in Japan, Turkey and Latin America.

The performance available to members has expanded in the past three years, too. Members could initially stream at up to 1080p and 60 frames per second with the Priority membership. In 2021, an upgrade to RTX 3080-class performance at up to 4K 60 fps became available.

Now, the new Ultimate membership unlocks unrivaled performance at up to 4K 120 fps streaming, or up to 1080p 240 fps in NVIDIA Reflex-supported games on rigs powered by GeForce RTX 4080 GPUs.

Ultimate members can stream at ultrawide resolutions — a first for cloud gaming. And with the NVIDIA Ada Lovelace architecture in the cloud, members can experience full ray tracing and NVIDIA DLSS 3 in supported games across their devices for a truly cinematic experience.

GeForce NOW Ultimate membership
The Ultimate membership brings RTX 4080 performance to the cloud.

As the cloud gaming service has evolved, so have the devices members can use to keep the gaming going. GeForce NOW runs on PC and macOS from the native app, or on iOS Safari and Android for on-the-go gaming. Members can also choose to stream from Chromebooks and the Chrome browser.

Last year brought touch controls for Fortnite, Genshin Impact and more titles, removing the need to carry a gamepad everywhere. And with support for handheld devices like the Logitech G Cloud and Razer Edge 5G, plus support for the latest smart TVs from LG and Samsung, nearly any screen can become a PC-gaming battlestation.

New Ways to Play GeForce NOW
More ways to play from the cloud.

None of this would be possible without the GeForce NOW community and its more than 25 million members. Their feedback and passion are invaluable, and they believe in the future of gaming powered by the cloud: a future in which everyone’s a PC gamer, even if they’re not on a PC.

Celebrate all month on Twitter and Facebook by sharing the best ways to play from the cloud using #3YearsOfGFN for a chance to be featured in a GeForce NOW highlight reel.

It’s been a packed three years, and we’re just getting started. Cheers to all of the cloud gamers, and here’s to the future of GeForce NOW!

Rewards to Light Up Your Day

The anniversary celebration wouldn’t be complete without giving back to the community. Starting next week, GeForce NOW members can score free Dying Light 2 rewards: a new outfit dubbed “Post-Apo,” complete with a Rough Duster, Bleak Pants, Well-Worn Boots, Tattered Leather Gauntlets, Dystopian Mask and Spiked Bracers to scavenge around and parkour in.

Dying Light 2 Reward on GeForce NOW
Survive in this post-apocalyptic wasteland with a new outfit, paraglider and weapon.

Gamers who upgrade to Ultimate and Priority memberships get additional rewards, including the Patchy Paraglider and Scrap Slicer weapon.

Claim this reward beginning Thursday, Feb. 9. Make sure to visit the GeForce NOW Rewards portal and update your settings to start receiving special offers and in-game goodies. Better hurry — these rewards are available for a limited time on a first-come, first-served basis.

The February Lineup

Deliver Us Mars on GeForce NOW
Blast off!

Take a leap to another planet with Deliver Us Mars from Frontier Foundry. In this atmospheric sci-fi adventure, members take on a mission to recover the lost ARK colony ships on Mars, traversing the hazardous environments of the red planet with out-of-this-world, ray-traced shadows and lighting that make the dangerous terrain feel strikingly realistic.

If space isn’t your jam, check out the nine games available to play this week.

February also brings support for 18 more games:

  • Dark and Darker playtest (Available on Steam, Feb. 6-13)
  • Labyrinth of Galleria: The Moon Society (New release on Steam, Feb. 14)
  • Wanted: Dead (New release on Steam and Epic Games, Feb. 14)
  • Elderand (New release on Steam, Feb. 16)
  • Wild West Dynasty (New release on Steam, Feb. 16)
  • The Settlers: New Allies (New release on Ubisoft Store, Feb. 17)
  • Atomic Heart (New release on Steam, Feb. 20)
  • Chef Life — A Restaurant Simulator (New release on Steam, Feb. 23)
  • Blood Bowl 3 (New release on Steam and Epic Games Store, Feb. 23)
  • Scars Above (New release on Steam, Feb. 28)
  • Heads Will Roll: Reforged (Steam)
  • Above Snakes (Steam)
  • Across the Obelisk (Steam)
  • Captain of Industry (Steam)
  • Cartel Tycoon (Steam and Epic Games Store)
  • Ember Knights (Steam)
  • Inside the Backrooms (Steam)
  • SimRail — The Railway Simulator (Steam)

January’s a Wrap

January came to a close with eight extra games on top of the 19 announced. It’s like finding an extra french fry in the bag.

Two games announced last month, Occupy Mars: The Game (Steam) and Grimstar: Crystals are the New Oil! (Steam), didn’t make it, due to shifts in their release dates.

What will you play first this weekend? Let us know in the comments below or on Twitter and Facebook, and make sure to check out #3YearsOfGFN!

NVIDIA A100 Aces Throughput, Latency Results in Key Inference Benchmark for Financial Services Industry

NVIDIA A100 Tensor Core GPUs running on Supermicro servers have captured leading results for inference in the latest STAC-ML Markets benchmark, a key technology performance gauge for the financial services industry.

The results show NVIDIA demonstrating unrivaled throughput — serving up thousands of inferences per second on the most demanding models — and leading latency results on the latest STAC-ML inference standard.

The results are closely watched by financial institutions, three-quarters of which rely on machine learning, deep learning or high performance computing, according to a recent survey.

NVIDIA A100: Top Latency Results

The STAC-ML inference benchmark is designed to measure the latency of long short-term memory (LSTM) model inference — the time from receiving new input data until the model output is computed. LSTM is a key model approach used to discover patterns in financial time-series data like asset prices.

The benchmark includes three LSTM models of increasing complexity. NVIDIA A100 GPUs, running in a Supermicro Ultra SuperServer, demonstrated low 99th-percentile latencies.

Accelerated Computing for STAC-ML and STAC-A2, STAC-A3 Benchmarks

Considering the A100 performance on STAC-ML for inference — in addition to its record-setting performance in the STAC-A2 benchmark for option price discovery and the STAC-A3 benchmark for model backtesting — provides a glimpse at how NVIDIA AI computing can accelerate a pipeline of modern trading environments.

It also shows A100 GPUs deliver leading performance and workload versatility for financial institutions.

Predictable Performance for Consistent Low Latency

Predictable performance is crucial for low-latency environments in finance, as extreme outliers can cause substantial losses during fast market moves. 

Notably, there were no large outliers in NVIDIA’s latency: the maximum latency was no more than 2.3x the median latency across all LSTMs and model-instance counts, ranging up to 32 concurrent instances. [1]

NVIDIA is the first to submit performance results for what’s known as the Tacana Suite of the benchmark. Tacana is for inference performed on a sliding window, where a new timestep is added and the oldest removed for each inference operation. This is helpful for high-frequency trading, where inference needs to be performed on every market data update.

A second suite, Sumaco, performs inference on an entirely new set of data, which reflects the use case where an event prompts inference based on recent history.
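
To make the distinction concrete, here is a toy sketch of Tacana-style sliding-window inference, along with the median and 99th-percentile latency statistics the benchmark reports. The model, sizes and timing harness are illustrative only; the real benchmark is specified by STAC.

```python
# Toy sliding-window (Tacana-style) LSTM inference with latency
# percentiles. Model size, window length and features are made up.
import time
import numpy as np
import torch

model = torch.nn.LSTM(input_size=32, hidden_size=64, batch_first=True).eval()
window = torch.randn(1, 50, 32)  # 50 timesteps of market features

latencies = []
with torch.no_grad():
    for _ in range(1000):
        new_step = torch.randn(1, 1, 32)
        # Slide the window: drop the oldest timestep, append the newest.
        window = torch.cat([window[:, 1:, :], new_step], dim=1)
        start = time.perf_counter()
        out, _ = model(window)
        latencies.append(time.perf_counter() - start)

# Sumaco-style would instead feed an entirely fresh window each time:
# out, _ = model(torch.randn(1, 50, 32))

lat = np.array(latencies)
print(f"median: {np.median(lat) * 1e6:.0f} us, "
      f"p99: {np.percentile(lat, 99) * 1e6:.0f} us, "
      f"max/median: {lat.max() / np.median(lat):.1f}x")
```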

Leading Throughput in Benchmark Results

NVIDIA also submitted a throughput-optimized configuration on the same hardware for the Sumaco Suite in FP16 precision. [2]

On the least complex LSTM in the benchmark, A100 GPUs on Supermicro servers helped serve up more than 1.7 million inferences per second. [3]

For the most complex LSTM, these systems handled as many as 12,800 inferences per second. [4]

NVIDIA A100: Performance and Versatility 

NVIDIA GPUs offer multiple advantages that lower the total cost of ownership for electronic trading stacks.

For one, NVIDIA AI provides a single platform for training and inference. Whether developing, backtesting or deploying an AI model, NVIDIA AI delivers leading performance — and developers don’t need to learn different programming languages and frameworks for research and trading.

Moreover, the NVIDIA CUDA programming model enables development, optimization and deployment of applications across GPU-accelerated embedded systems, desktop workstations, enterprise data centers, cloud-based platforms and HPC supercomputers.

Efficiencies for Reduced Operating Expenses

The financial services industry stands to benefit from not only data throughput advances but also improved operational efficiencies.

Reduced energy and square footage usage for systems in data centers can make a big difference in operating expenses. That’s especially pressing as IT organizations make the case for budgetary outlays to cover new high-performance systems.

On the most demanding LSTM model, NVIDIA A100 exceeded 17,700 inferences per second per kilowatt while consuming 722 watts, offering leading energy efficiency. [5]

The benchmark results confirm that NVIDIA GPUs are unrivaled in terms of throughput and energy efficiency for workloads like backtesting and simulation.

Learn about NVIDIA delivering smarter, more secure financial services.

[1] SUT ID NVDA221118b, max of STAC-ML.Markets.Inf.T.LSTM_A.2.LAT.v1

[2] SUT ID NVDA221118a

[3] STAC-ML.Markets.Inf.S.LSTM_A.4.TPUT.v1

[4] STAC-ML.Markets.Inf.S.LSTM_C.[1,2,4].TPUT.v1

[5] SUT ID NVDA221118a, STAC-ML.Markets.Inf.S.LSTM_C.[1,2,4].ENERG_EFF.v1

Survey Reveals Financial Industry’s Top 4 AI Priorities for 2023

For several years, NVIDIA has been working with some of the world’s leading financial institutions to develop and execute a wide range of rapidly evolving AI strategies. For the past three years, we’ve asked them to tell us collectively what’s on the top of their minds.

Sometimes the results are just what we thought they’d be, and other times they’re truly surprising. In this year’s survey, conducted in a time of continued macroeconomic uncertainty, the results were a little of both.

From banking and fintech institutions to insurance and asset management firms, the goals remain the same — find ways to more accurately manage risk, enhance efficiencies to reduce operating costs, and improve experiences for clients and customers. By digging deeper, we were able to learn which areas of AI are of most interest, and a bit more besides.

Below are the top four findings we gleaned from our “State of AI in Financial Services: 2023 Trends” survey taken by nearly 500 global financial services professionals.

Hybrid Cloud Is Coming on Strong

Financial services firms, like other enterprises, are looking to optimize spending for AI training and inference — with the knowledge that sensitive data can’t be migrated to the cloud. To do so cost-effectively, they’re moving many of their compute-intensive workloads to the hybrid cloud.

This year’s survey found that nearly half of respondents’ firms are moving to the hybrid cloud to optimize AI performance and reduce costs. Recent announcements from leading cloud service providers and platforms reinforce this shift and make data portability, MLOps management and software standardization across cloud and on-prem instances a strategic imperative for cost and efficiency.

Large Language Models Top the List of AI Use Cases 

The survey results, focused on companies based in the Americas and Europe, with a sample size of over 200, found the top AI use cases to be natural language processing and large language models (26%), recommender systems and next-best action (23%), portfolio optimization (23%) and fraud detection (22%). Emerging workloads for the metaverse, synthetic data generation and virtual worlds were also common.

Banks, trading firms and hedge funds are adopting these technologies to create personalized customer experiences. For example, Deutsche Bank recently announced a multi-year innovation partnership with NVIDIA to embed AI into financial services across use cases, including intelligent avatars, speech AI, fraud detection and risk management, to slash total cost of ownership by up to 80%. The bank plans to use NVIDIA Omniverse to build a 3D virtual avatar to help employees navigate internal systems and respond to HR-related questions.

Banks Seeing More Potential for AI to Grow Revenue

The survey found that AI is having a quantifiable impact on financial institutions. Nearly half of survey takers said that AI will help increase annual revenue for their organization by at least 10%. More than a third noted that AI will also help decrease annual costs by at least 10%.

Financial services professionals highlighted how AI has enhanced business operations — particularly improving customer experience (46%), creating operational efficiencies (35%) and reducing total cost of ownership (20%).

For example, computer vision and natural language processing are helping automate financial document analysis and claims processing, saving companies time, expenses and resources. AI also helps prevent fraud by enhancing anti-money laundering and know-your-customer processes, while recommenders create personalized digital experiences for a firm’s customers or clients.

The Biggest Obstacle: Recruiting and Retaining AI Talent 

But there are challenges to achieving AI goals in the enterprise. Recruiting and retaining AI experts is the single biggest obstacle, a problem reported by 36% of survey takers. There is also inadequate technology to enable AI innovation, according to 28% of respondents.

Insufficient data for model training and accuracy is another pressing issue, noted by 26% of financial services professionals. This could be addressed through the use of generative AI to produce accurate synthetic financial data for training AI models.

Executive Support for AI at New High

Despite the challenges, the future for AI in FSI is getting brighter. Increasing executive buy-in for AI is a new theme in the survey results. Some 64% of those surveyed noted that “my executive leadership team values and believes in AI,” compared with 36% a year ago. In addition, 58% said that “AI is important to my company’s future success,” up from 39% a year ago.

Financial institutions plan to continue building out enterprise AI in the future. This will include scaling up and scaling out AI infrastructure, including hardware, software and services.

Empowering data scientists, quants and developers while minimizing bottlenecks requires a sophisticated, full stack AI platform. Executives have seen the ROI of deploying AI-enabled applications. In 2023, these leaders will focus on scaling AI across the enterprise, hiring more data scientists and investing in accelerated computing technology to support training and deployment of AI applications.

Download the “State of AI in Financial Services: 2023 Trends” report for in-depth results and insights.

Watch on-demand sessions from NVIDIA GTC featuring industry leaders from Capital One, Deutsche Bank, U.S. Bank and Ubiquant. And learn more about delivering smarter, more secure financial services and the AI-powered bank.

Meet the Omnivore: Architectural Researcher Lights Up Omniverse Scenes With ‘SunPath’ Extension

Editor’s note: This post is a part of our Meet the Omnivore series, which features individual creators and developers who use NVIDIA Omniverse to accelerate their 3D workflows and create virtual worlds.

Things are a lot sunnier these days for designers looking to visualize their projects in NVIDIA Omniverse, a platform for creating and operating metaverse applications.

Pingfan Wu

Pingfan Wu, a senior architectural researcher at the Hunan Architectural Design Institute (HNADI) Group in south-central China, developed an Omniverse extension that makes controlling the sun and its effects on scenes within the platform more intuitive and precise.

Wu has won multiple awards for his work with Omniverse extensions — core building blocks that let anyone create and extend functions of Omniverse using the popular Python or C++ programming languages.
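
For the curious, a Kit extension in Python starts from a small, standard skeleton. The sketch below shows the general shape such an extension takes; the class name and bodies are placeholders, not Wu's code.

```python
# Minimal skeleton of an Omniverse Kit extension in Python.
# A real extension such as "SunPath" would build UI and scene
# logic inside these callbacks; this is just the entry point.
import omni.ext


class SunPathLikeExtension(omni.ext.IExt):
    def on_startup(self, ext_id):
        # Called when the extension is enabled: create windows,
        # register manipulators, subscribe to stage events, etc.
        print(f"[{ext_id}] extension startup")

    def on_shutdown(self):
        # Called when the extension is disabled: release resources.
        print("extension shutdown")
```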

The “SunPath” extension lets users easily add, manipulate and update a sun-path diagram within the Omniverse viewport.

This enables designers to visualize how the sun will impact a site or building at different times of day throughout the year, which is critical for the architecture, engineering, construction and operations (AECO) industry.
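
Under the hood, a sun-path tool needs the sun's elevation and azimuth for a given date, time and latitude. The sketch below uses standard textbook approximations purely for illustration; the actual implementation of “SunPath” isn't published.

```python
# Simplified solar-position math of the kind a sun-path diagram
# relies on (Cooper's declination approximation plus standard
# spherical astronomy). Illustrative only.
import math

def sun_position(day_of_year, solar_hour, latitude_deg):
    decl = math.radians(23.45) * math.sin(
        math.radians(360.0 / 365.0 * (284 + day_of_year)))
    hour_angle = math.radians(15.0 * (solar_hour - 12.0))
    lat = math.radians(latitude_deg)

    elevation = math.asin(
        math.sin(lat) * math.sin(decl)
        + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    # Azimuth measured from due south, positive toward the west.
    azimuth = math.atan2(
        math.sin(hour_angle),
        math.cos(hour_angle) * math.sin(lat) - math.tan(decl) * math.cos(lat))
    return math.degrees(elevation), math.degrees(azimuth)

# Noon on the summer solstice (~day 172) at Stockholm's latitude:
print(sun_position(172, 12.0, 59.3))  # elevation ~54 degrees, azimuth ~0
```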

“To achieve digitization in the AECO industry, the first task is to build a bridge for data interoperability between design tools, and the only platform that can fill this role is Omniverse,” Wu said. “Based on the Universal Scene Description framework, Omniverse enables designers to collaborate using different software.”

And the excellent rendering power of Omniverse makes the sunlight look very realistic, Wu added.

Award-Winning Omniverse Extension

The extension shined brightly in NVIDIA’s inaugural #ExtendOmniverse contest last fall, winning the “scene modifier or manipulator tools” category.

The competition invited developers to use the Omniverse Code app to create their own Omniverse extensions for a chance to win an NVIDIA RTX GPU.

Wu decided to join, as he saw the potential for Omniverse’s real-time, collaborative, AI-powered tools to let designers “focus more on ideas and design rather than on rendering and struggling to connect models from different software,” he said.

Many design tools that come with a skylight feature lack realistic shadows, the researcher had noticed. His “SunPath” extension — built in just over a month — solves this problem, as designers can import scenes to Omniverse and create accurate sun studies quickly and easily.

They can even use “SunPath” to perform energy-consumption analyses to make design results more efficient, Wu added.

More Wins With Omniverse

Participating in the #ExtendOmniverse contest inspired Wu to develop further applications of Omniverse for AECO, he said.

He led his team at HNADI, which includes eight members who are experts in both architecture and technology, to create a platform that enables users to customize AECO-specific digital twins. Dubbed HNADI-AECKIT, the platform extends an Omniverse Connector to Rhino, a 3D computer graphics and computer-aided design software.

It won two awards at last year’s AEC Hackathon in China: first prize overall and in the “Best Development of the Omniverse” track.

“The core technical advantage of HNADI-AECKIT is that it opens up a complete linkage between Rhino and Omniverse,” Wu said. “Any output from Rhino can be quickly converted into a high-fidelity, information-visible, interactive, customizable digital-twin scene in Omniverse.”

Learn more about HNADI-AECKIT.

Join In on the Creation

Creators and developers across the world can download NVIDIA Omniverse for free, and enterprise teams can use the platform for their 3D projects.

Discover how to build an Omniverse extension in less than 10 minutes.

For a deeper dive into developing on Omniverse, attend these sessions at GTC, a global conference for the era of AI and the metaverse, running March 20-23.

Find additional documentation and tutorials in the Omniverse Resource Center, which details how developers like Wu can build custom USD-based applications and extensions for the platform.

To discover more free tools, training and a community for developers, join the NVIDIA Developer Program.

Follow NVIDIA Omniverse on Instagram, Medium, Twitter and YouTube for additional resources and inspiration. Check out the Omniverse forums, and join our Discord server and Twitch channel to chat with the community.
