AI Joins Hunt for ET: Study Finds 8 Potential Alien Signals

Artificial intelligence is now a part of the quest to find extraterrestrial life.

Researchers have developed an AI system that outperforms traditional methods in the search for alien signals. And early results were intriguing enough to send scientists back to their radio telescopes for a second look.

The study, published last week in Nature Astronomy, highlights the crucial role that AI techniques will play in the ongoing search for extraterrestrial intelligence.

The team behind the paper trained an AI to recognize signals that natural astrophysical processes couldn’t produce. They then fed it a massive dataset of over 150 terabytes of data collected by the Green Bank Telescope, one of the world’s largest radio telescopes, located in West Virginia.

The AI flagged more than 20,000 signals of interest, eight of which showed the tell-tale characteristics of what scientists call “technosignatures”: radio signals that could tip scientists off to the existence of another civilization.

In the face of a growing deluge of data from radio telescopes, it’s critical to have a fast and effective means of sorting through it all.

That’s where the AI system shines.

The system was created by Peter Ma, an undergraduate student at the University of Toronto and the lead author of the paper. His co-authors include a constellation of experts affiliated with the University of Toronto, UC Berkeley and Breakthrough Listen, an international effort launched in 2015 to search for signs of alien civilizations.

Ma, who taught himself how to code, first became interested in computer science in high school. There, he went looking for a project that combined open-source data with big, unanswered questions he could tackle using machine learning.

“I wanted a big science problem with open source data and big, unanswered questions,” Ma says. “And finding aliens is big.”

Despite initially facing confusion and disbelief from his teachers, Ma kept working on the project throughout high school and into his first year of college, when he reached out to researchers at the University of Toronto, UC Berkeley and Breakthrough Listen, who supported his effort to identify signals from extraterrestrial civilizations.

The paper describes a two-step AI method to classify signals as either radio interference or a potential technosignature.

The first step uses an autoencoder to identify salient features in the data. This system, built using the TensorFlow API, was accelerated by four NVIDIA TITAN X GPUs at UC Berkeley.

The second step feeds those features to a random forest classifier, which decides whether a signal is noteworthy or just interference.
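As a rough illustration of how such a two-step pipeline fits together, here is a minimal, self-contained sketch. It is not the paper’s code: a linear autoencoder stands in for the team’s deep TensorFlow model (the optimal linear autoencoder projects onto the top principal components, so its encoder can be computed directly via SVD), the “spectrograms” are toy data, and the forest settings are arbitrary.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def make_snippet(has_signal):
    """Toy 16x16 'spectrogram': noise, optionally plus a drifting narrowband tone."""
    img = rng.normal(0.0, 1.0, (16, 16))
    if has_signal:
        for t in range(16):                      # tone drifts ~0.5 bins per time step
            img[t, (4 + t // 2) % 16] += 6.0
    return img.ravel()

X = np.array([make_snippet(i % 2 == 1) for i in range(400)])
y = np.arange(400) % 2                           # alternating noise / signal labels

# Step 1: feature extraction with a linear autoencoder. The optimal linear
# autoencoder projects onto the top-k principal components, so its encoder
# comes directly from an SVD instead of gradient-descent training.
k = 8
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
features = (X - mean) @ Vt[:k].T

# Step 2: a random forest decides "interference/noise" vs. "candidate signal".
X_tr, y_tr = features[:300], y[:300]
X_te, y_te = features[300:], y[300:]
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

On this toy data the forest separates drifting tones from pure noise almost perfectly; the real pipeline must contend with far messier radio interference.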

The AI system is particularly adept at identifying narrowband signals with a non-zero drift rate. These signals are much more focused and specific than anything natural processes produce, and their drift suggests they come from a source far from the telescope.

Additionally, the signals only appear in observations of some regions of the sky, further evidence of a celestial origin.

To train the AI system, Ma inserted simulated signals into actual data, allowing the autoencoder to learn what to look for. Then the researchers fed the AI more than 150 terabytes of data from 480 observing hours at the Green Bank Telescope.
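The injection step itself is simple to sketch. The helper below is hypothetical (its name and parameters are mine, not the paper’s): it adds a narrowband tone that drifts across frequency bins to a noise-only spectrogram, producing a labeled positive training example.

```python
import numpy as np

def inject_drifting_tone(spec, f0, drift, amp):
    """Return a copy of spectrogram `spec` (time x frequency) with a
    narrowband tone added, starting at bin `f0` and drifting `drift`
    bins per time step at amplitude `amp`."""
    out = spec.copy()
    n_time, n_freq = spec.shape
    for t in range(n_time):
        f = int(round(f0 + drift * t)) % n_freq  # drifting frequency bin
        out[t, f] += amp
    return out

rng = np.random.default_rng(42)
noise = rng.normal(0.0, 1.0, (64, 256))          # observed noise background
labeled = inject_drifting_tone(noise, f0=100, drift=0.5, amp=10.0)
```

Training on many such injections, with varied start bins, drift rates and amplitudes, teaches a model what a drifting narrowband signal looks like against real backgrounds.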

The AI identified 20,515 signals of interest, which the researchers had to inspect manually. Of those, eight had the characteristics of technosignatures and couldn’t be attributed to radio interference.

The researchers then returned to the telescope to re-observe the systems from which all eight signals originated, but couldn’t detect them again.

“Eight signals looked very suspicious, but after we took another look at the targets with our telescopes, we didn’t see them again,” Ma says. “It’s been almost five to six years since we took the data, but we still haven’t seen the signal again. Make of that what you will.”

To be sure, because they don’t have real signals from an extraterrestrial civilization, the researchers had to rely on simulated signals to train their models. The researchers note that this could lead the AI system to learn artifacts of the simulation rather than the hallmarks of a genuine signal.

Still, Cherry Ng, one of the paper’s co-authors, points out the team has a good idea of what to look for.

“A classic example of human-generated technology from space that we have detected is the Voyager,” said Ng, who studies fast radio bursts and pulsars, and is currently affiliated with the French National Centre for Scientific Research, known as CNRS.

“Peter’s machine learning algorithm is able to generate these signals that the aliens may or may not have sent,” she said.

And while aliens haven’t been found yet, the study shows the potential of AI in SETI research and the importance of analyzing vast quantities of data.

“We’re hoping to extend this search capacity and algorithm to other kinds of telescope setups,” Ma said, connecting the efforts to advancements made in a broad array of fields thanks to AI.

There will be plenty of opportunities to see what AI can do.

Despite efforts dating back to the ‘60s, only a tiny fraction of stars in the Milky Way have been monitored, Ng says. However, with advances in technology, astronomers are now able to conduct more observations in parallel and maximize their scientific output.

Even the data that has been collected, such as the Green Bank data, has yet to be fully searched, Ng explains.

And with next-generation radio telescopes, including MeerKAT, the Very Large Array (VLA), the Square Kilometre Array and the next-generation VLA (ngVLA), gathering vast amounts of data in the search for extraterrestrial intelligence, implementing AI will become increasingly important to overcome the challenges posed by the sheer volume of data.

So will we find anything?

“I’m skeptical about the idea that we are alone in the universe,” Ma said, pointing to breakthroughs over the past decade showing our planet is not as unique as we once thought it was. “Whether we will find anything is up to science and luck to verify, but I believe it is very naive to believe we are alone.”

Image Credit: NASA, JPL-Caltech, Susan Stolovy (SSC/Caltech) et al.

Three Cheers: GFN Thursday Celebrates Third Anniversary With 25 New Games

Cheers to another year of cloud gaming! GeForce NOW celebrates its third anniversary with a look at how far cloud gaming has come, a community celebration and 25 new games supported in February.

Members can celebrate all month long, starting with a sweet Dying Light 2 reward and support for nine more games this week, including Deliver Us Mars with RTX ON.

It’s Sweet to Be Three

Three years ago, GeForce NOW launched out of a beta period to let anyone sign up to experience PC gaming from the cloud. Since then, members have streamed more than 700 million hours from the cloud, bringing home the victory on devices that could never stand up to the action on their own.

Gamers have experienced the unparalleled cinematic quality of RTX ON, with more than 50 titles taking advantage of real-time ray tracing and NVIDIA DLSS. And with 1,500+ games supported in the GeForce NOW library, the action never has to stop.

Three years of cloud gaming by the numbers.

Members across the globe have the gaming power they need, with GeForce NOW technology in more than 30 global data centers, including in regions powered by GeForce NOW Alliance partners in Japan, Turkey and Latin America.

The performance available to members has expanded in the past three years, too. Members could initially play at up to 1080p and 60 frames per second with the Priority membership. In 2021, an upgrade to RTX 3080-class performance at up to 4K 60 fps became available.

Now, the new Ultimate membership unlocks unrivaled performance at up to 4K 120 fps streaming, or up to 1080p 240 fps in NVIDIA Reflex-supported games on rigs powered by GeForce RTX 4080 GPUs.

Ultimate members can stream at ultrawide resolutions — a first for cloud gaming. And with the NVIDIA Ada Lovelace architecture in the cloud, members can experience full ray tracing and NVIDIA DLSS 3 in supported games across their devices for a truly cinematic experience.

The Ultimate membership brings RTX 4080 performance to the cloud.

As the cloud gaming service has evolved, so have the devices members can use to keep the gaming going. GeForce NOW runs on PC and macOS from the native app, or on iOS Safari and Android for on-the-go gaming. Members can also choose to stream from Chromebooks and the Chrome browser.

Last year brought touch controls for Fortnite, Genshin Impact and more titles, removing the need to carry a gamepad everywhere. And with support for handheld devices like the Logitech G Cloud and Razer Edge 5G, plus support for the latest smart TVs from LG and Samsung, nearly any screen can become a PC-gaming battlestation.

More ways to play from the cloud.

None of this would be possible without the GeForce NOW community and its more than 25 million members. Their feedback and passion are invaluable, and they believe in the future of gaming powered by the cloud: a future in which everyone’s a PC gamer, even if they’re not on a PC.

Celebrate all month on Twitter and Facebook by sharing the best ways to play from the cloud using #3YearsOfGFN for a chance to be featured in a GeForce NOW highlight reel.

It’s been a packed three years, and we’re just getting started. Cheers to all of the cloud gamers, and here’s to the future of GeForce NOW!

Rewards to Light Up Your Day

The anniversary celebration wouldn’t be complete without giving back to the community. Starting next week, GeForce NOW members can score free Dying Light 2 rewards: a new outfit dubbed “Post-Apo,” complete with a Rough Duster, Bleak Pants, Well-Worn Boots, Tattered Leather Gauntlets, Dystopian Mask and Spiked Bracers to scavenge around and parkour in.

Survive in this post-apocalyptic wasteland with a new outfit, paraglider and weapon.

Gamers who upgrade to Ultimate and Priority memberships get additional rewards, including the Patchy Paraglider and Scrap Slicer weapon.

Claim this reward beginning Thursday, Feb. 9. Make sure to visit the GeForce NOW Rewards portal and update your settings to start receiving special offers and in-game goodies. Better hurry — these rewards are available for a limited time on a first-come, first-served basis.

The February Lineup

Blast off!

Take a leap to another planet with Deliver Us Mars from Frontier Foundry. In this atmospheric sci-fi adventure, members set out to recover the lost ARK colony ships and traverse the hazardous environments of the red planet, with out-of-this-world, ray-traced shadows and lighting that make the dangerous terrain feel strikingly realistic.

If space isn’t your jam, check out the list of nine games that will be available to play this week:

February also brings support for 18 more games:

  • Dark and Darker playtest (Available on Steam, Feb. 6-13)
  • Labyrinth of Galleria: The Moon Society (New release on Steam, Feb. 14)
  • Wanted: Dead (New release on Steam and Epic Games, Feb. 14)
  • Elderand (New release on Steam, Feb. 16)
  • Wild West Dynasty (New release on Steam, Feb. 16)
  • The Settlers: New Allies (New release on Ubisoft Store, Feb. 17)
  • Atomic Heart (New release on Steam, Feb. 20)
  • Chef Life — A Restaurant Simulator (New release on Steam, Feb. 23)
  • Blood Bowl 3 (New release on Steam and Epic Games Store, Feb. 23)
  • Scars Above (New release on Steam, Feb. 28)
  • Heads Will Roll: Reforged (Steam)
  • Above Snakes (Steam)
  • Across the Obelisk (Steam)
  • Captain of Industry (Steam)
  • Cartel Tycoon (Steam and Epic Games Store)
  • Ember Knights (Steam)
  • Inside the Backrooms (Steam)
  • SimRail — The Railway Simulator (Steam)

January’s a Wrap

January came to a close with eight extra games on top of the 19 announced. It’s like finding an extra french fry in the bag:

Two games announced last month, Occupy Mars: The Game (Steam) and Grimstar: Crystals are the New Oil! (Steam), didn’t make it, due to shifts in their release dates.

What will you play first this weekend? Let us know in the comments below or on Twitter and Facebook, and make sure to check out #3YearsOfGFN!

NVIDIA A100 Aces Throughput, Latency Results in Key Inference Benchmark for Financial Services Industry

NVIDIA A100 Tensor Core GPUs running on Supermicro servers have captured leading results for inference in the latest STAC-ML Markets benchmark, a key technology performance gauge for the financial services industry.

The results show NVIDIA demonstrating unrivaled throughput — serving up thousands of inferences per second on the most demanding models — and top latency on the latest STAC-ML inference standard.

The results are closely watched by financial institutions, three-quarters of which rely on machine learning, deep learning or high performance computing, according to a recent survey.

NVIDIA A100: Top Latency Results

The STAC-ML inference benchmark is designed to measure the latency of long short-term memory (LSTM) model inference — the time from receiving new input data until the model output is computed. LSTMs are a key model type used to analyze financial time-series data like asset prices.

The benchmark includes three LSTM models of increasing complexity. NVIDIA A100 GPUs, running in a Supermicro Ultra SuperServer, demonstrated low 99th-percentile latencies.

Accelerated Computing for STAC-ML and STAC-A2, STAC-A3 Benchmarks

The A100’s performance on STAC-ML for inference — alongside its record-setting performance in the STAC-A2 benchmark for option price discovery and the STAC-A3 benchmark for model backtesting — offers a glimpse of how NVIDIA AI computing can accelerate the full pipeline of a modern trading environment.

It also shows A100 GPUs deliver leading performance and workload versatility for financial institutions.

Predictable Performance for Consistent Low Latency

Predictable performance is crucial for low-latency environments in finance, as extreme outliers can cause substantial losses during fast market moves. 

Notably, there were no large outliers in NVIDIA’s latency: the maximum latency was no more than 2.3x the median latency across all LSTMs and model-instance counts, up to 32 concurrent instances.[1]

NVIDIA is the first to submit performance results for what’s known as the Tacana Suite of the benchmark. Tacana is for inference performed on a sliding window, where a new timestep is added and the oldest removed for each inference operation. This is helpful for high-frequency trading, where inference needs to be performed on every market data update.
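As a sketch of the sliding-window pattern (an illustrative stand-in, not the benchmark implementation): a small NumPy LSTM cell with random weights runs one inference per incoming update, each time dropping the oldest timestep and appending the newest, while median and 99th-percentile latencies are recorded.

```python
import time
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, win = 16, 32, 20

# Random LSTM weights; a real system would load a trained model.
Wx = rng.normal(0, 0.1, (n_in, 4 * n_hid))
Wh = rng.normal(0, 0.1, (n_hid, 4 * n_hid))
b = np.zeros(4 * n_hid)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_output(window):
    """Run the LSTM over one window and return the final hidden state."""
    h = np.zeros(n_hid)
    c = np.zeros(n_hid)
    for x in window:
        i, f, g, o = np.split(x @ Wx + h @ Wh + b, 4)  # gate pre-activations
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    return h

stream = rng.normal(0, 1, (220, n_in))       # e.g. successive market-data updates
latencies = []
for t in range(win, len(stream)):
    window = stream[t - win:t]                # slide: oldest timestep out, newest in
    t0 = time.perf_counter()
    lstm_output(window)
    latencies.append(time.perf_counter() - t0)

median_lat, p99_lat = np.percentile(latencies, [50, 99])
```

The benchmark’s headline numbers are these percentile latencies; here they are wall-clock seconds on the host CPU rather than GPU-accelerated figures.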

A second suite, Sumaco, performs inference on an entirely new set of data, which reflects the use case where an event prompts inference based on recent history.

Leading Throughput in Benchmark Results

NVIDIA also submitted a throughput-optimized configuration on the same hardware for the Sumaco Suite in FP16 precision.[2]

On the least complex LSTM in the benchmark, A100 GPUs on Supermicro servers helped serve up more than 1.7 million inferences per second.[3]

For the most complex LSTM, these systems handled as many as 12,800 inferences per second.[4]

NVIDIA A100: Performance and Versatility 

NVIDIA GPUs offer multiple advantages that lower the total cost of ownership for electronic trading stacks.

For one, NVIDIA AI provides a single platform for training and inference. Whether developing, backtesting or deploying an AI model, NVIDIA AI delivers leading performance — and developers don’t need to learn different programming languages and frameworks for research and trading.

Moreover, the NVIDIA CUDA programming model enables development, optimization and deployment of applications across GPU-accelerated embedded systems, desktop workstations, enterprise data centers, cloud-based platforms and HPC supercomputers.

Efficiencies for Reduced Operating Expenses

The financial services industry stands to benefit from not only data throughput advances but also improved operational efficiencies.

Reduced energy and square footage usage for systems in data centers can make a big difference in operating expenses. That’s especially pressing as IT organizations make the case for budgetary outlays to cover new high-performance systems.

On the most demanding LSTM model, NVIDIA A100 exceeded 17,700 inferences per second per kilowatt while consuming 722 watts, offering leading energy efficiency.[5]

The benchmark results confirm that NVIDIA GPUs are unrivaled in terms of throughput and energy efficiency for workloads like backtesting and simulation.

Learn about NVIDIA delivering smarter, more secure financial services.

[1] SUT ID NVDA221118b, max of STAC-ML.Markets.Inf.T.LSTM_A.2.LAT.v1

[2] SUT ID NVDA221118a

[3] STAC-ML.Markets.Inf.S.LSTM_A.4.TPUT.v1

[4] STAC-ML.Markets.Inf.S.LSTM_C.[1,2,4].TPUT.v1

[5] SUT ID NVDA221118a, STAC-ML.Markets.Inf.S.LSTM_C.[1,2,4].ENERG_EFF.v1

Survey Reveals Financial Industry’s Top 4 AI Priorities for 2023

For several years, NVIDIA has been working with some of the world’s leading financial institutions to develop and execute a wide range of rapidly evolving AI strategies. For the past three years, we’ve asked them to tell us collectively what’s on the top of their minds.

Sometimes the results are just what we thought they’d be, and other times they’re truly surprising. In this year’s survey, conducted at a time of continued macroeconomic uncertainty, the results were a little of both.

From banking and fintech institutions to insurance and asset management firms, the goals remain the same — find ways to more accurately manage risk, enhance efficiencies to reduce operating costs, and improve experiences for clients and customers. By digging in deeper, we were able to learn which areas of AI are of most interest as well as a bit more.

Below are the top four findings we gleaned from our “State of AI in Financial Services: 2023 Trends” survey taken by nearly 500 global financial services professionals.

Hybrid Cloud Is Coming on Strong

Financial services firms, like other enterprises, are looking to optimize spending for AI training and inference — with the knowledge that sensitive data can’t be migrated to the cloud. To do so cost-effectively, they’re moving many of their compute-intensive workloads to the hybrid cloud.

This year’s survey found that nearly half of respondents’ firms are moving to the hybrid cloud to optimize AI performance and reduce costs. Recent announcements from leading cloud service providers and platforms reinforce this shift and make data portability, MLOps management and software standardization across cloud and on-prem instances a strategic imperative for cost and efficiency.

Large Language Models Top the List of AI Use Cases 

The survey results, focused on companies based in the Americas and Europe, with a sample size of over 200, found the top AI use cases to be natural language processing and large language models (26%), recommender systems and next-best action (23%), portfolio optimization (23%) and fraud detection (22%). Emerging workloads for the metaverse, synthetic data generation and virtual worlds were also common.

Banks, trading firms and hedge funds are adopting these technologies to create personalized customer experiences. For example, Deutsche Bank recently announced a multi-year innovation partnership with NVIDIA to embed AI into financial services across use cases, including intelligent avatars, speech AI, fraud detection and risk management, to slash total cost of ownership by up to 80%. The bank plans to use NVIDIA Omniverse to build a 3D virtual avatar to help employees navigate internal systems and respond to HR-related questions.

Banks Seeing More Potential for AI to Grow Revenue

The survey found that AI is having a quantifiable impact on financial institutions. Nearly half of survey takers said that AI will help increase annual revenue for their organization by at least 10%. More than a third noted that AI will also help decrease annual costs by at least 10%.

Financial services professionals highlighted how AI has enhanced business operations — particularly improving customer experience (46%), creating operational efficiencies (35%) and reducing total cost of ownership (20%).

For example, computer vision and natural language processing are helping automate financial document analysis and claims processing, saving companies time, expenses and resources. AI also helps prevent fraud by enhancing anti-money laundering and know-your-customer processes, while recommenders create personalized digital experiences for a firm’s customers or clients.

The Biggest Obstacle: Recruiting and Retaining AI Talent 

But there are challenges to achieving AI goals in the enterprise. Recruiting and retaining AI experts is the single biggest obstacle, a problem reported by 36% of survey takers. Inadequate technology to enable AI innovation is another, cited by 28% of respondents.

Insufficient data sizes for model training and accuracy is another pressing issue noted by 26% of financial services professionals. This could be addressed through the use of generative AI to produce accurate synthetic financial data used to train AI models.

Executive Support for AI at New High

Despite the challenges, the future for AI in FSI is getting brighter. Increasing executive buy-in for AI is a new theme in the survey results. Some 64% of those surveyed noted that “my executive leadership team values and believes in AI,” compared with 36% a year ago. In addition, 58% said that “AI is important to my company’s future success,” up from 39% a year ago.

Financial institutions plan to continue building out enterprise AI in the future. This will include scaling up and scaling out AI infrastructure, including hardware, software and services.

Empowering data scientists, quants and developers while minimizing bottlenecks requires a sophisticated, full stack AI platform. Executives have seen the ROI of deploying AI-enabled applications. In 2023, these leaders will focus on scaling AI across the enterprise, hiring more data scientists and investing in accelerated computing technology to support training and deployment of AI applications.

Download the “State of AI in Financial Services: 2023 Trends” report for in-depth results and insights.

Watch on-demand sessions from NVIDIA GTC featuring industry leaders from Capital One, Deutsche Bank, U.S. Bank and Ubiquant. And learn more about delivering smarter, more secure financial services and the AI-powered bank.

Meet the Omnivore: Architectural Researcher Lights Up Omniverse Scenes With ‘SunPath’ Extension

Editor’s note: This post is a part of our Meet the Omnivore series, which features individual creators and developers who use NVIDIA Omniverse to accelerate their 3D workflows and create virtual worlds.

Things are a lot sunnier these days for designers looking to visualize their projects in NVIDIA Omniverse, a platform for creating and operating metaverse applications.

Pingfan Wu, a senior architectural researcher at the Hunan Architectural Design Institute (HNADI) Group in south-central China, developed an Omniverse extension that makes controlling the sun and its effects on scenes within the platform more intuitive and precise.

Wu has won multiple awards for his work with Omniverse extensions — core building blocks that let anyone create and extend the functionality of Omniverse using the popular Python or C++ programming languages.

The “SunPath” extension lets users easily add, manipulate and update a sun-path diagram within the Omniverse viewport.

This enables designers to visualize how the sun will impact a site or building at different times of day throughout the year, which is critical for the architecture, engineering, construction and operations (AECO) industry.
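The geometry behind a sun-path diagram reduces to the sun’s elevation and azimuth for a given date, time and latitude. Below is a minimal sketch of that calculation — a simplified astronomical approximation, accurate to roughly a degree, that ignores the equation of time and atmospheric refraction; it is not the code of Wu’s extension.

```python
import math

def solar_position(day_of_year, solar_hour, lat_deg):
    """Approximate solar elevation and azimuth in degrees.
    `solar_hour` is local solar time (12 = solar noon)."""
    # Solar declination: the sun's "latitude", varying +/-23.44 deg over the year.
    dec = math.radians(-23.44) * math.cos(2 * math.pi * (day_of_year + 10) / 365)
    hour_angle = math.radians(15.0 * (solar_hour - 12.0))  # 15 deg per hour
    lat = math.radians(lat_deg)

    sin_elev = (math.sin(lat) * math.sin(dec)
                + math.cos(lat) * math.cos(dec) * math.cos(hour_angle))
    elev = math.asin(sin_elev)

    # Azimuth measured clockwise from north; clamp guards against rounding.
    cos_az = (math.sin(dec) - math.sin(lat) * sin_elev) / (
        math.cos(lat) * math.cos(elev))
    az = math.acos(max(-1.0, min(1.0, cos_az)))
    if hour_angle > 0:          # afternoon: sun is west of the meridian
        az = 2 * math.pi - az
    return math.degrees(elev), math.degrees(az)
```

Sweeping solar_hour through the day and day_of_year through the year traces out the full diagram; for example, solar_position(172, 12, 45) places the midsummer noon sun due south at roughly 68 degrees elevation at 45° N.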

“To achieve digitization in the AECO industry, the first task is to build a bridge for data interoperability between design tools, and the only platform that can fill this role is Omniverse,” Wu said. “Based on the Universal Scene Description framework, Omniverse enables designers to collaborate using different software.”

And the excellent rendering power of Omniverse makes the sunlight look very realistic, Wu added.

Award-Winning Omniverse Extension

The extension shined brightly in NVIDIA’s inaugural #ExtendOmniverse contest last fall, winning the “scene modifier or manipulator tools” category.

The competition invited developers to use the Omniverse Code app to create their own Omniverse extensions for a chance to win an NVIDIA RTX GPU.

Wu decided to join, as he saw the potential for Omniverse’s real-time, collaborative, AI-powered tools to let designers “focus more on ideas and design rather than on rendering and struggling to connect models from different software,” he said.

Many design tools that come with a skylight feature lack realistic shadows, the researcher noticed. His “SunPath” extension — built in just over a month — solves this problem: designers can import scenes into Omniverse and create accurate sun studies quickly and easily.

They can even use “SunPath” to perform energy-consumption analyses to make design results more efficient, Wu added.

More Wins With Omniverse

Participating in the #ExtendOmniverse contest inspired Wu to develop further applications of Omniverse for AECO, he said.

He led his team at HNADI, which includes eight members who are experts in both architecture and technology, to create a platform that enables users to customize AECO-specific digital twins. Dubbed HNADI-AECKIT, the platform extends an Omniverse Connector to Rhino, a 3D computer graphics and computer-aided design software.

It won two awards at last year’s AEC Hackathon in China: first prize overall and in the “Best Development of the Omniverse” track.

“The core technical advantage of HNADI-AECKIT is that it opens up a complete linkage between Rhino and Omniverse,” Wu said. “Any output from Rhino can be quickly converted into a high-fidelity, information-visible, interactive, customizable digital-twin scene in Omniverse.”

Learn more about HNADI-AECKIT:

Join In on the Creation

Creators and developers across the world can download NVIDIA Omniverse for free, and enterprise teams can use the platform for their 3D projects.

Discover how to build an Omniverse extension in less than 10 minutes.

For a deeper dive into developing on Omniverse, attend these sessions at GTC, a global conference for the era of AI and the metaverse, running March 20-23.

Find additional documentation and tutorials in the Omniverse Resource Center, which details how developers like Wu can build custom USD-based applications and extensions for the platform.

To discover more free tools, training and a community for developers, join the NVIDIA Developer Program.

Follow NVIDIA Omniverse on Instagram, Medium, Twitter and YouTube for additional resources and inspiration. Check out the Omniverse forums, and join our Discord server and Twitch channel to chat with the community.

Deloitte’s Nitin Mittal on the Secrets of ‘All-In’ AI Success

Artificial intelligence is the new electricity. The fifth industrial revolution. And companies that go all-in on AI are reaping the rewards. So how do you make that happen?

That big question — how? — is explored by Nitin Mittal, principal at Deloitte, one of the world’s largest professional services organizations, and co-author Thomas Davenport in their new book “All in on AI: How Smart Companies Win Big with Artificial Intelligence.” 

On the latest episode of NVIDIA’s AI Podcast, host Noah Kravitz speaks with Mittal, who leads Deloitte’s artificial intelligence growth platform. He describes how companies across a wide variety of industries have used AI to radically transform their organizations and achieve competitive advantage.

The book, from the Harvard Business Review Press, explores the importance of a company-wide commitment to AI and the role of leadership in driving the adoption and implementation of the technology. Mittal emphasizes that companies must have a clear strategy and plan, and invest in the necessary technology and talent to make the most of AI.

You Might Also Like

Art(ificial) Intelligence: Pindar Van Arman Builds Robots That Paint
Pindar Van Arman, an American artist and roboticist, designs painting robots that explore the differences between human and computational creativity. Since his first system in 2005, he has built multiple artificially creative robots. The most famous, Cloud Painter, was awarded first place at Robotart 2018.

Real or Not Real? Attorney Steven Frank Uses Deep Learning to Authenticate Art
Steven Frank is a partner at the law firm Morgan Lewis, specializing in intellectual property and commercial technology law. He’s also half of the husband-wife team that used convolutional neural networks to authenticate artistic masterpieces, including da Vinci’s Salvator Mundi, with AI’s help.

GANTheftAuto: Harrison Kinsley on AI-Generated Gaming Environments
Humans playing games against machines is nothing new, but now computers can develop games for people to play. Programming enthusiast and social media influencer Harrison Kinsley created GANTheftAuto, an AI-based neural network that generates a playable chunk of the classic video game Grand Theft Auto V.

Subscribe to the AI Podcast on Your Favorite Platform

You can now listen to the AI Podcast through Amazon Music, Apple Music, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Cyberpunk 2077 Brings a Taste of the Future with DLSS

Analyst reports. Academic papers. Ph.D. programs. There are a lot of places you can go to get a glimpse of the future. But the best place might just be El Coyote Cojo, a whiskey-soaked dive bar that doesn’t exist in real life.

Fire up Cyberpunk 2077 and you’ll see much more than the watering hole’s colorful clientele. You’ll see refractions and reflections, shadows and smoke, all in the service of creating more than just eye candy — each element works in tandem with the game’s expansive and engaging story.

Patching In: Cyberpunk 2077’s DLSS 3 Upgrade

It’s a tale that gets more mesmerizing with every patch — the updates game developers periodically release to keep their games at the cutting edge. Today’s addition brings NVIDIA DLSS 3, the latest in neural graphics.

DLSS 3 is a package that includes a number of sophisticated technologies. Combining DLSS Super Resolution, all-new DLSS Frame Generation, and NVIDIA Reflex, running on the new hardware capabilities of GeForce RTX 40 Series GPUs, DLSS 3 multiplies performance while maintaining great image quality and responsiveness.

The performance uplift this delivers lets PC gamers experience more of Cyberpunk 2077’s gritty glory. And it sets the stage for the pending Ray Tracing Overdrive Mode, an update that will escalate the game’s ray tracing, a technique long used to create blockbuster films and enhance the game’s already-incredible visuals.

The gaming press — perhaps the most brutal critics of the visual arts — are already raving about DLSS 3.

“I’m deeply in love with DLSS with Frame Generation,” gushes PC Gamer. “DLSS 3 is incredible, and NVIDIA’s tech is undeniably a selling point for the [GeForce RTX] 4080,” asserts PCGamesN. “[I]t’s a phenomenal achievement in graphics performance,” states Digital Foundry.

Twenty-one games now support DLSS 3, including Dying Light 2 Stay Human, Hitman 3, Marvel’s Midnight Suns, Microsoft Flight Simulator, Portal with RTX, The Witcher 3: Wild Hunt and Warhammer 40,000: Darktide. More are coming, including Atomic Heart, ILL SPACE and Warhaven.

Playing with the Future

There are many tales on the increasingly immersive streets of Cyberpunk 2077’s Night City, but the one even non-gamers should pay attention to is the story behind these stories: gaming as a proving ground for the technologies that will shape the future Cyberpunk 2077 is simulating right before our eyes.

This is the best of the best. CD PROJEKT RED is known for supporting its flagship titles like Cyberpunk 2077 and The Witcher 3: Wild Hunt for extended periods of time with a variety of patches that take advantage of modern hardware. It has earned a reputation as a game development studio that embraces emerging technologies.

That makes its games more than a cultural phenomenon. They’re a technology-proving ground, a position held over the past two decades by a string of titles revered by gamers, such as Crysis, Metro and Far Cry.

PC Games Unleash Global Innovation

Building digital worlds such as these is the hard computing problem — the meanest streets in our increasingly digital world — out of which the parallel computing engines that are GPUs emerged.

A decade ago, GPUs sparked the deep-learning revolution that has upended trillion-dollar industries around the world, one that continues with the latest advances in generative AI, such as ChatGPT and DALL-E, which have erupted over the past month into a global cultural sensation.

It’s a case study in the disruptive innovations Harvard Business School Professor Clayton Christensen identified as lurking in unexpected places.

DLSS brings that revolution full circle, using the same deep-learning techniques harnessed for everything from cutting-edge science to self-driving cars to advance the visual quality of games.

Trained on NVIDIA’s supercomputers, DLSS enhances a new generation of games that demand ever more performance. And the use of DLSS 3 is just one example of this benchmark game’s innovations — innovations woven into the texture of the game’s storytelling.

CD PROJEKT RED uses DirectX Ray Tracing, for example, a lighting technique that emulates the way light reflects and refracts in the real world to provide a more believable environment than what’s typically seen using static lighting in more traditional games.

The game uses several ray-tracing techniques to render a massive future city at incredible levels of detail. The current version of the game uses ray-traced shadows, reflections, diffuse illumination and ambient occlusion.

And if you turn on “Psycho mode” in the game’s ray-traced lighting settings, you’ll even see ray-traced global illumination as sunlight bounces realistically around the scene.

Cyberpunk 2077’s Visual Storytelling Packs a Punch 

The result of all these features is a visually stunning experience that complements the world’s story and tone: sprawling cityscapes that use subtle shadows to define depth, districts bathed in neon lights, and windows, mirrors and puddles glistening with accurate reflections.

With realistic shadows and lighting and the added performance of NVIDIA DLSS 3, no other platform will compare to the Cyberpunk 2077 experience on a GeForce RTX-powered PC.

But that’s just part of the bigger story.

Games like these offer a window into the kind of visual capabilities now at the fingertips of architects and designers. It’s a taste of the simulation capabilities being put to work by engineers at NASA and Lawrence Livermore National Laboratory. And it shows what’s possible in the next-generation environments for digital collaboration and simulation now being harnessed at scale by manufacturers such as BMW.

So muscle aside the geek in your life for an evening, grab the latest patch for Cyberpunk 2077 and a GeForce RTX 40 Series GPU, and gawk at the game’s abundance of power and potential, put on display right in front of your face.

It’s where we’ll see the future first, and that future is looking better than ever.

Find out more on GeForce.com.

Broadcaster ‘Nilson1489’ Shares Livestreaming Techniques and More This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

Broadcasters have an arsenal of new features and technologies at their disposal.

These include the eighth-generation NVIDIA video encoder on RTX 40 Series GPUs with support for the open AV1 video-coding format; new NVIDIA Broadcast app effects like Eye Contact and Vignette; and support for AV1 streaming in Discord — joining integrations with software including OBS Studio, Blackmagic Design’s DaVinci Resolve, Adobe Premiere Pro via the Voukoder plugin, Wondershare Filmora and Jianying.

Livestreamer, video editor and entertainer Nilson1489 steps In the NVIDIA Studio this week to demonstrate how these broadcasting advancements elevate his livestreams — in style and substance — using a GeForce RTX 4090 GPU and the power of AI.

In addition, the Warbb World Challenge, hosted by famed 3D artist Warbb, is underway. It invites artists to create their own 3D worlds. Prizes include an NVIDIA Studio laptop, RTX 40 Series GPUs from MSI and ArtStation gift cards. Learn more below.

Better Broadcast Benefits

Content creators looking to get into the livestreaming hustle, professional YouTubers and other broadcasters regardless of skill level or audience can benefit from using GeForce RTX 40 Series GPUs — featuring the eighth-generation NVIDIA video encoder, NVENC, with support for AV1.

The new AV1 encoder delivers 40% better efficiency. This means livestreams will appear as if bandwidth were increased by 40% — a big boost in image quality — in popular broadcast apps like OBS Studio.
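To make the efficiency claim concrete, here is a quick back-of-the-envelope sketch. The bitrates are made-up illustrative numbers, not NVIDIA measurements: an encoder that is 40% more efficient produces quality comparable to a baseline encoder given 40% more bitrate.

```python
# Illustrative only: what "40% better efficiency" means for a stream.
def effective_bitrate(actual_kbps: float, efficiency_gain: float = 0.40) -> float:
    """Bitrate a less efficient baseline encoder would need
    to match the quality of an AV1 stream at actual_kbps."""
    return actual_kbps * (1 + efficiency_gain)

# A 6,000 kbps AV1 stream looks roughly like an 8,400 kbps baseline stream.
print(round(effective_bitrate(6000)))  # 8400
```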

Discord, a communication platform with over 150 million active monthly users, has enabled end-to-end livestreams with AV1. This dramatically improves screen sharing — whether for livestreaming, online classes or virtual hangouts with friends — with crisp, clear image quality at up to 4K resolution and 60 frames per second.

AV1 improves encoding efficiency by up to 40%, delivering higher video quality at the same bandwidth.

The integration takes advantage of AV1’s advanced compression efficiency, so users with AV1 decode-capable hardware will experience even higher-quality video. Plus, users with slower internet connections can now enjoy higher-quality video streams at up to 4K resolution and 60fps.

In addition, NVIDIA Studio recently released NVIDIA Broadcast 1.4 — a tool for livestreaming and video conferencing that turns virtually any room into a home studio — with two effects, Eye Contact and Vignette, as well as an enhancement to Virtual Background that uses temporal information. Learn more about Broadcast — available for all RTX GPU owners including this week’s featured artist, Nilson1489.

Give a Boost to Broadcasts

Hailing from Hamburg, Germany, Nilson1489 is a self-taught livestreamer. He possesses a deep passion — stemming from his involvement in the livestreaming community — for helping to improve the creative workflows of emerging broadcasters who are eager to learn.

Nilson1489 said he invested in a GeForce RTX 4090 GPU expecting better visual livestreaming quality across the board and considerable time savings in his creative workflows. And that’s exactly what he experienced.

“With NVIDIA Broadcast, I’m able to look on my display to read notes or focus on tutorial elements without losing eye contact with the audience.”
—Nilson1489

“NVIDIA RTX GPUs have the best GPU acceleration for my creative apps as well as the best quality when it comes to recording inside OBS Studio,” the livestreamer said.

Nilson1489 streams primarily in OBS Studio, where the AV1 encoder’s 40% efficiency gain, the equivalent of a 40% bandwidth boost, dramatically improves video quality.

As a teacher for creators and consultant for various brands and clients, Nilson1489 leads daily calls and workshops over Microsoft Teams, Zoom and other video conference apps supported by NVIDIA Broadcast. He can read notes and present while keeping strong eye contact with his followers, made possible by NVIDIA Broadcast’s new Eye Contact feature.

His GeForce RTX 4090 GPU proved especially handy when exporting final video files with its dual AV1 video encoders, he said. When enabled in video-editing and livestreaming apps — such as Adobe Premiere Pro via the Voukoder plug-in, DaVinci Resolve, Wondershare Filmora and Jianying — export times are cut in half, with improved video quality. This enabled Nilson1489 to export from Premiere Pro and upload his videos to YouTube at least twice as fast as his competitors.

NVIDIA GeForce RTX GPUs.

The right GeForce RTX GPU can make a massive difference in the quality and quantity of content creation, as it did for Nilson1489.

Livestreamer Nilson1489.

Check out Nilson1489’s YouTube channel for streaming tutorials.

Create a 3D World, Win Serious Studio Hardware

3D talent Robin Snijders, aka Warbb, along with NVIDIA Studio, presents the Warbb World Challenge, which invites 3D artists to transform a traditionally boring space into an extraordinary scene using assets provided by Warbb. Everyone starts with the same template: an empty room, table, laptop and person.

A panel of creative talents, including Warbb, In the NVIDIA Studio artist I Am Fesq, Noxx_art and two NVIDIA reps will judge entries based on creativity, originality and visual appeal. Contest winners will receive incredible prizes, including an MSI Creator Z16P 3080 Ti Studio Laptop, RTX 40 Series GPUs from MSI and ArtStation gift cards.

The Warbb World Challenge’s grand prize: an MSI Creator Z16P Studio Laptop equipped with an NVIDIA RTX 3080 Ti GPU.

Enter by downloading the challenge assets, uploading the submission to ArtStation and sharing it on social media channels with the hashtags #WarbbWorld and #NVIDIAStudio. NVIDIA Studio could feature you in an in-depth interview to add exposure to your world.

The challenge runs through Sunday, Feb. 19. Terms and conditions apply.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.

What Are Large Language Models Used For?

AI applications are summarizing articles, writing stories and engaging in long conversations — and large language models are doing the heavy lifting.

A large language model, or LLM, is a deep learning algorithm that can recognize, summarize, translate, predict and generate text and other content based on knowledge gained from massive datasets.

Large language models are among the most successful applications of transformer models. They aren’t just for teaching AIs human languages, but for understanding proteins, writing software code, and much, much more.

In addition to accelerating natural language processing applications — like translation, chatbots and AI assistants — large language models are used in healthcare, software development and use cases in many other fields.

What Are Large Language Models Used For?

Language is used for more than human communication.

Code is the language of computers. Protein and molecular sequences are the language of biology. Large language models can be applied to such languages or scenarios in which communication of different types is needed.

These models broaden AI’s reach across industries and enterprises, and are expected to enable a new wave of research, creativity and productivity, as they can help to generate complex solutions for the world’s toughest problems.

For example, an AI system using large language models can learn from a database of molecular and protein structures, then use that knowledge to provide viable chemical compounds that help scientists develop groundbreaking vaccines or treatments.

Large language models are also helping to create reimagined search engines, tutoring chatbots, composition tools for songs, poems, stories and marketing materials, and more.

How Do Large Language Models Work?

Large language models learn from huge volumes of data. As its name suggests, central to an LLM is the size of the dataset it’s trained on. But the definition of “large” is growing, along with AI.

Now, large language models are typically trained on datasets large enough to include nearly everything that has been written on the internet over a large span of time.

Such massive amounts of text are fed into the AI algorithm using unsupervised learning — when a model is given a dataset without explicit instructions on what to do with it. Through this method, a large language model learns words, as well as the relationships between and concepts behind them. It could, for example, learn to differentiate the two meanings of the word “bark” based on its context.
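A toy sketch of the idea, using nothing but word counts over unlabeled text, shows how context alone separates the two senses of “bark.” The tiny corpus and the count-table approach are purely illustrative; real LLMs learn these relationships with neural networks, not lookup tables:

```python
from collections import Counter, defaultdict

# Self-supervised learning in miniature: no labels, just raw text.
corpus = (
    "the dog began to bark loudly . "
    "the tree bark was rough . "
    "the dog ran to the tree ."
).split()

# Count which word precedes each word (context statistics).
prev_counts = defaultdict(Counter)
for prev, cur in zip(corpus, corpus[1:]):
    prev_counts[cur][prev] += 1

# The two senses of "bark" show up as two distinct contexts:
# "to bark" (verb) vs. "tree bark" (noun).
print(dict(prev_counts["bark"]))  # {'to': 1, 'tree': 1}
```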

And just as a person who masters a language can guess what might come next in a sentence or paragraph — or even come up with new words or concepts themselves — a large language model can apply its knowledge to predict and generate content.
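The prediction loop itself is simple to sketch. The lookup table below stands in for a trained model and is entirely made up; an actual LLM replaces it with a neural network that scores every word in its vocabulary:

```python
# Autoregressive generation in miniature: predict the next token,
# append it, repeat. The table below plays the role of the model.
most_likely_next = {"once": "upon", "upon": "a", "a": "time", "time": "."}

def generate(start: str, steps: int = 4) -> str:
    tokens = [start]
    for _ in range(steps):
        nxt = most_likely_next.get(tokens[-1])
        if nxt is None:  # model has no prediction: stop early
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("once"))  # once upon a time .
```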

Large language models can also be customized for specific use cases, including through techniques like fine-tuning or prompt-tuning, which is the process of feeding the model small bits of data to focus on, to train it for a specific application.
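The practical difference between full fine-tuning and prompt-tuning is how many parameters get trained. This sketch uses made-up sizes and a single embedding matrix standing in for a full model, so the exact numbers are illustrative only:

```python
import numpy as np

vocab, d_model, prompt_len = 1000, 64, 8

# Stand-in for frozen pretrained weights (a real LLM has many
# more parameter matrices than just token embeddings).
pretrained = np.zeros((vocab, d_model))

# Fine-tuning: every pretrained parameter is updated.
finetune_params = pretrained.size

# Prompt-tuning: weights stay frozen; only a short sequence of
# learned "soft prompt" vectors, prepended to the input, is trained.
soft_prompt = np.zeros((prompt_len, d_model))
prompt_tune_params = soft_prompt.size

print(finetune_params, prompt_tune_params)  # 64000 512
```

Even in this toy setup, prompt-tuning trains a tiny fraction of the parameters, which is why it is attractive for adapting one large base model to many applications.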

Thanks to its computational efficiency in processing sequences in parallel, the transformer model architecture is the building block behind the largest and most powerful LLMs.
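That parallelism comes from attention being expressible as a few matrix products over the whole sequence at once. Here is a minimal NumPy sketch of scaled dot-product attention; it is a simplification, as real transformers add learned projections, multiple heads, masking and more:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Every position attends to every other position in one
    batched matrix product -- which is why transformers process
    sequences in parallel rather than step by step."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # (seq, seq) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # weighted mix of values

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 16))                 # 5 tokens, 16-dim embeddings
out = scaled_dot_product_attention(x, x, x)  # self-attention
print(out.shape)  # (5, 16)
```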

Top Applications for Large Language Models

Large language models are unlocking new possibilities in areas such as search engines, natural language processing, healthcare, robotics and code generation.

The popular ChatGPT AI chatbot is one application of a large language model. It can be used for a myriad of natural language processing tasks.

The nearly infinite applications for LLMs also include:

  • Retailers and other service providers can use large language models to provide improved customer experiences through dynamic chatbots, AI assistants and more.
  • Search engines can use large language models to provide more direct, human-like answers.
  • Life science researchers can train large language models to understand proteins, molecules, DNA and RNA.
  • Developers can write software and teach robots physical tasks with large language models.
  • Marketers can train a large language model to organize customer feedback and requests into clusters, or segment products into categories based on product descriptions.
  • Financial advisors can summarize earnings calls and create transcripts of important meetings using large language models. And credit-card companies can use LLMs for anomaly detection and fraud analysis to protect consumers.
  • Legal teams can use large language models to help with legal paraphrasing and scribing.

Running these massive models in production efficiently is resource-intensive and requires expertise, among other challenges, so enterprises turn to NVIDIA Triton Inference Server, software that helps standardize model deployment and deliver fast and scalable AI in production.

Where to Find Large Language Models

In June 2020, OpenAI released GPT-3 as a service, powered by a 175-billion-parameter model that can generate text and code with short written prompts.

In 2021, NVIDIA and Microsoft developed Megatron-Turing Natural Language Generation 530B, one of the world’s largest models for reading comprehension and natural language inference, which eases tasks like summarization and content generation.

And Hugging Face last year introduced BLOOM, an open large language model that’s able to generate text in 46 natural languages and over a dozen programming languages.

Another LLM, Codex, turns text to code for software engineers and other developers.

NVIDIA offers tools to ease the building and deployment of large language models:

  • NVIDIA NeMo LLM service provides a fast path to customizing large language models and deploying them at scale using NVIDIA’s managed cloud API, or through private and public clouds.
  • NVIDIA NeMo Megatron, part of the NVIDIA AI platform, is a framework for easy, efficient, cost-effective training and deployment of large language models. Designed for enterprise application development, NeMo Megatron provides an end-to-end workflow for automated distributed data processing, training large-scale, customized GPT-3, T5 and multilingual T5 models, and deploying models for inference at scale.
  • NVIDIA BioNeMo is a domain-specific managed service and framework for large language models in proteomics, small molecules, DNA and RNA. It’s built on NVIDIA NeMo Megatron for training and deploying large biomolecular transformer AI models at supercomputing scale.

Challenges of Large Language Models

Scaling and maintaining large language models can be difficult and expensive.

Building a foundational large language model often requires months of training time and millions of dollars.

And because LLMs require a significant amount of training data, developers and enterprises can find it a challenge to access large-enough datasets.

Due to the scale of large language models, deploying them requires technical expertise, including a strong understanding of deep learning, transformer models and distributed software and hardware.

Many leaders in tech are working to advance development and build resources that can expand access to large language models, allowing consumers and enterprises of all sizes to reap their benefits.

Learn more about large language models.

DLSS 3 Delivers Ultimate Boost in Latest Game Updates on GeForce NOW

GeForce NOW RTX 4080 SuperPODs are rolling out now, bringing RTX 4080-class performance and features to Ultimate members — including support for NVIDIA Ada Lovelace GPU architecture technologies like NVIDIA DLSS 3.

This GFN Thursday brings updates to some of GeForce NOW’s hottest games that take advantage of these amazing technologies, all from the cloud.

Plus, RTX 4080 SuperPOD upgrades are nearly finished in the London data center, expanding the number of regions where Ultimate members can experience the most powerful cloud gaming technology on the planet. Look for updates on Twitter once the upgrade is complete and be sure to check back each week to see which cities light up next on the map.

Members can also look for six more supported games in the GeForce NOW library this week. 

AI-Powered Performance

NVIDIA DLSS has revolutionized graphics rendering, using AI and GeForce RTX Tensor Cores to boost frame rates while delivering crisp, high-quality images that rival native resolution.

Powered by new hardware capabilities of the Ada Lovelace architecture, DLSS 3 generates entirely new high-quality frames, rather than just pixels. It combines DLSS Super Resolution technology and DLSS Frame Generation to reconstruct seven-eighths of the displayed pixels, accelerating performance.
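The seven-eighths figure follows from simple arithmetic, assuming DLSS Super Resolution renders one in four displayed pixels and Frame Generation supplies every other frame entirely. The breakdown below is an illustration of that claim, not an official spec:

```python
super_res_rendered = 1 / 4   # pixels rendered per frame (e.g. 1080p -> 4K)
frame_gen_rendered = 1 / 2   # fraction of frames rendered traditionally

# Combined: only 1/8 of displayed pixels are rendered the usual way;
# the remaining 7/8 are reconstructed by DLSS.
rendered = super_res_rendered * frame_gen_rendered
reconstructed = 1 - rendered

print(rendered, reconstructed)  # 0.125 0.875
```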

DLSS 3 games are backwards compatible with DLSS 2 technology — when developers integrate DLSS 3, DLSS 2, aka DLSS Super Resolution, is supported by default. Additionally, integrations of DLSS 3 include NVIDIA Reflex, reducing system latency for all GeForce RTX users and making games more responsive.

Support for DLSS 3 is growing, and soon GeForce NOW Ultimate members can experience this technology in new updates to HITMAN World of Assassination and Marvel’s Midnight Suns.

A Whole New ‘World of Assassination’

The critically acclaimed HITMAN 3 from IOI transforms into HITMAN World of Assassination, an upgrade that includes content from HITMAN 1, HITMAN 2 and HITMAN 3. With DLSS 3 support, streaming from the cloud in 4K looks better than ever, even with ray tracing and settings cranked to the max.

HITMAN World of Assassination on GeForce NOW
Death waits for no one, especially when streaming from the cloud.

Become legendary assassin Agent 47 and use creativity and improvisation to execute ingenious, spectacular eliminations in sprawling sandbox locations all around the globe. Stick to the shadows to stalk and eliminate targets — or take them out in plain sight.

Along with DLSS 3 support, Ultimate members can enjoy ray-traced opaque reflections and shadows in the world of HITMAN as they explore open-world missions with multiple ways to succeed. 

Deadpool Does DLSS 3

Marvel’s Midnight Suns’ first downloadable content, The Good, The Bad, and the Undead, adds Deadpool to the team roster, along with new story missions, new enemies and more. Add in DLSS 3 support coming soon, and Ultimate members have a lot to look forward to.

Marvel Midnight Suns on GeForce NOW
Don’t miss out on Deadpool in ‘Marvel’s Midnight Suns’ first DLC.

Marvel’s Midnight Suns launched last month to critical acclaim: VGC awarded it a five-out-of-five rating, calling it a “modern strategy classic.” PC Gamer said it was “completely brilliant” and scored it an 88 out of 100, and Rock Paper Shotgun called it “one of the best superhero games full stop.”

Ultimate members can explore the abbey grounds and get to know the Merc with a Mouth at up to 4K resolution and 120 frames per second, or immerse themselves in their mission with ultrawide resolutions at up to 3840 x 1600 at 120 frames per second — plus many other popular formats including 3440 x 1440 and 2560 x 1080.

GeForce NOW members can also take their games and save data with them wherever they go, from underpowered PCs to Macs, Samsung and LG TVs, mobile devices and Chromebooks.

Game On

Get ready to game: Six more games join the supported list in the GeForce NOW library this week:

  • Tom Clancy’s Ghost Recon: Breakpoint (New release on Steam, Jan. 23)
  • Oddballers (New release on Ubisoft Connect, Jan. 26)
  • Watch Dogs: Legion (New release on Steam, Jan. 26)
  • Cygnus Enterprises (Steam)
  • Rain World (Steam)
  • The Eternal Cylinder (Steam)

There’s only one question left to kick off a weekend full of gaming in the cloud. Let us know on Twitter or in the comments below.
