Deloitte’s Nitin Mittal on the Secrets of ‘All-In’ AI Success

Artificial intelligence is the new electricity. The fifth industrial revolution. And companies that go all-in on AI are reaping the rewards. So how do you make that happen?

That big question — how? — is explored by Nitin Mittal, principal at Deloitte, one of the world’s largest professional services organizations, and co-author Thomas Davenport in their new book “All in on AI: How Smart Companies Win Big with Artificial Intelligence.” 

On the latest episode of NVIDIA’s AI Podcast, host Noah Kravitz speaks with Mittal, who leads Deloitte’s artificial intelligence growth platform. He describes how companies across a wide variety of industries have used AI to radically transform their organizations and achieve competitive advantage.

The book, from the Harvard Business Review Press, explores the importance of a company-wide commitment to AI and the role of leadership in driving the adoption and implementation of the technology. Mittal emphasizes that companies must have a clear strategy and plan, and invest in the necessary technology and talent to make the most of AI.

You Might Also Like

Art(ificial) Intelligence: Pindar Van Arman Builds Robots That Paint
Pindar Van Arman, an American artist and roboticist, designs painting robots that explore the differences between human and computational creativity. Since his first system in 2005, he has built multiple artificially creative robots. The most famous, Cloud Painter, was awarded first place at Robotart 2018.

Real or Not Real? Attorney Steven Frank Uses Deep Learning to Authenticate Art
Steven Frank is a partner at the law firm Morgan Lewis, specializing in intellectual property and commercial technology law. He’s also half of the husband-wife team that used convolutional neural networks to authenticate artistic masterpieces, including da Vinci’s Salvator Mundi.

GANTheftAuto: Harrison Kinsley on AI-Generated Gaming Environments
Humans playing games against machines is nothing new, but now computers can develop games for people to play. Programming enthusiast and social media influencer Harrison Kinsley created GANTheftAuto, a neural network that generates a playable chunk of the classic video game Grand Theft Auto V.

Subscribe to the AI Podcast on Your Favorite Platform

You can now listen to the AI Podcast through Amazon Music, Apple Podcasts, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Cyberpunk 2077 Brings a Taste of the Future with DLSS

Analyst reports. Academic papers. Ph.D. programs. There are a lot of places you can go to get a glimpse of the future. But the best place might just be El Coyote Cojo, a whiskey-soaked dive bar that doesn’t exist in real life.

Fire up Cyberpunk 2077 and you’ll see much more than the watering hole’s colorful clientele. You’ll see refractions and reflections, shadows and smoke, all in the service of creating more than just eye candy — each element works in tandem with the game’s expansive and engaging story.

Patching In: Cyberpunk 2077’s DLSS 3 Upgrade

It’s a tale that gets more mesmerizing with every patch — the updates game developers periodically release to keep their games at the cutting edge. Today’s addition brings NVIDIA DLSS 3, the latest in neural graphics.

DLSS 3 is a package that includes a number of sophisticated technologies. It combines DLSS Super Resolution, all-new DLSS Frame Generation and NVIDIA Reflex, running on the new hardware capabilities of GeForce RTX 40 Series GPUs, to multiply performance while maintaining great image quality and responsiveness.

The performance uplift this delivers lets PC gamers experience more of Cyberpunk 2077’s gritty glory. And it sets the stage for the pending Ray Tracing: Overdrive Mode, an update that will escalate the game’s use of ray tracing, a technique long used to create blockbuster films, and further enhance its already-incredible visuals.

The gaming press — perhaps the most brutal critics of the visual arts — are already raving about DLSS 3.

“I’m deeply in love with DLSS with Frame Generation,” gushes PC Gamer. “DLSS 3 is incredible, and NVIDIA’s tech is undeniably a selling point for the [GeForce RTX] 4080,” asserts PCGamesN. “[I]t’s a phenomenal achievement in graphics performance,” states Digital Foundry.

Twenty-one games now support DLSS 3, including Dying Light 2 Stay Human, Hitman 3, Marvel’s Midnight Suns, Microsoft Flight Simulator, Portal with RTX, The Witcher 3: Wild Hunt and Warhammer 40,000: Darktide. More are coming, including Atomic Heart, ILL SPACE and Warhaven.

Playing with the Future

There are many tales on the increasingly immersive streets of Cyberpunk 2077’s Night City, but the one even non-gamers should pay attention to is the story behind these stories: gaming as a proving ground for the technologies that will shape the future, a future Cyberpunk 2077 is simulating right before our eyes.

This is the best of the best. CD PROJEKT RED is known for supporting its flagship titles like Cyberpunk 2077 and The Witcher 3: Wild Hunt for extended periods of time with a variety of patches that take advantage of modern hardware. It has earned a reputation as a game development studio that embraces emerging technologies.

That makes its games more than a cultural phenomenon. They’re a technology-proving ground, a position held over the past two decades by a string of titles revered by gamers, such as Crysis, Metro and Far Cry.

PC Games Unleash Global Innovation

Building digital worlds such as these is the hard computing problem — the meanest streets in our increasingly digital world — out of which GPUs, with their parallel computing engines, emerged.

A decade ago, GPUs sparked the deep-learning revolution that has upended trillion-dollar industries around the world. That revolution continues with the latest advancements in generative AI, such as ChatGPT and DALL-E, which have erupted over the past month into a global cultural sensation.

It’s a case study in the disruptive innovations Harvard Business School Professor Clayton Christensen identified as lurking in unexpected places.

DLSS brings that revolution full circle, using the same deep-learning techniques harnessed for everything from cutting-edge science to self-driving cars to advance the visual quality of games.

Trained on NVIDIA’s supercomputers, DLSS enhances a new generation of games that demand ever more performance. And the use of DLSS 3 is just one example of this benchmark game’s innovations — innovations woven into the texture of the game’s storytelling.

CD PROJEKT RED uses DirectX Ray Tracing, for example, a lighting technique that emulates the way light reflects and refracts in the real world to provide a more believable environment than what’s typically seen using static lighting in more traditional games.

The game uses several ray-tracing techniques to render a massive future city at incredible levels of detail. The current version of the game uses ray-traced shadows, reflections, diffuse illumination and ambient occlusion.

And if you turn on “Psycho mode” in the game’s ray-traced lighting settings, you’ll even see ray-traced global illumination as sunlight bounces realistically around the scene.

Cyberpunk 2077’s Visual Storytelling Packs a Punch 

The result of all these features is a visually stunning experience that complements the world’s story and tone: sprawling cityscapes that use subtle shadows to define depth, districts bathed in neon lights, and windows, mirrors and puddles glistening with accurate reflections.

With realistic shadows and lighting and the added performance of NVIDIA DLSS 3, no other platform will compare to the Cyberpunk 2077 experience on a GeForce RTX-powered PC.

But that’s just part of the bigger story.

Games like these offer a window into the kind of visual capabilities now at the fingertips of architects and designers. It’s a taste of the simulation capabilities being put to work by engineers at NASA and Lawrence Livermore National Laboratory. And it shows what’s possible in the next-generation environments for digital collaboration and simulation now being harnessed at scale by manufacturers such as BMW.

So muscle the geek in your life away from the PC for an evening, grab the latest patch for Cyberpunk 2077 and a GeForce RTX 40 Series GPU, and gawk at the game’s abundance of power and potential, put on display right in front of your face.

It’s where we’ll see the future first, and that future is looking better than ever.

Find out more on GeForce.com.

Broadcaster ‘Nilson1489’ Shares Livestreaming Techniques and More This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

Broadcasters have an arsenal of new features and technologies at their disposal.

These include the eighth-generation NVIDIA video encoder on RTX 40 Series GPUs with support for the open AV1 video-coding format; new NVIDIA Broadcast app effects like Eye Contact and Vignette; and support for AV1 streaming in Discord — joining integrations with software including OBS Studio, Blackmagic Design’s DaVinci Resolve, Adobe Premiere Pro via the Voukoder plugin, Wondershare Filmora and Jianying.

Livestreamer, video editor and entertainer Nilson1489 steps In the NVIDIA Studio this week to demonstrate how these broadcasting advancements elevate his livestreams — in style and substance — using a GeForce RTX 4090 GPU and the power of AI.

In addition, the Warbb World Challenge, hosted by famed 3D artist Warbb, is underway. It invites artists to create their own 3D worlds. Prizes include an NVIDIA Studio laptop, RTX 40 Series GPUs from MSI and ArtStation gift cards. Learn more below.

Better Broadcast Benefits

Content creators looking to get into the livestreaming hustle, professional YouTubers and other broadcasters regardless of skill level or audience can benefit from using GeForce RTX 40 Series GPUs — featuring the eighth-generation NVIDIA video encoder, NVENC, with support for AV1.

The new AV1 encoder delivers 40% better efficiency. This means livestreams will appear as if bandwidth was increased by 40% — a big boost in image quality — in popular broadcast apps like OBS Studio.
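
To put that figure in perspective, here’s a quick back-of-envelope comparison. The 40% figure comes from the claim above (NVIDIA compares AV1 against H.264); the bitrate is an assumed, typical streaming value:

```python
# Illustrative bitrate math for AV1's ~40% efficiency gain.
# The percentage is from the claim above; the bitrate is an assumption.
h264_bitrate_kbps = 6_000          # a common 1080p60 streaming bitrate
efficiency_gain = 0.40

effective_kbps = h264_bitrate_kbps * (1 + efficiency_gain)
print(f"A {h264_bitrate_kbps:,} kbps AV1 stream delivers quality comparable "
      f"to an H.264 stream at roughly {effective_kbps:,.0f} kbps")
```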

Discord, a communication platform with over 150 million active monthly users, has enabled end-to-end livestreams with AV1. This dramatically improves screen sharing — whether for livestreaming, online classes or virtual hangouts with friends — with crisp, clear image quality at up to 4K resolution and 60 frames per second.

AV1 encoding improves effective bandwidth and video quality by up to 40%.

The integration takes advantage of AV1’s advanced compression efficiency, so users with AV1 decode-capable hardware will experience even higher-quality video. Plus, users with slower internet connections can now enjoy higher-quality video streams at up to 4K resolution and 60fps.

In addition, NVIDIA Studio recently released NVIDIA Broadcast 1.4 — a tool for livestreaming and video conferencing that turns virtually any room into a home studio — with two effects, Eye Contact and Vignette, as well as an enhancement to Virtual Background that uses temporal information. Learn more about Broadcast — available for all RTX GPU owners including this week’s featured artist, Nilson1489.

Give a Boost to Broadcasts

Hailing from Hamburg, Germany, Nilson1489 is a self-taught livestreamer. He possesses a deep passion — stemming from his involvement in the livestreaming community — for helping to improve the creative workflows of emerging broadcasters who are eager to learn.

Nilson1489 said he invested in a GeForce RTX 4090 GPU expecting better visual livestreaming quality across the board and considerable time savings in his creative workflows. And that’s exactly what he experienced.

“With NVIDIA Broadcast, I’m able to look on my display to read notes or focus on tutorial elements without losing eye contact with the audience.”
—Nilson1489

“NVIDIA RTX GPUs have the best GPU acceleration for my creative apps as well as the best quality when it comes to recording inside OBS Studio,” the livestreamer said.

Nilson1489 streams primarily in OBS Studio, which means the AV1 encoder delivers the equivalent of a 40% bandwidth boost, dramatically improving video quality.

As a teacher for creators and consultant for various brands and clients, Nilson1489 leads daily calls and workshops over Microsoft Teams, Zoom and other video conference apps supported by NVIDIA Broadcast. He can read notes and present while keeping strong eye contact with his followers, made possible by NVIDIA Broadcast’s new Eye Contact feature.

His GeForce RTX 4090 GPU proved especially handy when exporting final video files with its dual AV1 video encoders, he said. When enabled in video-editing and livestreaming apps — such as Adobe Premiere Pro via the Voukoder plug-in, DaVinci Resolve, Wondershare Filmora and Jianying — export times are cut in half, with improved video quality. This enabled Nilson1489 to export from Premiere Pro and upload his videos to YouTube at least twice as fast as his competitors.

The right GeForce RTX GPU can make a massive difference in the quality and quantity of content creation, as it did for Nilson1489.

Livestreamer Nilson1489.

Check out Nilson1489’s YouTube channel for streaming tutorials.

Create a 3D World, Win Serious Studio Hardware

3D talent Robin Snijders, aka Warbb, along with NVIDIA Studio, presents the Warbb World Challenge, which invites 3D artists to transform a traditionally boring space into an extraordinary scene using assets provided by Warbb. Everyone starts with the same template: an empty room, table, laptop and person.

A panel of creative talents, including Warbb, In the NVIDIA Studio artist I Am Fesq, Noxx_art and two NVIDIA reps will judge entries based on creativity, originality and visual appeal. Contest winners will receive incredible prizes, including an MSI Creator Z16P 3080 Ti Studio Laptop, RTX 40 Series GPUs from MSI and ArtStation gift cards.

The Warbb World Challenge’s grand prize: an MSI Creator Z16P Studio Laptop equipped with an NVIDIA RTX 3080 Ti GPU.

Enter by downloading the challenge assets, uploading the submission to ArtStation with the hashtags #WarbbWorld and #NVIDIAStudio, then sharing it on social media channels with the same hashtags. NVIDIA Studio could feature you in an in-depth interview to add exposure to your world.

The challenge runs through Sunday, Feb. 19. Terms and conditions apply.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.

What Are Large Language Models Used For?

AI applications are summarizing articles, writing stories and engaging in long conversations — and large language models are doing the heavy lifting.

A large language model, or LLM, is a deep learning algorithm that can recognize, summarize, translate, predict and generate text and other content based on knowledge gained from massive datasets.

Large language models are among the most successful applications of transformer models. They aren’t just for teaching AIs human languages, but for understanding proteins, writing software code, and much, much more.

In addition to accelerating natural language processing applications — like translation, chatbots and AI assistants — large language models are used in healthcare, software development and use cases in many other fields.

What Are Large Language Models Used For?

Language is used for more than human communication.

Code is the language of computers. Protein and molecular sequences are the language of biology. Large language models can be applied to such languages or scenarios in which communication of different types is needed.

These models broaden AI’s reach across industries and enterprises, and are expected to enable a new wave of research, creativity and productivity, as they can help to generate complex solutions for the world’s toughest problems.

For example, an AI system using large language models can learn from a database of molecular and protein structures, then use that knowledge to provide viable chemical compounds that help scientists develop groundbreaking vaccines or treatments.

Large language models are also helping to create reimagined search engines, tutoring chatbots, composition tools for songs, poems, stories and marketing materials, and more.

How Do Large Language Models Work?

Large language models learn from huge volumes of data. As the name suggests, central to an LLM is the size of the dataset it’s trained on. But the definition of “large” is growing, along with AI.

Now, large language models are typically trained on datasets large enough to include nearly everything that has been written on the internet over a large span of time.

Such massive amounts of text are fed into the AI algorithm using unsupervised learning — when a model is given a dataset without explicit instructions on what to do with it. Through this method, a large language model learns words, as well as the relationships between and concepts behind them. It could, for example, learn to differentiate the two meanings of the word “bark” based on its context.
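
The “bark” example is easy to see in practice with contextual embeddings. Below is a minimal sketch using the Hugging Face transformers library and BERT, a small masked language model, as a stand-in for a full-scale LLM; the model choice and sentences are illustrative:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

sentences = ["The dog began to bark at the mailman.",
             "Moss covered the bark of the old oak tree."]

vectors = []
for s in sentences:
    inputs = tok(s, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]   # (tokens, 768)
    # "bark" is a single token in this vocabulary; grab its contextual vector
    pos = inputs.input_ids[0].tolist().index(tok.convert_tokens_to_ids("bark"))
    vectors.append(hidden[pos])

# the same word gets clearly different representations in different contexts
sim = torch.cosine_similarity(vectors[0], vectors[1], dim=0)
print(f"cosine similarity between the two 'bark' vectors: {sim:.3f}")
```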

And just as a person who masters a language can guess what might come next in a sentence or paragraph — or even come up with new words or concepts themselves — a large language model can apply its knowledge to predict and generate content.

Large language models can also be customized for specific use cases through techniques like fine-tuning and prompt-tuning, which feed the model small amounts of task-focused data to train it for a particular application.
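
As a concrete illustration of prompt-tuning: the base model stays frozen while a handful of trainable “soft prompt” embeddings are prepended to the input. A minimal sketch follows; GPT-2 and the numbers here are stand-ins, not a reference implementation:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
for p in model.parameters():
    p.requires_grad = False            # the base LLM is left untouched

n_virtual_tokens = 8                   # only these embeddings get trained
soft_prompt = torch.nn.Parameter(
    torch.randn(n_virtual_tokens, model.config.n_embd) * 0.02)

def forward_with_soft_prompt(input_ids):
    token_embeds = model.transformer.wte(input_ids)           # (B, T, d)
    prompt = soft_prompt.unsqueeze(0).expand(input_ids.size(0), -1, -1)
    embeds = torch.cat([prompt, token_embeds], dim=1)         # prepend prompt
    return model(inputs_embeds=embeds).logits

ids = tok("Classify the sentiment: great GPU, would buy again.",
          return_tensors="pt").input_ids
logits = forward_with_soft_prompt(ids)   # optimize soft_prompt on task data
print(logits.shape)
```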

Thanks to its computational efficiency in processing sequences in parallel, the transformer model architecture is the building block behind the largest and most powerful LLMs.
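
That parallelism comes from self-attention: every position in a sequence attends to every other position in a single matrix operation, rather than step by step as in recurrent networks. Here’s a minimal sketch of scaled dot-product attention, the core building block:

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    # all pairwise interactions are computed in one matrix multiply,
    # which is why transformers process whole sequences in parallel
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5     # (seq_len, seq_len)
    weights = F.softmax(scores, dim=-1)             # attention weights
    return weights @ v                              # weighted mix of values

seq_len, d_model = 6, 16
x = torch.randn(seq_len, d_model)                   # toy token embeddings
out = scaled_dot_product_attention(x, x, x)         # self-attention
print(out.shape)                                    # torch.Size([6, 16])
```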

Top Applications for Large Language Models

Large language models are unlocking new possibilities in areas such as search engines, natural language processing, healthcare, robotics and code generation.

The popular ChatGPT AI chatbot is one application of a large language model. It can be used for a myriad of natural language processing tasks.

The nearly infinite applications for LLMs also include:

  • Retailers and other service providers can use large language models to provide improved customer experiences through dynamic chatbots, AI assistants and more.
  • Search engines can use large language models to provide more direct, human-like answers.
  • Life science researchers can train large language models to understand proteins, molecules, DNA and RNA.
  • Developers can write software and teach robots physical tasks with large language models.
  • Marketers can train a large language model to organize customer feedback and requests into clusters, or segment products into categories based on product descriptions.
  • Financial advisors can summarize earnings calls and create transcripts of important meetings using large language models. And credit-card companies can use LLMs for anomaly detection and fraud analysis to protect consumers.
  • Legal teams can use large language models to help with legal paraphrasing and scribing.

Running these massive models in production efficiently is resource-intensive and requires expertise, among other challenges. That’s why enterprises turn to NVIDIA Triton Inference Server, software that helps standardize model deployment and deliver fast, scalable AI in production.
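
For a sense of what that looks like in practice, here’s a minimal sketch of querying a deployed model through Triton’s Python HTTP client. The server address is the default; the model and tensor names (“my_llm”, “input_ids”, “logits”) are hypothetical and depend on the deployed model’s configuration:

```python
import numpy as np
import tritonclient.http as httpclient

# connect to a Triton server assumed to be running on the default HTTP port
client = httpclient.InferenceServerClient(url="localhost:8000")

# a toy batch of token IDs; shape and dtype must match the model config
input_ids = np.array([[101, 2023, 2003, 1037, 3231, 102]], dtype=np.int64)
infer_input = httpclient.InferInput("input_ids", list(input_ids.shape), "INT64")
infer_input.set_data_from_numpy(input_ids)

result = client.infer(model_name="my_llm", inputs=[infer_input])
print(result.as_numpy("logits").shape)   # output name is model-specific
```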

Where to Find Large Language Models

In June 2020, OpenAI released GPT-3 as a service, powered by a 175-billion-parameter model that can generate text and code with short written prompts.
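
Using GPT-3 as a service amounts to a short API call. A minimal sketch with the openai Python package as it existed at the time (pre-1.0); the API key, model name and prompt are placeholders:

```python
import openai

openai.api_key = "YOUR_API_KEY"   # placeholder

response = openai.Completion.create(
    model="text-davinci-003",                  # a GPT-3-family model
    prompt="Write a two-line poem about supercomputers:",
    max_tokens=50,
)
print(response.choices[0].text.strip())
```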

In 2021, NVIDIA and Microsoft developed Megatron-Turing Natural Language Generation 530B, one of the world’s largest models for reading comprehension and natural language inference, which eases tasks like summarization and content generation.

And Hugging Face last year introduced BLOOM, an open large language model that’s able to generate text in 46 natural languages and over a dozen programming languages.

Another LLM, Codex, turns text to code for software engineers and other developers.

NVIDIA offers tools to ease the building and deployment of large language models:

  • NVIDIA NeMo LLM service provides a fast path to customizing large language models and deploying them at scale using NVIDIA’s managed cloud API, or through private and public clouds.
  • NVIDIA NeMo Megatron, part of the NVIDIA AI platform, is a framework for easy, efficient, cost-effective training and deployment of large language models. Designed for enterprise application development, NeMo Megatron provides an end-to-end workflow for automated distributed data processing, training large-scale, customized GPT-3, T5 and multilingual T5 models, and deploying models for inference at scale.
  • NVIDIA BioNeMo is a domain-specific managed service and framework for large language models in proteomics, small molecules, DNA and RNA. It’s built on NVIDIA NeMo Megatron for training and deploying large biomolecular transformer AI models at supercomputing scale.

Challenges of Large Language Models

Scaling and maintaining large language models can be difficult and expensive.

Building a foundational large language model often requires months of training time and millions of dollars.

And because LLMs require a significant amount of training data, developers and enterprises can find it a challenge to access large-enough datasets.

Due to the scale of large language models, deploying them requires technical expertise, including a strong understanding of deep learning, transformer models and distributed software and hardware.

Many leaders in tech are working to advance development and build resources that can expand access to large language models, allowing consumers and enterprises of all sizes to reap their benefits.

Learn more about large language models.

DLSS 3 Delivers Ultimate Boost in Latest Game Updates on GeForce NOW

GeForce NOW RTX 4080 SuperPODs are rolling out now, bringing RTX 4080-class performance and features to Ultimate members — including support for NVIDIA Ada Lovelace GPU architecture technologies like NVIDIA DLSS 3.

This GFN Thursday brings updates to some of GeForce NOW’s hottest games that take advantage of these amazing technologies, all from the cloud.

Plus, RTX 4080 SuperPOD upgrades are nearly finished in the London data center, expanding the number of regions where Ultimate members can experience the most powerful cloud gaming technology on the planet. Look for updates on Twitter once the upgrade is complete and be sure to check back each week to see which cities light up next on the map.

Members can also look for six more supported games in the GeForce NOW library this week. 

AI-Powered Performance

NVIDIA DLSS has revolutionized graphics rendering, using AI and GeForce RTX Tensor Cores to boost frame rates while delivering crisp, high-quality images that rival native resolution.

Powered by new hardware capabilities of the Ada Lovelace architecture, DLSS 3 generates entirely new high-quality frames, rather than just pixels. It combines DLSS Super Resolution technology and DLSS Frame Generation to reconstruct seven-eighths of the displayed pixels, accelerating performance.
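
That seven-eighths figure follows from simple arithmetic, assuming Super Resolution upscales from one-quarter of the target resolution (its Performance mode) and Frame Generation synthesizes every other frame:

```python
# Of the pixels displayed, how many are traditionally rendered?
rendered_per_frame = 1 / 4      # Super Resolution: render 1 of every 4 pixels
rendered_frames = 1 / 2         # Frame Generation: render every other frame

rendered_fraction = rendered_per_frame * rendered_frames   # 1/8
reconstructed = 1 - rendered_fraction
print(reconstructed)            # 0.875 -> seven-eighths are AI-reconstructed
```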

DLSS 3 games are backwards compatible with DLSS 2 technology — when developers integrate DLSS 3, DLSS 2, aka DLSS Super Resolution, is supported by default. Additionally, integrations of DLSS 3 include NVIDIA Reflex, reducing system latency for all GeForce RTX users and making games more responsive.

Support for DLSS 3 is growing, and soon GeForce NOW Ultimate members can experience this technology in new updates to HITMAN World of Assassination and Marvel’s Midnight Suns.

A Whole New ‘World of Assassination’

The critically acclaimed HITMAN 3 from IOI transforms into HITMAN World of Assassination, an upgrade that includes content from HITMAN 1, HITMAN 2 and HITMAN 3. With DLSS 3 support, streaming from the cloud in 4K looks better than ever, even with ray tracing and settings cranked to the max.

Death waits for no one, especially when streaming from the cloud.

Become legendary assassin Agent 47 and use creativity and improvisation to execute ingenious, spectacular eliminations in sprawling sandbox locations all around the globe. Stick to the shadows to stalk and eliminate targets — or take them out in plain sight.

Along with DLSS 3 support, Ultimate members can enjoy ray-traced opaque reflections and shadows in the world of HITMAN as they explore open-world missions with multiple ways to succeed. 

Deadpool Does DLSS 3

Marvel’s Midnight Suns’ first downloadable content, The Good, The Bad, and the Undead, adds Deadpool to the team roster, along with new story missions, new enemies and more. Add in DLSS 3 support coming soon, and Ultimate members have a lot to look forward to.

Don’t miss out on Deadpool in ‘Marvel’s Midnight Suns’ first DLC.

Launched last month to critical acclaim, Marvel’s Midnight Suns earned a five-out-of-five rating from VGC, which called it a “modern strategy classic.” PC Gamer said it was “completely brilliant” and scored it an 88 out of 100, and Rock Paper Shotgun called it “one of the best superhero games full stop.”

Ultimate members can explore the abbey grounds and get to know the Merc with a Mouth at up to 4K resolutions and 120 frames per second, or immerse themselves in their mission with ultrawide resolutions at up to 3840 x 1600 at 120 frames per second — plus many other popular formats including 3440 x 1440 and 2560 x 1080. 

GeForce NOW members can also take their games and save data with them wherever they go, from underpowered PCs to Macs, Samsung and LG TVs, mobile devices and Chromebooks.

Game On

Get ready to game: Six more games join the supported list in the GeForce NOW library this week:

  • Tom Clancy’s Ghost Recon: Breakpoint (New release on Steam, Jan. 23)
  • Oddballers (New release on Ubisoft Connect, Jan. 26)
  • Watch Dogs: Legion (New release on Steam, Jan. 26)
  • Cygnus Enterprises (Steam)
  • Rain World (Steam)
  • The Eternal Cylinder (Steam)

There’s only one question left to kick off a weekend full of gaming in the cloud. Let us know on Twitter or in the comments below.

Braced From Space: Startup Keeps Watchful Eye on Gas Pipeline Leaks Across the Globe

As its name suggests, Orbital Sidekick is creating technology that acts as a buddy in outer space, keeping an eye on the globe using satellites to help keep it safe and sustainable.

The San Francisco-based startup, a member of the NVIDIA Inception program, enables commercial and government users to optimize sustainable operations and security with hyperspectral intelligence — information collected from across the electromagnetic spectrum.

“Space-based hyperspectral intelligence basically breaks up the spectrum of light so it’s possible to see what’s happening at a chemical level without needing an aircraft,” said Kaushik Bangalore, vice president of payload engineering at Orbital Sidekick, or OSK.

Founded in 2016, OSK is among the first to use hyperspectral intelligence to detect hydrocarbon or gas leaks, some of the world’s most pressing energy issues: 6,000 U.S. pipeline incidents from 2002 to 2021 resulted in over $11 billion in damages.

“Previous industry-standard ways of detecting such issues were unreliable as they used small aircraft and pilots looking out the window for leaks, depending on the trained eye rather than sensors or other technologies,” said Bangalore.

OSK operates a constellation of satellites that collect hyperspectral imagery from space. That data is processed and analyzed in real time using the NVIDIA Jetson edge AI platform. Then, insights — like the type of leak at a GPS point, its size and its urgency — can be viewed on a screen by users of OSK’s SIGMA Monitor platform.

The technology accomplishes what a pilot would, but much more quickly, objectively and with higher accuracy, Bangalore said.

A methane leak detected by OSK technology.

Sustainable Operations

OSK technologies have so far monitored more than 20,000 kilometers of pipelines for various customers, according to Tushar Prabhakar, its founder and chief operating officer.

The platform has detected nearly 100 suspected methane leaks, 200 suspected liquid hydrocarbon leaks or contamination issues, and more than 300 intrusive events related to digging or construction activities, Prabhakar added. OSK helped eliminate the potential for these events to become serious energy crises.

OSK’s SIGMA Monitor platform dashboard.

“We’re taking hyperspectral intelligence to the finest commercial resolution that the world has ever seen to make the Earth a more sustainable place,” Bangalore said. “The biggest challenge with hyperspectral imagery is dealing with huge amounts of data, which can be up to 400x the size of 2D visual data. NVIDIA technology helps process this data in real time.”

OSK uses the NVIDIA Jetson AGX Xavier module as an AI engine at the satellites’ edge to process the hyperspectral data collected from various sensors and crunch algorithms for leak detection.

The module, along with the NVIDIA CV-CUDA and CUDA Python software toolkits, has sped up OSK’s analysis by 5x, according to Bangalore. This acceleration enhances the platform’s ability to detect and recognize anomalies from space — then project the data back to Earth.

“There are around 15 sun-synchronous orbits per day,” Bangalore said. “With NVIDIA Jetson AGX Xavier, we can process all the data taken onboard a satellite in an orbit within that same orbit, enabling continuous data capture.”
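
A rough feel for what that constraint implies, with assumed numbers (the per-orbit data volume here is illustrative, not an OSK figure):

```python
# Sun-synchronous orbit: ~15 orbits per day, so onboard processing must
# keep pace with capture within each ~96-minute window.
orbits_per_day = 15
seconds_per_orbit = 24 * 3600 / orbits_per_day       # ~5,760 s

raw_gb_per_orbit = 100                               # assumed capture volume
sustained_mb_s = raw_gb_per_orbit * 1024 / seconds_per_orbit
print(f"{seconds_per_orbit:.0f} s per orbit -> "
      f"~{sustained_mb_s:.0f} MB/s sustained onboard throughput")
```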

In 2018, OSK’s previous-generation system was launched on the International Space Station. Its data was analyzed using the NVIDIA Jetson TX2 module.

In addition, OSK uses the next-generation NVIDIA Jetson AGX Orin module for an aerial version of the platform that collects hyperspectral imagery from airplanes. Compared to the previous-generation module, the Jetson AGX Orin — with upgraded memory and speed — can run larger amounts of map data streamed in real time to pilots, Bangalore said.

“We chose the NVIDIA Jetson platform because it offers off-the-shelf products for industrial applications with extended shock, vibration and temperature, and software that has been optimized for the NVIDIA GPU architecture,” Bangalore said.

And as a member of NVIDIA Inception, a free, global program for cutting-edge startups, OSK received technical support to optimize the team’s use of such safety features and SDK acceleration.

Versatile Use Cases

Hyperspectral intelligence offers a multitude of applications. For this reason, the OSK platform is deployed across a broad range of customers, including the U.S. Department of Defense and the energy sector.

OSK’s GHOSt constellation of satellites.

Energy Transfer, a major pipeline operator, will use OSK’s GHOSt constellation for asset monitoring.

For the commercial oil and gas industry, OSK technology helps detect gas and hydrocarbon leaks, allowing pipeline operators to quickly halt work and fix issues.

To accelerate the energy transition, the platform can enhance exploration of lithium, cobalt and more, display a hyperspectral index of areas on a map that have signals of the elements, and differentiate between these materials and soil.

Creating sustainable supply chains for battery materials like lithium is key to advancing the global energy transition and scaling electric vehicle adoption, as lithium-ion batteries power the majority of EVs. The EV battery market is projected to reach over $218 billion in 2027, and EV sales are estimated to reach up to 50 million units by 2030.

“Our tech can help discover lithium, and prevent methane or greenhouse gasses from being let out into the atmosphere,” Bangalore said. “It’s a very direct impact, and it’s what the planet needs.”

Read more about innovative energy startups, including MinervaCQ, which is using speech AI to coach contact-center agents in retail energy, and Skycatch, which is building digital twins to make mining and construction sites safer, more efficient and sustainable.

Learn more about NVIDIA’s work in energy and apply to join NVIDIA Inception.

NVIDIA CEO Ignites AI Conversation in Stockholm

More than 600 entrepreneurs, developers, researchers and executives from across the Nordics flocked Tuesday to Stockholm’s sleek Sergel Hub conference center in a further sign of the strength of the region’s AI ecosystem.

The highlight: a far-reaching conversation between NVIDIA founder and CEO Jensen Huang and Swedish industrialist Marcus Wallenberg exploring the intersections of AI, green computing, and Scandinavia’s broader tech scene.

“This generative AI phenomenon is creating a whole slew of new startups, new ideas, new video editing, image editing, new text,” Huang said. “It can achieve capabilities that previous computing platforms cannot.”

The Berzelius supercomputer, named for Jöns Jacob Berzelius, one of the fathers of modern chemistry, has just been upgraded to 94 NVIDIA DGX A100 AI computing systems, delivering nearly half an exaflop of AI performance, placing it among the world’s 100 fastest AI supercomputers.

“Years ago, Marcus and I started talking about a new way of doing computer science. Having a key instrument, like Berzelius, would be a fundamental instrument of future science,” Huang told the audience. “The work that is done on this instrument would make tremendous impacts to life sciences, material sciences, physical sciences and computer science.”

Maximum Efficiency, Minimal Impact

The generation of electricity is one of the leading sources of the emissions driving global warming, and powerful, energy-efficient computers are crucial to fighting climate change through green computing.

Huang explained that whether for data centers or the latest smartphone, computer chips, systems and software must be designed and used to maximize energy efficiency and minimize environmental impact.

“Companies large and small have to sign up for the carbon footprint that we use to build the work that we do,” said Huang. “If there’s an opportunity for us to help accelerate workloads and reduce energy use and improve energy efficiency, we will.”

Sweden’s Role in AI

The upgrade comes as AI is powering change in every industry across the globe, with leaders from across the Nordics accelerating the growth of some of the world’s most powerful AI solutions, explained Wallenberg.

“From the perspective of the foundations, we’re trying to work for the betterment of Sweden by promoting the areas of research, technology and medicine,” said Wallenberg, whose family has for generations been deeply involved across the nation’s economy. “We are working together as a team to create possibilities and the foundations for more work to be done.”

The Berzelius system was used for training the first Swedish large language model. Increasing in size 10x every year for the last few years, large language models are just one state-of-the-art AI technology that promises transformation through learned knowledge.

Neural networks trained with massive datasets on powerful systems, LLMs are accelerating discoveries across industries such as healthcare and climate science with software frameworks like NVIDIA BioNeMo. Models like ChatGPT are making a name for themselves as a new way to use AI.

“You can connect models together to retrieve new information so that models like ChatGPT could report on the news today, who won that game, or the latest weather,” Huang said. “The combination of these capabilities means not only the ability to respond and answer questions and write stories, but it can also write programs and solve problems.”

Knowledge From Data

Solving problems requires reliable, physically accurate data. The industrial metaverse, where digital twins of real factories, rail networks or retail stores can be created, is already being used by large companies like Amazon, BMW, Ericsson and Siemens.

Following the conversation between Huang and Wallenberg, Staffan Truvé, CTO and co-founder of cybersecurity company Recorded Future, talked about how data can be used to model intelligence as a digital twin to get an end-to-end view of threats and targets.

“Today, there are three major converging threat areas. Physical, cyber and influence, which is the threat to our brains,” Truvé explained. “By creating an intelligence graph, we’re building a full picture of a threat.”

Digital twins are not the only way to gather valuable insights when developing for the future. Sara Mazur, vice executive director of the Knut and Alice Wallenberg Foundation and chair of the Wallenberg AI Autonomous Systems and Software Program, highlighted the importance of collaboration between academia and industry.

Supersizing AI: Sweden Turbocharges Its Innovation Engine

Sweden is outfitting its AI supercomputer for a journey to the cutting edge of machine learning, robotics and healthcare.

It couldn’t ask for a better guide than Anders Ynnerman (above). His signature blue suit, black spectacles and gentle voice act as calm camouflage for a pioneering spirit.

Early on, he showed a deep interest in space, but his career took a different direction. He established the country’s first network of supercomputing centers and went on to pioneer scientific visualization technologies used in hospitals and museums around the world.

Today, he leads Sweden’s largest research effort, WASP — the Wallenberg Artificial Intelligence, Autonomous Systems and Software Program — focused on AI innovation.

The Big Picture

“This is a year when people are turning their focus to sustainability challenges we face as a planet,” said the Linköping University professor. “Without advances in AI and other innovations, we won’t have a sustainable future.”

To supercharge environmental efforts and more, Sweden will upgrade its Berzelius supercomputer. Based on the NVIDIA DGX SuperPOD, it will deliver nearly half an exaflop of AI performance, placing it among the world’s 100 fastest AI supercomputers.

“A machine like Berzelius is fundamental not only for the results it delivers, but the way it catalyzes expertise in Sweden,” he said. “We’re a knowledge-driven nation, so our researchers and companies need access to the latest technology to compete.”

AI Learns Swedish

In June, the system trained GPT-SW3, a family of large language models capable of drafting a speech or answering questions in Swedish.

Today, a more powerful version sports 20 billion parameters, a popular measure of a neural network’s smarts. It can help developers write software and handle other complex tasks.

Long term, researchers aim to train a version with a whopping 175 billion parameters that’s also fluent in Nordic languages like Danish and Norwegian.

One of Sweden’s largest banks is already exploring use of the latest GPT-SW3 variant for a chatbot and other applications.

A Memory Boost

To build big AIs, Berzelius will add 34 NVIDIA DGX A100 systems to its cluster of 60 that makes up the SuperPOD. The new units will sport GPUs with 80GB of memory each.

Ynnerman with Berzelius at the system’s March 2021 launch.

“Having really fat nodes with large memory is important for some of these models,” Ynnerman said. Atos, the system integrator, is providing “a very smooth ride getting the whole process set up,” he added.

Seeking a Cure for Cancer

In healthcare, a data-driven life sciences program, funded by the Wallenberg Foundation, will be a major Berzelius user. The program spans 10 universities and will, among other applications, employ AI to understand protein folding, fundamentally important to understanding diseases like cancer.

Others will use Berzelius to improve detection of cancer cells and navigate the massive mounds of data in human genomes.

Some researchers are exploring tools such as NVIDIA Omniverse Avatar Cloud Engine and NVIDIA BotMaker to create animated patients. Powered by GPT-SW3, they could help doctors practice telemedicine skills.

Robots in Zero Gravity

Sweden’s work in image and video recognition will get a boost from Berzelius. Such algorithms advance work on the autonomous systems used in modern factories and warehouses.

One project is exploring how autonomous systems act in space and undersea. It’s a topic close to the heart of a recent addition to WASP, researcher Christer Fuglesang, who was named Sweden’s first astronaut in 1992.

Fuglesang went to the International Space Station in 2006 and 2008. Later, as a professor of physics at Sweden’s Royal Institute of Technology, he collaborated with Ynnerman on live shows about life in space, presented in the WISDOME dome theater at the Visualization Center C Ynnerman founded and directs.

Thanks to his expertise in visualization, “I can go to Mars whenever I want,” Ynnerman quipped.

He took NVIDIA founder and CEO Jensen Huang and Marcus Wallenberg — scion of Sweden’s leading industrial family — on a tour of outer space at the dome to mark the Berzelius upgrade. The dome can show the Martian surface in 8K resolution at 120 frames per second, thanks to its use of 12 NVIDIA Quadro RTX 8000 GPUs.

Inspiring the Next Generation

Ynnerman’s algorithms have touched millions who’ve seen visualizations of Egyptian mummies at the British Museum.

“That makes me even more proud than some of my research papers because many are young people we can inspire with a love for science and technology,” he said.

A passion for science and technology has attracted more than 400 active Ph.D. candidates so far to WASP, which is on the way to exceeding its goal of 600 grads by 2031.

But even a visualization specialist can’t be everywhere. So Ynnerman’s pet project will use AI to create a vibrant, virtual museum guide.

“I think we can provide more people a ‘wow’ experience — I want a copilot when I’m navigating the universe,” he said.

3D Artist Enters the Node Zone, Creating Alien Artifacts This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

Artist Ducky 3D creates immersive experiences through vibrant visuals and beautiful 3D environments in the alien-inspired animation Stylized Alien Landscape — this week In the NVIDIA Studio.

Ducky 3D is a modern Renaissance man who works with musicians from around the world, creating tour packages and visual art related to the music industry. As a 3D fanatic who specializes in Blender, he often guides emerging and advanced 3D artists to new creative heights. It’s no surprise that his weekly YouTube tutorials have garnered over 400,000 subscribers.

Stylized Alien Landscape is uniquely individualistic and was built entirely in Blender using geometry nodes, or geo nodes.

Geo nodes can add organic style and customization to Blender scenes and animation.

The use of geo nodes in Blender has recently skyrocketed. That’s because they make modeling an almost entirely procedural process — allowing for non-linear, non-destructive workflows and the instancing of objects — to create incredibly detailed scenes using small amounts of data. Geo nodes can also organically modify all types of geometry, including meshes, curves, volumes, instances and more. Many of these were edited in the making of Stylized Alien Landscape.

Ducky 3D opened a new scene, created a simple cube and applied several popular geo nodes, including random value, triangulate and dual mesh. By simple trial and error with numeric values, he was able to create a provocative, alien-inspired visual.
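
For readers who want to recreate that starting point, here’s a minimal sketch using Blender’s Python API (Blender 3.x), wiring just the triangulate and dual mesh nodes the workflow mentions. The object and group names are arbitrary, and the graph is deliberately bare:

```python
import bpy

# start from a simple cube, as in the tutorial
bpy.ops.mesh.primitive_cube_add()
obj = bpy.context.active_object

# attach a Geometry Nodes modifier with a fresh node tree
mod = obj.modifiers.new(name="GeoNodes", type='NODES')
tree = bpy.data.node_groups.new("AlienArtifact", 'GeometryNodeTree')
tree.inputs.new('NodeSocketGeometry', 'Geometry')
tree.outputs.new('NodeSocketGeometry', 'Geometry')
mod.node_group = tree

group_in = tree.nodes.new('NodeGroupInput')
group_out = tree.nodes.new('NodeGroupOutput')
tri = tree.nodes.new('GeometryNodeTriangulate')
dual = tree.nodes.new('GeometryNodeDualMesh')

# cube -> triangulate -> dual mesh -> output
tree.links.new(group_in.outputs['Geometry'], tri.inputs['Mesh'])
tree.links.new(tri.outputs['Mesh'], dual.inputs['Mesh'])
tree.links.new(dual.outputs['Dual Mesh'], group_out.inputs['Geometry'])
```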

“I use geometry nodes to take advantage of the dual mesh, which creates organic shapes when manipulated with simple deformations,” he said.

Ducky 3D’s GeForce RTX 4090 GPU ensured smooth movement in the viewport with virtually no noise.

Simply adding a transform node to the mix got the animation going. Ducky 3D then copied all nodes and scaled the duplicated render to create two animations rotating simultaneously.

Next, Ducky 3D turned his focus to lighting the object, selecting the Blender Cycles renderer to do so.

“Rendering lighting is drastically better in Cycles, but you do you,” he said with candor.

Blender Cycles RTX-accelerated OptiX ray tracing in the viewport unlocks interactive, photoreal rendering for modeling and animation work.

Ducky 3D applies shading nodes to “Stylized Alien Landscape.”

Here, Ducky 3D quickly created more realism in two ways: adding depth of field by playing with distance options and the flat shaded view, and bringing the background out of focus while keeping the object in focus.

Volume “just makes things look cool,” Ducky 3D added. Selecting the world and clicking principled volume made the scene nearly photorealistic.

With the help of geo nodes, Ducky 3D refined the texture to his desired effect, using the bump node, color ramp and noise texture.

For more on the making of Stylized Alien Landscape, check out the video below.

“I needed my viewport to perform well enough to see detail through the added volume,” he said. “Thank goodness for the AI-powered NVIDIA OptiX ray tracing API that my GeForce RTX 4090 GPU enables.”

Ducky 3D accomplished the slightly odd atmosphere that he wanted for his piece through the addition of fog.

“Fog is tough to render, and the GPU helped me see my viewport clearly,” he said.

3D artist Ducky 3D’s workstation.

For more Blender tutorials, check out Ducky 3D’s YouTube channel or the NVIDIA Studio Blender playlist.

Enter the #NewYearNewArt Challenge 

A new year comes with new art, and we’d love to see yours! Use the hashtag #NewYearNewArt and tag @NVIDIAStudio to show off your most recent creations for a chance to be featured on our channels.

There have been stunning animations like this lively work from the amazing @stillmanvisual.

There’s also explosive new content from @TheRealYarnHub featuring some action-packed, historically-based battles.

Catch even more #NewYearNewArt entries from other creators on the NVIDIA Studio Instagram stories.

Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.

Fresh AI on Security: Digital Fingerprinting Deters Identity Attacks

Add AI to the list of defenses against identity attacks, among the most common breaches and the hardest to prevent.

More than 40% of all data compromises involved stolen credentials, according to the 2022 Verizon Data Breach Investigations Report. And a whopping 80% of all web application breaches involved credential abuse.

“Credentials are the favorite data type of criminal actors because they are so useful for masquerading as legitimate users on the system,” the report said.

In today’s age of zero trust, security experts say it’s not a matter of if but when an organization will experience an identity attack.

A Response From R&D

The director of cybersecurity engineering and R&D at NVIDIA, Bartley Richardson, articulates the challenge simply.

“We need to look for when Bartley is not acting like Bartley,” he said.

Last year, his team described a concept called digital fingerprinting. In the wake of highly publicized attacks in February, he came up with a simple but ambitious idea for implementing it.

A Big Ask

He called a quick meeting with his two tech leads to share the idea. Richardson told them he wanted to create a deep learning model for every account, server, application and device on the network.

The models would learn individual behavior patterns and alert security staff when an account was acting in an uncharacteristic way. That’s how they would deter attacks.

The tech leads thought it was a crazy idea. It was computationally impossible, they told him, and no one was even using GPUs for security yet.

Richardson listened to their concerns and slowly convinced them it was worth a try. They would start with just a model for every account.

Everybody’s Problem

Security managers know it’s a big-data problem.

Companies collect terabytes of data on network events every day. That’s just a fraction of the petabytes of events a day companies could log if they had the resources, according to Daniel Rohrer, NVIDIA’s vice president of software product security.

The fact that it’s a big-data problem is also good news, Rohrer said in a talk at GTC in September (watch free with registration). “We’re already well on the way to combining our cybersecurity and AI efforts,” he said.

Starting With a Proof of Concept

By mid-March, Richardson’s team was focused on ways to run thousands of AI models in tandem. They used NVIDIA Morpheus, an AI security software library announced a year earlier, to build a proof of concept in two months.

Once an entire, albeit crude, product was done, they spent another two months optimizing each portion.

Then they reached out to about 50 NVIDIANs to review their work — security operations and product security teams, and IT folks who would be alpha users.

An Initial Deployment

Three months later, in early October, they had a solution NVIDIA could deploy on its global networks — security software for AI-powered digital fingerprinting.

The software is a kind of LEGO kit, an AI framework anyone can use to create a custom cybersecurity solution.

Version 2.0 is running across NVIDIA’s networks today on just four NVIDIA A100 Tensor Core GPUs. IT staff can create their own models, changing aspects of them to create specific alerts.
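
Under the hood, digital fingerprinting boils down to training a small model per account on that account’s normal behavior and flagging events it reconstructs poorly. Here’s an illustrative sketch of that idea with a tiny autoencoder; it’s a conceptual stand-in, not NVIDIA’s Morpheus implementation, and the feature counts are arbitrary:

```python
import torch
import torch.nn as nn

class FingerprintAE(nn.Module):
    """Tiny per-account autoencoder: trained only on that account's normal
    event features, so unusual behavior reconstructs badly."""
    def __init__(self, n_features=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 8), nn.ReLU(),
                                     nn.Linear(8, 4))
        self.decoder = nn.Sequential(nn.Linear(4, 8), nn.ReLU(),
                                     nn.Linear(8, n_features))
    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_score(model, events):
    # mean squared reconstruction error per event: higher = more suspicious
    with torch.no_grad():
        return ((events - model(events)) ** 2).mean(dim=1)

model = FingerprintAE()
normal = torch.randn(1000, 16)        # stand-in for one account's history
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                  # fit the account's normal behavior
    opt.zero_grad()
    loss = ((model(normal) - normal) ** 2).mean()
    loss.backward()
    opt.step()

print(anomaly_score(model, normal[:5]))      # low: typical behavior
print(anomaly_score(model, normal[:5] * 6))  # inflated features score higher
```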

Tested and Released

NVIDIA is making these capabilities available in a digital fingerprinting AI workflow included with NVIDIA AI Enterprise 3.0 announced in December.

For identity attackers, “the models Bartley’s team built have anomaly scores that are off the charts, and we’re able to visualize events so we can see things in new ways,” said Jason Recla, NVIDIA’s senior director of information security.

As a result, instead of facing a tsunami of 100 million network events a week, an IT team may have just 8-10 incidents to investigate daily. That cuts the time to detect certain attack patterns from weeks to minutes.

Tailoring AI for Small Events

The team already has big ideas for future versions.

“Our software works well on major identity attacks, but it’s not every day you have an incident like that,” Richardson said. “So, now we’re tuning it with other models to make it more applicable to everyday vanilla security incidents.”

Meanwhile, Richardson’s team used the software to create a proof of concept for a large consulting firm.

“They wanted it to handle a million records in a tenth of a second. We did it in a millionth of a second, so they’re fully on board,” Richardson said.

The Outlook for AI Security

Looking ahead, the team has ideas for applying AI and accelerated computing to secure digital identities and generate hard-to-find training data.

Richardson imagines passwords and multi-factor authentication will be replaced by models that know how fast a person types, with how many typos, what services they use and when they use them. Such detailed digital identities will prevent attackers from hijacking accounts and pretending they are legitimate users.

Data on network events is gold for building AI models that harden networks, but no one wants to share details of real users and break-ins. Synthetic data, generated by a variant of digital fingerprinting, could fill the gap, letting users create what they need to fit their use case.

In the meantime, Recla has advice security managers can act on now.

“Get up to speed on AI,” he said. “Start investing in AI engineering and data science skills — that’s the biggest thing.”

Digital fingerprinting is not a panacea. It’s one more brick in an ever-evolving digital wall that a community of security specialists is building against the next big attack.

You can try this AI-powered security workflow live on NVIDIA LaunchPad starting Jan. 23. And you can watch the video below to learn more about digital fingerprinting.
