Fresh AI on Security: Digital Fingerprinting Deters Identity Attacks


Add AI to the list of defenses against identity attacks, one of the most common and hardest types of breach to prevent.

More than 40% of all data compromises involved stolen credentials, according to the 2022 Verizon Data Breach Investigations Report. And a whopping 80% of all web application breaches involved credential abuse.

“Credentials are the favorite data type of criminal actors because they are so useful for masquerading as legitimate users on the system,” the report said.

In today’s age of zero trust, security experts say it’s not a matter of if but when an organization will experience an identity attack.

A Response From R&D

The director of cybersecurity engineering and R&D at NVIDIA, Bartley Richardson, articulates the challenge simply.

“We need to look for when Bartley is not acting like Bartley,” he said.

Last year, his team described a concept called digital fingerprinting. In the wake of highly publicized attacks in February, he came up with a simple but ambitious idea for implementing it.

A Big Ask

He called a quick meeting with his two tech leads to share the idea. Richardson told them he wanted to create a deep learning model for every account, server, application and device on the network.

The models would learn individual behavior patterns and alert security staff when an account was acting in an uncharacteristic way. That’s how they would deter attacks.
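The core of that idea can be sketched in a few lines: a minimal, hypothetical per-account baseline that flags out-of-character behavior. This is an illustration of the concept only, not NVIDIA’s Morpheus implementation.

```python
import math

class AccountFingerprint:
    """Running per-account behavioral baseline; flags events that deviate from it."""
    def __init__(self, threshold=3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0          # sum of squared deviations (Welford's algorithm)
        self.threshold = threshold

    def update(self, value):
        # Welford's online update for mean and variance
        self.n += 1
        delta = value - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (value - self.mean)

    def anomaly_score(self, value):
        if self.n < 2:
            return 0.0
        std = math.sqrt(self.m2 / (self.n - 1))
        return abs(value - self.mean) / std if std else 0.0

    def is_anomalous(self, value):
        return self.anomaly_score(value) > self.threshold

# One model per account: learn a typical login hour, then score a 3 a.m. login.
fp = AccountFingerprint()
for hour in [9, 10, 9, 11, 10, 9, 10, 11, 9, 10]:
    fp.update(hour)
print(fp.is_anomalous(3))   # True: 3 a.m. is far outside this account's baseline
print(fp.is_anomalous(10))  # False: typical behavior
```

A production system would model many features at once with deep learning rather than a single z-score, but the principle is the same: every account gets its own notion of “normal.”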

The tech leads thought it was a crazy idea. It was computationally impossible, they told him, and no one was even using GPUs for security yet.

Richardson listened to their concerns and slowly convinced them it was worth a try. They would start with just a model for every account.

Everybody’s Problem

Security managers know it’s a big-data problem.

Companies collect terabytes of data on network events every day. That’s just a fraction of the petabytes of events a day companies could log if they had the resources, according to Daniel Rohrer, NVIDIA’s vice president of software product security.

The fact that it’s a big-data problem is also good news, Rohrer said in a talk at GTC in September (watch free with registration). “We’re already well on the way to combining our cybersecurity and AI efforts,” he said.

Starting With a Proof of Concept

By mid-March, Richardson’s team was focused on ways to run thousands of AI models in tandem. They used NVIDIA Morpheus, an AI security software library announced a year earlier, to build a proof of concept in two months.

Once an entire, albeit crude, product was done, they spent another two months optimizing each portion.

Then they reached out to about 50 NVIDIANs to review their work — security operations and product security teams, and IT folks who would be alpha users.

An Initial Deployment

Three months later, in early October, they had a solution NVIDIA could deploy on its global networks — security software for AI-powered digital fingerprinting.

The software is a kind of LEGO kit, an AI framework anyone can use to create a custom cybersecurity solution.

Version 2.0 is running across NVIDIA’s networks today on just four NVIDIA A100 Tensor Core GPUs. IT staff can create their own models, changing aspects of them to create specific alerts.

Tested and Released

NVIDIA is making these capabilities available in a digital fingerprinting AI workflow included with NVIDIA AI Enterprise 3.0 announced in December.

For identity attackers, “the models Bartley’s team built have anomaly scores that are off the charts, and we’re able to visualize events so we can see things in new ways,” said Jason Recla, NVIDIA’s senior director of information security.

As a result, instead of facing a tsunami of 100 million network events a week, an IT team may have just 8-10 incidents to investigate daily. That cuts the time to detect certain attack patterns from weeks to minutes.

Tailoring AI for Small Events

The team already has big ideas for future versions.

“Our software works well on major identity attacks, but it’s not every day you have an incident like that,” Richardson said. “So, now we’re tuning it with other models to make it more applicable to everyday vanilla security incidents.”

Meanwhile, Richardson’s team used the software to create a proof of concept for a large consulting firm.

“They wanted it to handle a million records in a tenth of a second. We did it in a millionth of a second, so they’re fully on board,” Richardson said.

The Outlook for AI Security

Looking ahead, the team has ideas for applying AI and accelerated computing to secure digital identities and generate hard-to-find training data.

Richardson imagines passwords and multi-factor authentication will be replaced by models that know how fast a person types, with how many typos, what services they use and when they use them. Such detailed digital identities will prevent attackers from hijacking accounts and pretending they are legitimate users.
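One way to picture comparing such a behavioral identity against a live session is vector similarity. The feature names below are hypothetical, chosen only to illustrate the idea; they are not any real product’s schema.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical features: typing speed (chars/sec), typo rate,
# typical login hour, number of services touched per session.
stored_profile = [6.1, 0.03, 9.0, 4.0]    # learned over many sessions
legit_session  = [5.9, 0.04, 10.0, 4.0]   # close to the profile
hijacked       = [11.0, 0.0, 3.0, 12.0]   # fast, flawless typing at 3 a.m.

print(round(cosine(stored_profile, legit_session), 3))
print(round(cosine(stored_profile, hijacked), 3))
```

The legitimate session scores far closer to the stored profile than the hijacked one, which is the signal such a system would act on.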

Data on network events is gold for building AI models that harden networks, but no one wants to share details of real users and break-ins. Synthetic data, generated by a variant of digital fingerprinting, could fill the gap, letting users create what they need to fit their use case.

In the meantime, Recla has advice security managers can act on now.

“Get up to speed on AI,” he said. “Start investing in AI engineering and data science skills — that’s the biggest thing.”

Digital fingerprinting is not a panacea. It’s one more brick in an ever-evolving digital wall that a community of security specialists is building against the next big attack.

You can try this AI-powered security workflow live on NVIDIA LaunchPad starting Jan. 23. And you can watch the video below to learn more about digital fingerprinting.

Read More

Booked for Brilliance: Sweden’s National Library Turns Page to AI to Parse Centuries of Data


For the past 500 years, the National Library of Sweden has collected virtually every word published in Swedish, from priceless medieval manuscripts to present-day pizza menus.

Thanks to a centuries-old law that requires a copy of everything published in Swedish to be submitted to the library — also known as Kungliga biblioteket, or KB — its collections span from the obvious to the obscure: books, newspapers, radio and TV broadcasts, internet content, Ph.D. dissertations, postcards, menus and video games. It’s a wildly diverse collection of nearly 26 petabytes of data, ideal for training state-of-the-art AI.

“We can build state-of-the-art AI models for the Swedish language since we have the best data,” said Love Börjeson, director of KBLab, the library’s data lab.

Using NVIDIA DGX systems, the group has developed more than two dozen open-source transformer models, available on Hugging Face. The models, downloaded by up to 200,000 developers per month, enable research at the library and other academic institutions.

“Before our lab was created, researchers couldn’t access a dataset at the library — they’d have to look at a single object at a time,” Börjeson said. “There was a need for the library to create datasets that enabled researchers to conduct quantity-oriented research.”

With this, researchers will soon be able to create hyper-specialized datasets — for example, pulling up every Swedish postcard that depicts a church, every text written in a particular style or every mention of a historical figure across books, newspaper articles and TV broadcasts.

Turning Library Archives Into AI Training Data

The library’s datasets represent the full diversity of the Swedish language — including its formal and informal variations, regional dialects and changes over time.

“Our inflow is continuous and growing — every month, we see more than 50 terabytes of new data,” said Börjeson. “Between the exponential growth of digital data and ongoing work digitizing physical collections that date back hundreds of years, we’ll never be finished adding to our collections.”

The library’s archives include audio, text and video.

Soon after KBLab was established in 2019, Börjeson saw the potential for training transformer language models on the library’s vast archives. He was inspired by an early, multilingual, natural language processing model by Google that included 5GB of Swedish text.

KBLab’s first model used 4x as much — and the team now aims to train its models on at least a terabyte of Swedish text. The lab began experimenting by adding Dutch, German and Norwegian content to its datasets after finding that a multilingual dataset may improve the AI’s performance.

NVIDIA AI, GPUs Accelerate Model Development 

The lab started out using consumer-grade NVIDIA GPUs, but Börjeson soon discovered his team needed data-center-scale compute to train larger models.

“We realized we can’t keep up if we try to do this on small workstations,” said Börjeson. “It was a no-brainer to go for NVIDIA DGX. There’s a lot we wouldn’t be able to do at all without the DGX systems.”

The lab has two NVIDIA DGX systems from Swedish provider AddPro for on-premises AI development. The systems are used to handle sensitive data, conduct large-scale experiments and fine-tune models. They’re also used to prepare for even larger runs on massive, GPU-based supercomputers across the European Union — including the MeluXina system in Luxembourg.

“Our work on the DGX systems is critically important, because once we’re in a high-performance computing environment, we want to hit the ground running,” said Börjeson. “We have to use the supercomputer to its fullest extent.”

The team has also adopted NVIDIA NeMo Megatron, a PyTorch-based framework for training large language models, with NVIDIA CUDA and the NVIDIA NCCL library under the hood to optimize GPU usage in multi-node systems.

“We rely to a large extent on the NVIDIA frameworks,” Börjeson said. “It’s one of the big advantages of NVIDIA for us, as a small lab that doesn’t have 50 engineers available to optimize AI training for every project.”

Harnessing Multimodal Data for Humanities Research

In addition to transformer models that understand Swedish text, KBLab has an AI tool that transcribes sound to text, enabling the library to transcribe its vast collection of radio broadcasts so that researchers can search the audio records for specific content.

AI-enhanced databases are the latest evolution of library records, which were long stored in physical card catalogs.

KBLab is also starting to develop generative text models and is working on an AI model that could process videos and create automatic descriptions of their content.

“We also want to link all the different modalities,” Börjeson said. “When you search the library’s databases for a specific term, we should be able to return results that include text, audio and video.”

KBLab has partnered with researchers at the University of Gothenburg, who are developing downstream apps using the lab’s models to conduct linguistic research — including a project supporting the Swedish Academy’s work to modernize its data-driven techniques for creating Swedish dictionaries.

“The societal benefits of these models are much larger than we initially expected,” Börjeson said.

Images courtesy of Kungliga biblioteket

Read More

What Is AI Computing?


The abacus, sextant, slide rule and computer. Mathematical instruments mark the history of human progress.

They’ve enabled trade and helped navigate oceans, and advanced understanding and quality of life.

The latest tool propelling science and industry is AI computing.

AI Computing Defined

AI computing is the math-intensive process of calculating machine learning algorithms, typically using accelerated systems and software. It can extract fresh insights from massive datasets, learning new skills along the way.

It’s the most transformational technology of our time because we live in a data-centric era, and AI computing can find patterns no human could.

For example, American Express uses AI computing to detect fraud in billions of annual credit card transactions. Doctors use it to find tumors, spotting tiny anomalies in mountains of medical images.

Three Steps to AI Computing

Before getting into the many use cases for AI computing, let’s explore how it works.

First, users, often data scientists, curate and prepare datasets, a stage called extract/transform/load, or ETL. This work can now be accelerated on NVIDIA GPUs with Apache Spark 3.0, one of the most popular open source engines for mining big data.

Second, data scientists choose or design AI models that best suit their applications.

Some companies design and train their own models from the ground up because they are pioneering a new field or seeking a competitive advantage. This process requires some expertise and potentially an AI supercomputer, capabilities NVIDIA offers.

Machine learning operations (MLOps) describe in finer detail the three major steps of AI computing — ETL (top row), training (lower right) and inference (lower left).

Many companies choose pretrained AI models they can customize as needed for their applications. NVIDIA provides dozens of pretrained models and tools for customizing them on NGC, a portal for software, services, and support.

Third, companies sift their data through their models. This key step, called inference, is where AI delivers actionable insights.
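The three steps can be illustrated with a deliberately tiny, framework-free sketch. A real pipeline would use a tool like Spark for ETL and a deep learning framework for training, but the shape is the same:

```python
# Step 1 - ETL: extract raw records, transform them into clean numeric features.
raw = ["5.0,0", "6.1,0", "1.2,1", "0.9,1", "5.5,0", "1.0,1"]
data = [tuple(float(v) for v in row.split(",")) for row in raw]
features = [x for x, _ in data]
labels = [int(y) for _, y in data]

# Step 2 - Training: fit the simplest possible "model" -
# a threshold halfway between the two class means.
mean0 = sum(x for x, y in zip(features, labels) if y == 0) / labels.count(0)
mean1 = sum(x for x, y in zip(features, labels) if y == 1) / labels.count(1)
threshold = (mean0 + mean1) / 2

# Step 3 - Inference: sift new data through the model for insights.
def predict(x):
    # In this toy data, class 1 lies below the threshold
    return 1 if x < threshold else 0

print(predict(1.1))  # classifies as class 1
print(predict(5.8))  # classifies as class 0
```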

The three-step process involves hard work, but there’s help available, so everyone can use AI computing.

For example, NVIDIA TAO Toolkit can collapse the three steps into one using transfer learning, a way of tailoring an existing AI model for a new application without needing a large dataset. In addition, NVIDIA LaunchPad gives users hands-on training in deploying models for a wide variety of use cases.

Inside an AI Model

AI models are called neural networks because they’re inspired by the web-like connections in the human brain.

If you slice into one of these AI models, it might look like a mathematical lasagna, made up of layers of linear algebra equations. One of the most popular forms of AI is called deep learning because it uses many layers.

An example of a deep learning model that identifies an image. From an article on deep learning for the U.S. National Academy of Sciences. Image credit: Lucy Reading-Ikkanda (artist).

If you zoom in, you’d see each layer is made up of stacks of equations. Each represents the likelihood that one piece of data is related to another.

AI computing multiplies together every stack of equations in every layer to find patterns. It’s a huge job that requires highly parallel processors sharing massive amounts of data on fast computer networks.
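A single layer of that “mathematical lasagna” is just a matrix-vector multiply. Stacking two of them, with arbitrary illustrative weights, looks like this:

```python
def matvec(W, x):
    # One layer's linear algebra: multiply a weight matrix by an input vector.
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def relu(v):
    # A simple nonlinearity applied between layers
    return [max(0.0, u) for u in v]

# Two stacked layers: each is a weight matrix applied to the previous output.
W1 = [[0.5, -0.2], [0.1, 0.8]]
W2 = [[1.0, -1.0]]
x = [2.0, 3.0]

hidden = relu(matvec(W1, x))   # layer 1
output = matvec(W2, hidden)    # layer 2
print(output)
```

Real deep learning models stack dozens of such layers with millions of weights, which is why the highly parallel multiplication hardware matters.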

GPU Computing Meets AI

GPUs are the de facto engines of AI computing.

NVIDIA debuted the first GPU in 1999 to render 3D images for video games, a job that required massively parallel calculations.

GPU computing soon spread to use in graphics servers for blockbuster movies. Scientists and researchers packed GPUs into the world’s largest supercomputers to study everything from the chemistry of tiny molecules to the astrophysics of distant galaxies.

When AI computing emerged more than a decade ago, researchers were quick to embrace NVIDIA’s programmable platform for parallel processing. The video below celebrates this brief history of the GPU.

The History of AI Computing

The idea of artificial intelligence goes back at least as far as Alan Turing, the British mathematician who helped crack coded messages during WWII.

“What we want is a machine that can learn from experience,” Turing said in a 1947 lecture in London.

Alan Turing

Acknowledging his insights, NVIDIA named one of its computing architectures for him.

Turing’s vision became a reality in 2012 when researchers developed AI models that could recognize images faster and more accurately than humans could. Results from the ImageNet competition also greatly accelerated progress in computer vision.

Today, companies such as Landing AI, founded by machine learning luminary Andrew Ng, are applying AI and computer vision to make manufacturing more efficient. And AI is bringing human-like vision to sports, smart cities and more.

AI Computing Starts Up Conversational AI

AI computing made huge inroads in natural language processing after the invention of the transformer model in 2017. It debuted a machine-learning technique called “attention” that can capture context in sequential data like text and speech.
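Scaled dot-product attention, the heart of that technique, fits in a few lines. This single-head sketch omits the learned projections and multi-head machinery of a real transformer:

```python
import math

def softmax(v):
    exps = [math.exp(u - max(v)) for u in v]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d)) V."""
    d = len(K[0])
    out = []
    for q in Q:
        # How strongly this token attends to every other token
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        # Each output is a context-weighted mix of the value vectors
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# Three toy token embeddings; queries, keys and values share them here.
Q = K = V = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(attention(Q, K, V))
```

Each output row blends information from the whole sequence, which is how attention captures context in text and speech.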

Today, conversational AI is widespread. It parses sentences users type into search boxes. It reads text messages when you’re driving, and lets you dictate responses.

These large language models are also finding applications in drug discovery, translation, chatbots, software development, call center automation and more.

AI + Graphics Create 3D Worlds

Users in many, often unexpected, areas are feeling the power of AI computing.

The latest video games achieve new levels of realism thanks to real-time ray tracing and NVIDIA DLSS, which uses AI to deliver ultra-smooth game play on the GeForce RTX platform.

That’s just the start. The emerging field of neural graphics will speed the creation of virtual worlds to populate the metaverse, the 3D evolution of the internet.

Neural graphics accelerate design and development of virtual worlds to populate the metaverse, the 3D internet.

To kickstart that work, NVIDIA released several neural graphics tools in August.

Use Cases for AI Computing

Cars, Factories and Warehouses

Car makers are embracing AI computing to deliver a smoother, safer driving experience and smart infotainment capabilities for passengers.

Mercedes-Benz is working with NVIDIA to develop software-defined vehicles. Its upcoming fleets will deliver intelligent and automated driving capabilities powered by an NVIDIA DRIVE Orin centralized computer. The systems will be tested and validated in the data center using DRIVE Sim software, built on NVIDIA Omniverse, to ensure they can safely handle all types of scenarios.

At CES, the automaker announced it will also use Omniverse to design and plan manufacturing and assembly facilities at its sites worldwide.

BMW Group is also among many companies creating AI-enabled digital twins of factories in NVIDIA Omniverse, making plants more efficient. It’s an approach also adopted by consumer giants such as PepsiCo for its logistic centers as shown in the video below.

Inside factories and warehouses, autonomous robots further enhance efficiency in manufacturing and logistics. Many are powered by the NVIDIA Jetson edge AI platform and trained with AI in simulations and digital twins using NVIDIA Isaac Sim.

In 2022, even tractors and lawn mowers became autonomous with AI.

In December, Monarch Tractor, a startup based in Livermore, Calif., released an AI-powered electric vehicle to bring automation to agriculture. In May, Scythe, based in Boulder, Colo., debuted its M.52 (below), an autonomous electric lawn mower packing eight cameras and more than a dozen sensors.

Securing Networks, Sequencing Genes

The number and variety of use cases for AI computing are staggering.

Cybersecurity software detects phishing and other network threats faster with AI-based techniques like digital fingerprinting.

In healthcare, researchers broke a record in January 2022 sequencing a whole genome in well under eight hours thanks to AI computing. Their work (described in the video below) could lead to cures for rare genetic diseases.

AI computing is at work in banks, retail shops and post offices. It’s used in telecom, transport and energy networks, too.

For example, the video below shows how Siemens Gamesa is using AI models to simulate wind farms and boost energy production.

As today’s AI computing techniques find new applications, researchers are inventing newer and more powerful methods.

Another powerful class of neural networks, diffusion models, became popular in 2022 because they could turn text descriptions into fascinating images. Researchers expect these models will be applied to many uses, further expanding the horizon for AI computing.

Read More

AI’s Leg Up: Startup Accelerates Robotics Simulation for $8 Trillion Food Market


Robots are finally getting a grip.

Developers have been striving to close the gap on robotic gripping for the past several years, pursuing applications for multibillion-dollar industries. Securely gripping and transferring fast-moving items on conveyor belts holds vast promise for businesses.

Soft Robotics, a Bedford, Mass., startup, is harnessing NVIDIA Isaac Sim to help close the sim-to-real gap for a handful of robotic gripping applications. One area is perfecting gripping for pick and placement of foods for packaging.

Food packaging and processing companies are using the startup’s mGripAI system, which combines soft grasping with 3D vision and AI to grasp delicate foods such as proteins, produce and bakery items without damage.

“We’re selling the hands, the eyes and the brains of the picking solution,” said David Weatherwax, senior director of software engineering at Soft Robotics.

Unlike other industries that have adopted robotics, the $8 trillion food market has been slow to develop robots to handle variable items in unstructured environments, says Soft Robotics.

The company, founded in 2013, recently landed $26 million in Series C funding from Tyson Ventures, Marel and Johnsonville Ventures.

Companies such as Tyson Foods and Johnsonville are betting on adoption of robotic automation to help improve safety and increase production in their facilities. Both companies rely on Soft Robotics technologies.

Soft Robotics is a member of the NVIDIA Inception program, which provides companies with GPU support and AI platform guidance.

Getting a Grip With Synthetic Data

Soft Robotics develops unique models for every one of its gripping applications, each requiring specific datasets. And picking from piles of wet, slippery chicken and other foods can be a tricky challenge.


Utilizing Omniverse and Isaac Sim, the company can create 3D renderings of chicken parts with different backgrounds, like on conveyor belts or in bins, and with different lighting scenarios.

The company taps into Isaac Replicator to develop synthetic data, generating hundreds of thousands of images per model and distributing that among an array of instances in the cloud. Isaac Replicator is a set of tools, APIs and workflows for generating synthetic data using Isaac Sim.
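The idea behind that kind of synthetic data generation, often called domain randomization, can be sketched generically. This is illustrative only, not the Isaac Replicator API; all parameter names are hypothetical.

```python
import random

# Hypothetical scene parameters for a food-picking dataset
BACKGROUNDS = ["conveyor_belt", "bin", "tray"]
LIGHTING = ["overhead", "side", "dim", "glare"]

def random_scene(seed=None):
    """Produce one randomized scene description for rendering."""
    rng = random.Random(seed)
    return {
        "background": rng.choice(BACKGROUNDS),
        "lighting": rng.choice(LIGHTING),
        "num_pieces": rng.randint(1, 12),        # size of the pile
        "rotation_deg": rng.uniform(0, 360),     # pose of each piece
        "camera_height_m": rng.uniform(0.5, 1.5),
    }

# Hundreds of thousands of such scene descriptions, each rendered to an
# image with ground-truth labels, become a synthetic training set.
dataset = [random_scene(seed=i) for i in range(5)]
for scene in dataset:
    print(scene["background"], scene["lighting"], scene["num_pieces"])
```

Seeding each scene makes the dataset reproducible, which helps when debugging a model trained on purely synthetic images.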

It also runs pose estimation models to help its gripping system see the angle of the item to pick.

NVIDIA A100 Tensor Core GPUs on site enable Soft Robotics to run split-second inference with the unique models for each application in these food-processing facilities. Meanwhile, simulation and training in Isaac Sim offers access to NVIDIA A100 GPUs for scaling up workloads.

“Our current setup is fully synthetic, which allows us to rapidly deploy new applications,” said Weatherwax. “We’re all in on Omniverse and Isaac Sim, and that’s been working great for us.”

Solving Issues With Occlusion, Lighting 

A big challenge for Soft Robotics is occlusion: understanding how different pieces of chicken stack up and overlap one another when dumped into a pile. “How those form can be pretty complex,” Weatherwax said.


Glares on wet chicken can potentially throw off detection models. “A key thing for us is the lighting, so the NVIDIA RTX-driven ray tracing is really important,” he added.

But where it really gets interesting is modeling it all in 3D and figuring out in a split second which item is the least obstructed in a pile and most accessible for a robot gripper to pick and place.

Omniverse enables Soft Robotics to build such environments as synthetic datasets with physics-based accuracy. “One of the big challenges we have is how all these amorphous objects form into a pile,” Weatherwax said.
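One way to picture that split-second selection step is a simple scoring rule over detected pieces. This is a hypothetical illustration, not Soft Robotics’ actual algorithm:

```python
# Each candidate is a detected piece with an estimated occlusion fraction
# (how buried it is) and a detector confidence. Values are made up.
candidates = [
    {"id": 1, "occlusion": 0.70, "confidence": 0.95},  # badly buried
    {"id": 2, "occlusion": 0.10, "confidence": 0.90},  # nearly free
    {"id": 3, "occlusion": 0.30, "confidence": 0.60},
]

def pick_score(c):
    # Favor pieces that are least obstructed and confidently detected
    return (1.0 - c["occlusion"]) * c["confidence"]

best = max(candidates, key=pick_score)
print(best["id"])  # candidate 2: least obstructed with high confidence
```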

Boosting Production Line Pick Accuracy

Production lines in food processing plants can move fast. But robots deployed with application-specific models promise to handle as many as 100 picks per minute.

Still a work in progress, success in such tasks hinges on accurate representations of piles of items, supported by training datasets that consider every possible way items can fall into a pile.

The objective is to provide the robot with the best available pick from a complex and dynamic environment. If food items fall off the conveyor belt or otherwise become damaged, then it is considered waste, which directly impacts yield.

Driving Production Gains 

Meat-packing companies rely on lines of people for processing chicken, but like so many other industries they have faced employee shortages. Some that are building new plants for food processing can’t even attract enough workers at launch, said Weatherwax.

“They are having a lot of staffing challenges, so there’s a push to automate,” he said.

The Omniverse-driven work for food processing companies has delivered a more than 10x increase in the company’s simulation capacity, accelerating deployment times for AI picking systems from months to days.

And that’s enabling Soft Robotics customers to get a grip on more than just deploying automated chicken-picking lines — it’s ensuring that they’re covered for an employment challenge that has hit many industries, especially those with increased injury and health risks.

“Handling raw chicken is a job better suited for a robot,” he said.

Download Isaac Sim here to use the Replicator features.

Read More

The Ultimate Upgrade: GeForce RTX 4080 SuperPOD Rollout Begins Today


The Ultimate upgrade begins today: GeForce NOW RTX 4080 SuperPODs are now rolling out, bringing a new level of high-performance gaming to the cloud.

Ultimate members will start to see RTX 4080 performance in their region soon, and experience titles like Warhammer 40,000: Darktide, Cyberpunk 2077, The Witcher 3: Wild Hunt and more at ultimate quality. New features are also available now for Ultimate members streaming from RTX 3080 servers, and members can check GFN Thursday each week for availability updates in their regions.

Plus, get ready for 10 more supported games in the GeForce NOW library.

This Cloud Has an ‘Ultimate’ Lining

The GeForce NOW Ultimate membership brings new features and NVIDIA RTX technologies to the cloud for the first time, made possible by the NVIDIA Ada Lovelace GPU architecture.

Ultimate members receive three major streaming upgrades. The new RTX 4080 SuperPODs are capable of rendering and streaming at up to 240 frames per second. When paired with NVIDIA Reflex, that makes every moment of the action feel as if it’s on a desktop PC. And 4K gaming goes beyond fast with an upgrade to 120 fps, with support for DLSS 3 and RTX ON. Plus, for the first time, ultrawide resolutions are supported, giving members a wider point of view, at up to 3,840 x 1,600 resolution and 120 fps.

Coming to a zone near you.

Ultimate members in and around San Jose, Los Angeles, Dallas and Frankfurt, Germany, will be the first to experience the power of these RTX 4080 SuperPODs, starting today. Each week, GFN Thursday will spotlight the newest cities with upgraded servers, so make sure to check back each week to see which cities light up next on the map.

Even better: Starting today, Ultimate members streaming on RTX 3080 servers can take advantage of ultrawide resolutions and high dynamic range on the GeForce NOW PC and macOS apps. Learn more about supported resolutions and frame rates. Make sure you have the app v2.0.47.125 or later, and restart the app to see the new Ultimate features.

Don’t let this cloud pass you by — check it out and sign up. Get the Ultimate upgrade without paying the ultimate price — this highest-performance membership tier is only $19.99 per month or $99.99 for six months.

Game Related

The new year in Teyvat approaches in ‘The Exquisite Night Chimes’ update.

Celebrate the new year in Genshin Impact version 3.4, available this week. Players can explore Sumeru’s new sandstorm-ravaged desert with their favorite characters — and GeForce NOW members can play on the go with mobile touch controls.

Plus, 10 new games will be supported in the cloud this week:

Make this the ultimate weekend by playing these titles or any of the other 1,500 games in the GeForce NOW library. What game will you stream first on your Ultimate membership? Let us know in the comments or on Twitter.

Read More

Sequoia Capital’s Pat Grady and Sonya Huang on Generative AI


For insights into the future of generative AI, check out the latest episode of the NVIDIA AI Podcast. Host Noah Kravitz is joined by Pat Grady and Sonya Huang, partners at Sequoia Capital, to discuss their recent essay, “Generative AI: A Creative New World.”

The authors delve into the potential of generative AI to enable new forms of creativity and expression, as well as the challenges and ethical considerations of this technology.

Grady and Huang emphasize the potential of generative AI to revolutionize industries such as art, design and media by allowing for the creation of unique, personalized content on a scale that would be impossible for humans to achieve alone.

They also address the importance of considering the ethical implications of the technology, including the potential for biased or harmful outputs and the need for responsible use and regulation.

Listen to the full episode to hear more about the possibilities of generative AI and the considerations to be made as this technology moves forward.

You Might Also Like

Art(ificial) Intelligence: Pindar Van Arman Builds Robots That Paint

Pindar Van Arman, an American artist and roboticist, designs painting robots that explore the differences between human and computational creativity. Since his first system in 2005, he has built multiple artificially creative robots. The most famous, Cloud Painter, was awarded first place at Robotart 2018.

Real or Not Real? Attorney Steven Frank Uses Deep Learning to Authenticate Art

Steven Frank is a partner at the law firm Morgan Lewis, specializing in intellectual property and commercial technology law. He’s also half of the husband-wife team that used convolutional neural networks to authenticate artistic masterpieces, including da Vinci’s Salvator Mundi, with AI’s help.

GANTheftAuto: Harrison Kinsley on AI-Generated Gaming Environments

Humans playing games against machines is nothing new, but now computers can develop games for people to play. Programming enthusiast and social media influencer Harrison Kinsley created GANTheftAuto, an AI-based neural network that generates a playable chunk of the classic video game Grand Theft Auto V.

Subscribe to the AI Podcast: Now Available on Amazon Music

You can now listen to the AI Podcast through Amazon Music.

Also get the AI Podcast through Apple Music, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Read More

Roll Model: Smart Stroller Pushes Its Way to the Top at CES 2023


As any new mom or dad can tell you, parenting can be a challenge — packed with big worries and small hassles. But it may be about to get a little bit easier thanks to Glüxkind Technologies and its smart stroller, Ella.

The company has just been named a CES 2023 Innovation Awards Honoree for its AI-powered stroller, which was designed to make life easier for new parents and caregivers.

“People love the rock-a-baby feature and the push and brake assist,” said Glüxkind co-founder and CEO Kevin Huang of a product that’s become an instant sensation at the annual technology industry confab. “When you’re able to hold your child and have the stroller take care of itself, that’s a pretty magical moment.”

The story behind the product that’s made headlines around the world began three years ago when Huang and his co-founder, Anne Hunger, had a baby daughter and went stroller shopping.

And, like all parents, they learned about the challenges of wrangling a stroller packed with baby gear, as well as the safety concerns that send new parents shopping for the safest vehicles they can afford.

“I realized, ‘man, this stuff hasn’t changed in the last 30 years,’” Huang said.

Modern cars, for example, are equipped with systems that ensure they don’t roll backward when you’re stopped on a hill, Huang explained.

“So I thought maybe we can add some of the things already there for cars into this platform that actually carries our children, so we can have a safer and more convenient experience.”

The response from parents at CES was overwhelmingly positive. No surprise, given the in-depth research Huang and his team conducted with new parents.

But it’s also wowed tech enthusiasts worldwide, earning honors from the awards program produced by the Consumer Technology Association, the trade group behind the annual Las Vegas conference.

“We came to CES with the idea of announcing the product and getting maybe three to five writeups about what we were doing,” Huang said. “We didn’t expect the overwhelming amount of exposure we received.”

This year’s CES Innovation Awards program — overseen by an elite panel of judges, including media members, designers and engineers — received a record-high number of over 2,100 submissions, making it no small feat for Ella to come out on top.

AI-Powered Stroller Makes Parenting a Walk in the Park

Huang reports that NVIDIA’s Jetson edge AI platform powers the startup’s entire AI stack.

Glüxkind, based in Vancouver, Canada, is a member of NVIDIA Inception, a free program designed to help startups evolve faster through access to cutting-edge technology and NVIDIA experts, opportunities to connect with venture capitalists, and co-marketing support to heighten the company’s visibility.

With Jetson, Huang explains, the stroller uses computer vision to map its surroundings, relying on the module’s GPU and CPU for processing and pathfinding.

As a result, when the child isn’t in the stroller, parents can activate Ella’s intelligent hands-free strolling mode.

This advanced parent-assist technology helps parents focus on their kids rather than wrangling a child-free stroller packed with diapers, snacks and other supplies.

“It stays out of the way when you don’t need it, but it’s there when you do need it,” Huang said.
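Glüxkind hasn’t published its algorithms, but the map-then-plan step Huang describes can be sketched with a toy occupancy grid and a breadth-first search — a purely illustrative example, not the product’s actual code:

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search over an occupancy grid (0 = free, 1 = obstacle).

    Returns the list of (row, col) cells from start to goal, or None
    if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk the parent links back to the start to recover the path.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

# A toy 4x4 map with an obstacle wall the planner must route around.
grid = [
    [0, 0, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]
path = shortest_path(grid, (0, 0), (3, 0))
```

A real system would build the grid from camera and depth data and replan continuously, but the core idea — search a map of free space for a collision-free route — is the same.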

But while the stroller is intelligent — able to follow a caregiver as they hold a baby or help ensure the stroller doesn’t roll away on its own — it’s not designed to work independently.

Quite the opposite. With Ella’s adaptive push and brake assistance, caregivers can enjoy effortless walks no matter the terrain — uphill, downhill or even when fully loaded with groceries and toys.

Ella also has features that make parenting easier, such as Rock-My-Baby mode to help little ones get the sleep they need and built-in white noise playback.

“We’re trying to make it so the technology we’re building is augmentative to the parents’ experience to make parenting easier and safer,” Huang said.

The result: while parenting will never be a walk in the park, taking that newborn for an actual walk in the park will soon be a lot less of a hassle.

Image Credit: Glüxkind Technologies

Read More

Artist Zhelong Xu Brings Chinese Zodiac to Life for Lunar New Year This Week ‘In the NVIDIA Studio’
Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

To celebrate the upcoming Lunar New Year holiday, NVIDIA artist Zhelong Xu, aka Uncle Light, brought Chinese zodiac signs to life this week In the NVIDIA Studio — modernizing the ancient mythology in his signature style.

Chinese Tradition Brought to the 21st Century

NVIDIA principal artist Zhelong Xu is also co-founder of the Shanghai Magicstone Images studio and an art consultant for the Tencent TiMi Studio Group.

Xu is deeply passionate about modeling Chinese zodiac signs in 3D. His first serious attempt, Carefree Sheep, was chosen by Allegorithmic (now Adobe Substance 3D Painter) as the artwork on its first software launch screen.

‘Carefree Sheep’ by 3D artist Zhelong Xu.

Xu creates at least one piece for his zodiac series each year. Harboring a Cute Tiger is his most popular work, which reached over 16 million people on the Chinese social media app Weibo.

‘Harboring a Cute Tiger’ by Zhelong Xu.

“I had the idea to turn this series into ceramic works, so I continued working with my friends in Jingdezhen to turn this series into physical objects,” he said.

Zodiac piece for the Year of the Rabbit.

“I wanted to do something different in the Year of the Rabbit, so I chose to color the rabbit in NVIDIA green to match the classical Chinese atmosphere and to bring out the Chinese New Year energy,” said Xu, who joined NVIDIA last year.

The two emerald rabbits, one with its ears up and the other with them down, are designed to look like they’re brimming with anticipation for the arrival of Lunar New Year.

Xu deployed ZBrush for initial modeling with its custom sculpting tools. He then UV mapped the 3D model in preparation for applying a special emerald texture made in Adobe Substance 3D Painter. NVIDIA RTX-accelerated light- and ambient-occlusion features baked and optimized the scene assets in mere seconds, letting Xu experiment with textures quickly and easily with his GeForce RTX 4090 GPU.

Lighting adjustments in Blender.

The artist quickly exported files to Blender to set up the environment and tinker with lighting. He added many Eastern-style architectural and furniture options from the PBRMAX.com asset library.

High-quality 3D assets gathered from the PBRMAX.com asset library.

Movement within the viewport was seamless with Blender Cycles RTX-accelerated OptiX ray tracing for interactive, photorealistic modeling.

Xu then deployed his secret weapon: NVIDIA Omniverse, a platform for creating and operating metaverse applications. He saved files in Universal Scene Description (USD) format using the Omniverse export plug-in to import them into the NVIDIA Omniverse Create app for final modeling. Here, Xu made adjustments to the translucent emerald material to make it as realistic as possible.

USD format enables import into Omniverse Create.

Omniverse Create was incredibly useful for scene modifications, Xu said, as it enabled him to test lighting with his scene rendering in real time. This provided him with the most accurate iteration of final renders, allowing for more meaningful real-time edits.

“Thanks to the power of the GeForce RTX 4090 GPU and RTX optimization in Omniverse, I got the desired effect very quickly and tested a variety of lighting effects,” he said.

Final environmental edits in Omniverse Create.

Omniverse gives 3D artists their choice of renderer within the viewport, with support for Pixar HD Storm, Chaos V-Ray, Maxon’s Redshift, OTOY Octane, Blender Cycles and more. Xu deployed the unbiased NVIDIA Iray renderer to complete the project.

3D artist Zhelong Xu.

View more of Xu’s work on ArtStation.

#NewYearNewArt Challenge 

With a new year will come new art, and we’d love to see yours! Use the hashtag #NewYearNewArt and tag @NVIDIAStudio to show off recent creations for a chance to be featured on our channels.

The challenge is off to a great start:

Excellent artists like @rabbit.hole_renders have helped kick off the challenge with creativity that’s taking people to new worlds.  

Plus, get a dose of potassium with @graffitpl’s banana-based animation that comes with a side of mushrooms.

Keep your eyes peeled for more amazing submissions on the NVIDIA Studio Instagram stories.

Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.

Read More

NVIDIA and Dell Technologies Expand AI Portfolio
In their largest-ever joint AI initiative, NVIDIA and Dell Technologies today launched a wave of Dell PowerEdge systems available with NVIDIA acceleration, enabling enterprises to efficiently transform their businesses with AI.

A total of 15 next-generation Dell PowerEdge systems can draw from NVIDIA’s full AI stack — including GPUs, DPUs and the NVIDIA AI Enterprise software suite — providing enterprises the foundation required for a wide range of AI applications, including speech recognition, cybersecurity, recommendation systems and a growing number of groundbreaking language-based services.

The news was released at Dell’s PowerEdge .Next event, where NVIDIA founder and CEO Jensen Huang joined Dell Technologies founder and CEO Michael Dell in a fireside chat.

Celebrating a 25-year history of collaboration, the two CEOs looked at solving enterprise challenges through the lens of AI.

“As the amount of data in the world expands, the majority of information technology capacity is going to be in service of machine intelligence,” said Dell. “Building systems for AI first is a huge opportunity for Dell and NVIDIA to collaborate.”

“AI has the power to transform every business by accelerating automation across every industry,” said Huang. “Working closely with Dell Technologies, we’re able to reach organizations around the globe with a powerful, energy-efficient AI computing platform that will boost the IQ of modern enterprise.”

Energy-Efficient AI 

A key highlight among Dell’s portfolio is Dell PowerEdge systems featuring NVIDIA BlueField-2 DPUs.

BlueField data processing units can offload, accelerate and isolate the networking and operating system stacks of the data center, which means businesses using NVIDIA DPUs could cut data center energy use by close to 25%, potentially saving them millions of dollars in energy bills. Dell PowerEdge servers with NVIDIA BlueField DPUs optimize performance and efficiency for private, hybrid and multi-cloud deployments, including those running VMware vSphere.
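To see why a roughly 25% reduction can translate into millions of dollars, here’s a back-of-envelope estimate with entirely hypothetical numbers (fleet size, per-server draw and electricity price are assumptions, not figures from NVIDIA or Dell):

```python
# Hypothetical data center: 10,000 servers averaging 500 W each,
# running around the clock at $0.12 per kWh.
servers = 10_000
watts_per_server = 500
hours_per_year = 24 * 365
price_per_kwh = 0.12

annual_kwh = servers * watts_per_server * hours_per_year / 1000
annual_cost = annual_kwh * price_per_kwh   # total yearly energy bill
savings = annual_cost * 0.25               # the ~25% reduction cited above
```

Under these assumptions the fleet draws about 43.8 million kWh a year, costing roughly $5.3 million — so a 25% cut saves on the order of $1.3 million annually, and larger fleets scale accordingly.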

Additionally, systems featuring NVIDIA H100 GPUs have shown they are able to process data 25x more efficiently to deploy diverse AI models into production, and that NVIDIA-accelerated Dell PowerEdge servers are up to 300x more energy efficient for running inference on large language models — those exceeding 500 billion parameters — when compared to prior-generation non-accelerated servers.

Built First for AI

To help customers get their AI projects up and running fast, Dell PowerEdge servers accelerated with NVIDIA H100 GPUs come with a license for NVIDIA AI Enterprise software.

An end-to-end, secure, cloud-native suite of AI software, NVIDIA AI Enterprise streamlines the development and deployment of predictive AI and includes global enterprise support for a wide range of domain- and industry-specific workloads. NVIDIA AI Enterprise includes more than 50 frameworks and pretrained models as well as a set of AI workflows, all of which can help organizations speed time to deployment while reducing costs of production-ready AI.

NVIDIA AI frameworks included in NVIDIA AI Enterprise 3.0 are NVIDIA Clara Parabricks for genomics, MONAI for medical imaging, NVIDIA Morpheus for cybersecurity, NVIDIA Metropolis for intelligent video analytics, NVIDIA DeepStream for vision AI, NVIDIA Merlin for recommender systems, and many others.  Additionally, it includes new AI workflows for building contact center intelligent virtual assistants, multi-language audio transcriptions and digital fingerprinting for cybersecurity threat detection.

Enterprises can immediately experience NVIDIA AI Enterprise in dozens of hands-on labs at no charge on NVIDIA LaunchPad with new AI workflow labs expected to debut next week.

Read More

NVIDIA, Evozyne Create Generative AI Model for Proteins
Using a pretrained AI model from NVIDIA, startup Evozyne created two proteins with significant potential in healthcare and clean energy.

A joint paper released today describes the process and the biological building blocks it produced: one protein aims to cure a congenital disease, while the other is designed to consume carbon dioxide to reduce global warming.

Initial results show a new way to accelerate drug discovery and more.

“It’s been really encouraging that even in this first round the AI model has produced synthetic proteins as good as naturally occurring ones,” said Andrew Ferguson, Evozyne’s co-founder and a co-author of the paper. “That tells us it’s learned nature’s design rules correctly.”

A Transformational AI Model

Evozyne used NVIDIA’s implementation of ProtT5, a transformer model that’s part of NVIDIA BioNeMo, a software framework and service for creating AI models for healthcare.

“BioNeMo really gave us everything we needed to support model training and then run jobs with the model very inexpensively — we could generate millions of sequences in just a few seconds,” said Ferguson, a molecular engineer working at the intersection of chemistry and machine learning.

The model lies at the heart of Evozyne’s process called ProT-VAE. It’s a workflow that combines BioNeMo with a variational autoencoder that acts as a filter.

“Using large language models combined with variational autoencoders to design proteins was not on anybody’s radar just a few years ago,” he said.

Model Learns Nature’s Ways

Like a student reading a book, NVIDIA’s transformer model reads sequences of amino acids in millions of proteins. Using the same techniques neural networks employ to understand text, it learned how nature assembles these powerful building blocks of biology.
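In practice, “reading” a protein the way a language model reads text starts with tokenization: each amino acid becomes an integer ID, just as words or subwords do. This minimal sketch (not Evozyne’s or BioNeMo’s actual code, and the example sequence is made up) shows the idea:

```python
# Map each of the 20 standard amino acids to an integer token,
# the same way a language model maps words or subwords to IDs.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
VOCAB = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def tokenize(sequence):
    """Convert a protein sequence into a list of integer tokens."""
    return [VOCAB[aa] for aa in sequence]

# The first few residues of a hypothetical protein.
tokens = tokenize("MKTAYIAK")
```

From there, the transformer is trained on millions of such token sequences exactly as it would be on sentences, learning which residues tend to follow which in functional proteins.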

The model then predicted how to assemble new proteins suited for functions Evozyne wants to address.

“The technology is enabling us to do things that were pipe dreams 10 years ago,” he said.

A Sea of Possibilities

Machine learning helps navigate the astronomical number of possible protein sequences, then efficiently identifies the most useful ones.

The traditional method of engineering proteins, called directed evolution, uses a slow, hit-or-miss approach. It typically only changes a few amino acids in sequence at a time.

Evozyne’s ProT-VAE process uses a powerful transformer model in NVIDIA BioNeMo to generate useful proteins for drug discovery and energy sustainability.

By contrast, Evozyne’s approach can alter half or more of the amino acids in a protein in a single round. That’s the equivalent of making hundreds of mutations.
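The contrast between the two regimes is easy to see with a toy mutation function — a purely illustrative sketch with a made-up 32-residue sequence, not either method’s real machinery:

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def mutate(sequence, n_mutations, rng):
    """Substitute the amino acids at n randomly chosen positions."""
    seq = list(sequence)
    for pos in rng.sample(range(len(seq)), n_mutations):
        # Pick a replacement residue different from the current one.
        seq[pos] = rng.choice([aa for aa in AMINO_ACIDS if aa != seq[pos]])
    return "".join(seq)

rng = random.Random(0)
wild_type = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEV"  # made-up 32-residue protein

# Directed evolution changes only a few residues per round...
directed = mutate(wild_type, 2, rng)
# ...while a generative model can propose half the residues changed at once.
generative = mutate(wild_type, len(wild_type) // 2, rng)

def hamming(a, b):
    """Count positions where two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b))
```

Two substitutions keep the design close to the starting point; sixteen land it in a region of sequence space that incremental mutation would essentially never reach.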

“We’re taking huge jumps which allows us to explore proteins never seen before that have new and useful functions,” he said.

Using the new process, Evozyne plans to build a range of proteins to fight diseases and climate change.

Slashing Training Time, Scaling Models

“NVIDIA’s been an incredible partner on this work,” he said.

“They scaled jobs to multiple GPUs to speed up training,” said Joshua Moller, a data scientist at Evozyne. “We were getting through entire datasets every minute.”

That reduced the time to train large AI models from months to a week. “It allowed us to train models — some with billions of trainable parameters — that just would not be possible otherwise,” Ferguson said.

Much More to Come

The horizon for AI-accelerated protein engineering is wide.

“The field is moving incredibly quickly, and I’m really excited to see what comes next,” he said, noting the recent rise of diffusion models.

“Who knows where we will be in five years’ time.”

Sign up for early access to NVIDIA BioNeMo to see how it can accelerate your applications.

Read More