DiDi Chooses NVIDIA DRIVE for New Fleet of Self-Driving Robotaxis

Robotaxis are one major step closer to becoming reality.

DiDi Autonomous Driving, the self-driving technology arm of mobility technology leader Didi Chuxing, announced last month a strategic partnership with Volvo Cars on autonomous vehicles for DiDi’s self-driving test fleet.

Volvo’s autonomous drive-ready XC90 cars will be the first to integrate DiDi Gemini, a new self-driving hardware platform equipped with NVIDIA DRIVE AGX Pegasus. These vehicles will eventually be deployed in robotaxi services.

These self-driving test vehicles mark significant progress toward commercial robotaxi services.

Robotaxis are autonomous vehicles that can operate on their own in geofenced areas, such as cities or residential communities. With a set of high-resolution sensors and a supercomputing platform in place of a driver, they can safely operate 24 hours a day, seven days a week.

And as a safer alternative to current modes of transit, robotaxis are expected to draw quick adoption once deployed at scale, making up more than 5 percent of vehicle miles traveled worldwide by 2030.

With the high-performance, energy-efficient compute of NVIDIA DRIVE at their core, these vehicles developed by DiDi are poised to help accelerate this landmark transition.

Doubling Up on Redundancy

The key to DiDi’s robotaxi ambitions is its new self-driving hardware platform, DiDi Gemini.

Achieving fully autonomous vehicles requires centralized, high-performance compute. The amount of sensor data a robotaxi needs to process is 100x greater than that of today’s most advanced vehicles.

The complexity in software also increases exponentially, with an array of redundant and diverse deep neural networks running simultaneously as part of an integrated software stack.

Built on NVIDIA DRIVE AGX Pegasus, DiDi Gemini achieves 700 trillion operations per second (TOPS) of performance, and includes up to 50 high-resolution sensors and an ASIL-D rated fallback system. It is architected with multi-layered redundant protections to enhance the overall safety of the autonomous driving experience.

The Gemini platform was designed using Didi Chuxing’s massive database of ride-hailing data as well as real-world autonomous driving test data to deliver the optimal self-driving hardware experience.

A New Generation of Collaboration

DiDi’s test fleet also marks a new era in technology collaboration.

DiDi and Volvo Cars plan to build a long-term partnership, expanding the autonomous test fleets across China and the U.S. and scaling up commercial robotaxi operations. The NVIDIA DRIVE platform enables continuous improvement over the air, facilitating these future plans of development and expansion.

This collaboration combines long-held legacies in vehicle safety, ride-hailing expertise and AI computing to push the bounds of transportation technology for safer, more efficient everyday mobility.

The post DiDi Chooses NVIDIA DRIVE for New Fleet of Self-Driving Robotaxis appeared first on The Official NVIDIA Blog.

Read More

AI Slam Dunk: Startup’s Checkout-Free Stores Provide Stadiums Fast Refreshments

With live sports making a comeback, one thing remains a constant: Nobody likes to miss big plays while waiting in line for a cold drink or snack.

Zippin offers sports fans checkout-free refreshments, and it’s racking up wins among stadiums as well as retailers, hotels, apartments and offices. The startup, based in San Francisco, develops image-recognition models that run on the NVIDIA Jetson edge AI platform to help track customer purchases.

People can simply enter their credit card details into the company’s app, scan into a Zippin-driven store, grab a cold one and any snacks, and go. Their receipt is available in the app afterwards. Customers can also bypass the app and use a credit card to enter the store; Zippin automatically tracks their purchases and charges them.

“We don’t want fans to be stuck waiting in line,” said Motilal Agrawal, co-founder and chief scientist at Zippin.

As sports and entertainment venues begin to reopen in limited capacities, Zippin’s grab-and-go stores are offering quicker shopping and better social distancing without checkout lines.

Zippin is a member of NVIDIA Inception, a virtual accelerator program that helps startups in AI and data science get to market faster. “The Inception team met with us, loaned us our first NVIDIA GPU and gave us guidance on NVIDIA SDKs for our application,” Agrawal said.

Streak of Stadiums

Zippin has launched in three stadiums so far, all in the U.S. It’s in negotiations to develop checkout-free shopping for several other major sports venues in the country.

In March, the San Antonio Spurs’ AT&T Center reopened with limited capacity for the NBA season, unveiling a Zippin-enabled Drink MKT beverage store. Basketball fans can scan in with the Zippin mobile app or use their credit card, grab drinks and go. Cameras and shelves with scales identify purchases to automatically charge customers.

The debut in San Antonio comes after Zippin came to Mile High Stadium, in Denver, in November, for limited capacity Broncos games. Before that, Zippin unveiled its first stadium, the Golden 1 Center, in Sacramento. It allows customers to purchase popcorn, draft beer and other snacks and drinks and is open for Sacramento Kings basketball games and concerts.

“Our mission is to accelerate the adoption of checkout-free stores, and sporting venues are the ideal location to benefit from our approach,” Agrawal said.

Zippin Store Advances  

In addition to stadiums, Zippin has launched stores within stores for grab-and-go food and beverages in Lojas Americanas, a large retail chain in Brazil.

In Russia, the startup has put a store within a store inside an Azbuka Vkusa supermarket in Moscow. Zippin is also in Japan, where it has a pilot store in Tokyo with convenience store chain Lawson in an office location, plus another store within the Yokohama Techno Tower Hotel.

As an added benefit for retailers, Zippin’s platform can track products to help automate inventory management.

“We provide a retailer dashboard to see how much inventory there is for each individual item and which items have run low on stock. We can help to know exactly how much is in the store — all these detailed analytics are part of our offering,” Agrawal said.

Jetson Processing

Zippin relies on the NVIDIA Jetson AI platform for inference at 30 frames per second for its models, enabling split-second decisions on customer purchases. The application’s processing speed means it can keep up with a crowded store.

The company runs convolutional neural networks for product identification and store location identification to help track customer purchases. In Zippin’s retail implementations, stores also use smart shelves to determine whether a product was removed from or returned to a shelf.

The Jetson-powered edge AI platform can then process the shelf data and the video data together — sensor fusion — to determine almost instantly who grabbed what.

“It can deploy and work effectively on two out of three sensors (visual, weight and location) and then figure out the products on the fly, with training ongoing in action in deployment to improve the system,” said Agrawal.
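The two-out-of-three sensor logic Agrawal describes can be sketched in a few lines. The function below is purely illustrative (the names, threshold and catalog are invented for this example, not taken from Zippin's implementation), but it shows how a weight change from a smart shelf can confirm, or stand in for, a camera's prediction:

```python
def fuse(shelf_delta_grams, vision_label, catalog):
    """Combine a smart-shelf weight change with a vision model's label.

    shelf_delta_grams: negative when an item leaves the shelf.
    vision_label: product id predicted by the camera CNN, or None.
    catalog: product id -> unit weight in grams (hypothetical data).
    """
    if shelf_delta_grams >= 0:
        return None  # item returned or no change: nothing to charge
    taken = abs(shelf_delta_grams)
    # Prefer the vision prediction when its catalog weight matches the delta.
    if vision_label in catalog and abs(catalog[vision_label] - taken) <= 0.1 * taken:
        return vision_label
    # No usable vision label: fall back to the closest catalog weight,
    # i.e. the system still works on two of the three sensors.
    return min(catalog, key=lambda pid: abs(catalog[pid] - taken))

catalog = {"soda": 355, "chips": 60, "popcorn": 90}
print(fuse(-358, "soda", catalog))  # vision agrees with weight -> soda
print(fuse(-62, None, catalog))     # no vision label: nearest weight -> chips
```

The fallback branch is what lets a store keep charging correctly when one sensor modality is unavailable, with ongoing training improving the vision side over time.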

The post AI Slam Dunk: Startup’s Checkout-Free Stores Provide Stadiums Fast Refreshments appeared first on The Official NVIDIA Blog.

Read More

GFN Thursday Set to Evolve as Biomutant Comes to GeForce NOW on May 25

GeForce NOW is always evolving, and so is this week’s GFN Thursday. Biomutant, the new open-world action RPG from Experiment 101 and THQ Nordic, is coming to GeForce NOW when it releases on May 25.

Everybody Was Kung Fu Fighting

Biomutant puts you in the role of an anthropomorphic rodent with swords, guns and martial arts moves to explore a strange, new open world. Your kung fu creature will evolve as you play thanks to the game’s mutations system, granting new powers and resistances to help you on your journey.

Each mutation will change how you fight and survive, with combat that challenges you to build ever-increasing attack combinations. You’ll mix and match melee attacks, ranged weapons and Psi powers to emerge victorious.

As you evolve your warrior’s abilities, you’ll also change your in-game look. Players will craft new weapons and armor from items found during their adventures. For gamers who love customizing their build, Biomutant looks to be a dream game.

Adventure Across Your Devices

PC gamers have been eagerly anticipating Biomutant’s release, and now GeForce NOW members will be able to take their adventure with them at GeForce quality, across nearly all of their devices.

“It is great that PC gamers can play Biomutant even without relying on powerful hardware via GeForce NOW,” said Stefan Ljungqvist, art and creative director at Experiment 101.

“We love that GeForce NOW will help introduce even more players to this ever-evolving furry creature and its lush game world,” said Florian Emmerich, head of PR at THQ Nordic, Biomutant’s publisher. “Now players can discover everything we have in store for them, even if their PC isn’t the latest and greatest.”

Biomutant on GeForce NOW
There’s a gorgeous world to explore in Biomutant, and with GeForce NOW, you can explore it even on lower-powered devices.

From what we’ve seen so far, the world that Experiment 101 has built is lush and gorgeous. GeForce NOW members will get to explore it as it was meant to be seen, even on a Chromebook, Mac or mobile device.

How will you evolve when Biomutant releases on May 25? Let us know in the comments below.

Get Your Game On

What GFN Thursday would be complete without more games? Members can look forward to the following this week:

Returning to GeForce NOW:

  • The Wonderful 101: Remastered (Steam)

What are you planning to play this weekend? Let us know on Twitter or in the comments below.

The post GFN Thursday Set to Evolve as Biomutant Comes to GeForce NOW on May 25 appeared first on The Official NVIDIA Blog.

Read More

Keeping Games up to Date in the Cloud

GeForce NOW keeps your favorite games automatically up to date, so you never have to wait on game updates and patches. Simply log in, click PLAY and enjoy an optimal cloud gaming experience.

Here’s an overview on how the service keeps your library game ready at all times.

Updating Games for All GeForce NOW Members

When a gamer downloads an update on their PC, all that matters is their individual download.

In the cloud, the GeForce NOW team goes through the steps of patching or maintenance and makes the updated game available for millions of members around the world.

Depending on when you happen to hop on GeForce NOW, you may not even see these updates taking place. In some cases, you’ll be able to keep playing your game while we’re updating game bits. In others, we may need to complete the backend work before it’s ready to play.

Patching in GeForce NOW

Patching is the process of making changes to a game or its supporting data to update, fix or improve it.

It typically takes a few minutes, sometimes longer, for a game to patch on GeForce NOW.

When a developer releases a new version of a game, work is done on the backend of GeForce NOW to download that patch, replicate each bit across all of our storage systems, test for the proper security features and finally copy it out to all of our data centers worldwide, where it becomes available to gamers.

GeForce NOW handles this entire process so gamers and developers can focus on, well, gaming and developing.

Different Types of Patching 

Three types of patching occur on GeForce NOW:

  • On-seat patching allows you to play your game as-is while the patch downloads in the background, meaning you’re always game ready.
  • Offline patching happens for games that don’t support on-seat patching. Our automated software system downloads and applies the patch on the backend. Offline patching can take minutes to hours.
  • Distributed patching is a quicker type of patching: only the bits that changed are downloaded and installed, instead of the entire game (100 GB or so). We then finish updating and copy them onto the server. Typically this takes 30 minutes or less.
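The distributed patching idea, fetching only the bits that changed rather than the whole game, can be illustrated with a toy content-addressed diff. This sketch is hypothetical (a real system would chunk in megabytes and involve far more machinery), but the principle is the same: hash fixed-size chunks of old and new builds, then download only the chunks whose hashes differ.

```python
import hashlib

CHUNK = 4  # bytes per chunk; real systems use much larger chunks

def chunk_hashes(data: bytes):
    """SHA-256 of each fixed-size chunk of the build."""
    return [hashlib.sha256(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data), CHUNK)]

def changed_chunks(old: bytes, new: bytes):
    """Indices of chunks in the new build that must be downloaded."""
    old_h, new_h = chunk_hashes(old), chunk_hashes(new)
    return [i for i, h in enumerate(new_h)
            if i >= len(old_h) or old_h[i] != h]

old_build = b"AAAABBBBCCCC"
new_build = b"AAAAXXXXCCCCDDDD"
print(changed_chunks(old_build, new_build))  # [1, 3]
```

Here only chunk 1 (modified) and chunk 3 (newly appended) would be transferred, which is why a distributed patch can finish in a fraction of the time a full download takes.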

GeForce NOW continues to work with game developers to receive patch updates before they release on PC, allowing for real-time preparations in the cloud and zero wait for gamers, period.

Maintenance Mode Explained 

Maintenance mode is the status of a game that’s been taken offline for bigger fixes such as bugs that cause poor performance, issues with save games, or patches that need more time to deploy.

Depending on severity, a game in maintenance can be offline anywhere from days to weeks. We work to keep these maintenance times to a minimum, and often work directly with game developers to resolve these issues.

Eliminate Waiting for More Gaming

Our goal is to do all of the patching and maintenance behind the scenes — so that when you’re ready to play, you’re game ready and playing instantly.

Get started with your gaming adventures on GeForce NOW.

Follow GeForce NOW on Facebook and Twitter and stay up to date on the latest features and game launches. 

The post Keeping Games up to Date in the Cloud appeared first on The Official NVIDIA Blog.

Read More

Create in Record Time with New NVIDIA Studio Laptops from Dell, HP, Lenovo, Gigabyte, MSI and Razer

New NVIDIA Studio laptops from Dell, HP, Lenovo, Gigabyte, MSI and Razer were announced today as part of the record-breaking GeForce laptop launch. The new Studio laptops are powered by GeForce RTX 30 Series and NVIDIA RTX professional laptop GPUs, including designs with the new GeForce RTX 3050 Ti and 3050 laptop GPUs, and the latest 11th Gen Intel mobile processors.

GeForce RTX 3050 Ti and 3050 Studio laptops are perfect for graphic designers, photographers and video editors, bringing high performance and affordable Studio laptops to artists and students.

The NVIDIA Broadcast app has been updated to version 1.2, bringing new room echo removal and video noise removal features, updated general noise removal and the ability to stack multiple effects for NVIDIA RTX users. The update can be downloaded here.

And it’s all supported by the May NVIDIA Studio Driver, also available today.

Creating Made Even Easier

With the latest NVIDIA Ampere architecture, Studio laptops accelerate creative workflows with real-time ray tracing, AI and dedicated video acceleration. Creative apps run faster than ever, taking full advantage of new AI features that save time and enable all creators to apply effects that previously were limited to the most seasoned experts.

The new line of NVIDIA Studio laptops introduces a wider range of options, making finding the perfect system easier than ever.

  • Creative aficionados who are into photography, graphic design or video editing can do more, faster, with new GeForce RTX 3050 Ti and 3050 laptops, and RTX A2000 professional laptops. They introduce AI acceleration, best-in-class video hardware encoding and GPU acceleration in hundreds of apps. With reduced power consumption and 14-inch designs as thin as 16mm, they bring RTX to the mainstream, making them perfect for students and creators on the go.
  • Advanced creators can step up to laptops powered by GeForce RTX 3070 and 3060 laptop GPUs or NVIDIA RTX A4000 and A3000 professional GPUs. They offer greater performance in up to 6K video editing and 3D rendering, providing great value in elegant Max-Q designs that can be paired with 1440p displays, widely available in laptops for the first time.
  • Expert creators will enjoy the power provided by the GeForce RTX 3080 laptop GPU, available in two variants, with 8GB or 16GB of video memory, or the NVIDIA RTX A5000 professional GPU, with 16GB of video memory. The additional memory is perfect for working with large 3D assets or editing 8K HDR RAW videos. At 16GB, these laptops provide creators working across multiple apps with plenty of memory to ensure these apps run smoothly.

The laptops are powered by third-generation Max-Q technologies. Dynamic Boost 2.0 intelligently shifts power between the GPU, GPU memory and CPU to accelerate apps, improving battery life. WhisperMode 2.0 controls the acoustic volume for the laptop, using AI-powered algorithms to dynamically manage the CPU, GPU and fan speeds to deliver quieter acoustics. For 3D artists, NVIDIA DLSS 2.0 utilizes dedicated AI processors on RTX GPUs called Tensor Cores to boost frame rates in real-time 3D applications such as D5 Render, Unreal Engine 4 and NVIDIA Omniverse.

Meet the New NVIDIA Studio Laptops 

Thirteen new Studio laptops were introduced today, including:

  • Nine new models from Dell: the professional-grade Precision 5560, 5760, 7560 and 7760; creator dream team XPS 15 and XPS 17; redesigned Inspiron 15 Plus and Inspiron 16 Plus; and the small business-ready Vostro 7510. The Dell Precision 5560 and XPS 15 debut with elegant, thin, world-class designs featuring creator-grade panels.
  • HP debuts the updated ZBook Studio G8, the world’s most powerful laptop of its size, featuring an incredible DreamColor display with a 120Hz refresh rate and a wide array of configuration options including GeForce RTX 3080 16GB and NVIDIA RTX A5000 laptop GPUs.
  • Lenovo introduced the IdeaPad 5i Pro, with a factory-calibrated, 100 percent sRGB panel, available in 14- and 16-inch configurations with GeForce RTX 3050, as well as the ThinkBook 16p, powered by GeForce RTX 3060.

Gigabyte, MSI and Razer also refreshed their Studio laptops, originally launched earlier this year, with new Intel 11th Gen CPUs, including the Gigabyte AERO 15 OLED and 17 HDR, MSI Creator 15 and Razer Blade 15.

The Proof is in the Perf’ing

The Studio ecosystem is flush with support for top creative apps. In total, more than 60 have RTX-specific benefits.

GeForce RTX 30 Series and NVIDIA RTX professional Studio laptops save time (and money) by enabling creators to complete creative tasks faster.

Video specialists can expect to edit 3.1x faster in Adobe Premiere Pro on Studio laptops with a GeForce RTX 3050 Ti, and 3.9x faster with an RTX 3060, compared to CPU alone.

Studio laptops shave hours off a single project by reducing time in playback, unlocking GPU-accelerated effects in real-time video editing and frame rendering, and faster exports with NVIDIA encoding.

Color grading in Blackmagic Design’s DaVinci Resolve, and editing using features such as face refinement and optical flow, are 6.8x faster with a GeForce RTX 3050 Ti than on CPU alone.

Edits that took 14 minutes with an RTX 3060 would have taken 2 hours with just the CPU.

3D artists working with Blender who are equipped with a laptop featuring a GeForce RTX 3080-powered system can render an astonishing 24x faster than CPU alone.

A heavy scene that would take 1 hour to render on a MacBook Pro 16 takes only 8 minutes on an RTX 3080.

Adobe Photoshop Lightroom completes Enhance Details on RAW photos 3.7x faster with a GeForce RTX 3050 Ti, compared to an 11th Gen Intel i7 CPU, while Adobe Illustrator users can zoom and pan canvases twice as fast with an RTX 3050.

Regardless of your creative field, Studio laptops with GeForce RTX 30 Series and RTX professional laptop GPUs will speed up your workflow.

May Studio Driver and Creative App Updates

Two popular creator applications added Tensor Core support this month, accelerating workflows. Both, along with the new Studio laptops, are supported by the latest Studio Driver.

Topaz Labs Gigapixel enhances imagery up to 600 percent while maintaining impressive original image quality.

Video Enhance AI is a collection of upscaling, denoising and restoration features.

With Studio Driver support, both Topaz apps are at least 6x faster with a GeForce RTX 3060 than with a CPU alone.

Recent NVIDIA Omniverse updates include new app and connector betas, available to download now.

Omniverse Machinima offers a suite of tools and extensions that enable users to render realistic graphics and animation using scenes and characters from games. Omniverse Audio2Face creates realistic facial expressions and motions to match any voice-over track.

AVerMedia integrated the NVIDIA Broadcast virtual background and audio noise removal AI features natively into their software suite to improve broadcasting abilities without requiring special equipment.

Download the May Studio Driver (release 462.59) today through GeForce Experience or from the driver download page to get the latest optimizations for the new Studio laptops and applications.

Get regular updates for creators by subscribing to the NVIDIA Studio newsletter and following us on Facebook, Twitter and Instagram.

The post Create in Record Time with New NVIDIA Studio Laptops from Dell, HP, Lenovo, Gigabyte, MSI and Razer appeared first on The Official NVIDIA Blog.

Read More

Sharpening Its Edge: U.S. Postal Service Opens AI Apps on Edge Network

In 2019, the U.S. Postal Service needed to identify and track items in its torrent of more than 100 million pieces of daily mail.

A USPS AI architect had an idea. Ryan Simpson wanted to expand an image analysis system a postal team was developing into something much broader that could tackle this needle-in-a-haystack problem.

With edge AI servers strategically located at its processing centers, he believed USPS could analyze the billions of images each center generated. The resulting insights, expressed in a few key data points, could be shared quickly over the network.

Simpson, half a dozen architects at NVIDIA and others designed the deep-learning models needed in a three-week sprint that felt like one long hackathon. The work was the genesis of the Edge Computing Infrastructure Program (ECIP, pronounced EE-sip), a distributed edge AI system that’s up and running on the NVIDIA EGX platform at USPS today.

An AI Platform at the Edge

It turns out edge AI is a kind of stage for many great performances. ECIP is already running a second app that acts like automated eyes, tracking items for a variety of business needs.

USPS camera gantry
Cameras mounted on the sorting machines capture addresses, barcodes and other data such as hazardous materials symbols. Courtesy of U.S. Postal Service.

“It used to take eight or 10 people several days to track down items, now it takes one or two people a couple hours,” said Todd Schimmel, the manager who oversees USPS systems including ECIP, which uses NVIDIA-Certified edge servers from Hewlett-Packard Enterprise.

Another analysis was even more telling. It said a computer vision task that would have required two weeks on a network of servers with 800 CPUs can now get done in 20 minutes on the four NVIDIA V100 Tensor Core GPUs in one of the HPE Apollo 6500 servers.

Today, each edge server processes 20 terabytes of images a day from more than 1,000 mail processing machines. Open source software from NVIDIA, the Triton Inference Server, acts as the digital mailperson, delivering the AI models each of the 195 systems needs — when and how they need them.
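That "when and how they need them" matching can be pictured as a routing table from a system's hardware to a compatible model build. The sketch below is an assumption-laden illustration, not Triton's actual API or USPS's configuration; the model names, GPU keys and frameworks are invented for the example:

```python
# Hypothetical registry mapping (gpu_architecture, framework) to a model build.
MODELS = {
    ("volta", "tensorrt"): "track_trt_volta.plan",
    ("turing", "tensorrt"): "track_trt_turing.plan",
    ("any", "onnx"): "track_generic.onnx",
}

def pick_model(gpu_arch: str, framework: str) -> str:
    """Return the best build for a system: an exact GPU match first,
    then a portable fallback that runs anywhere."""
    for key in ((gpu_arch, framework), ("any", framework)):
        if key in MODELS:
            return MODELS[key]
    raise LookupError(f"no model build for {gpu_arch}/{framework}")

print(pick_model("volta", "tensorrt"))  # exact match for a V100 system
print(pick_model("ampere", "onnx"))     # falls back to the portable build
```

An inference server automates this kind of dispatch across every node, which is what saves operators from hand-managing builds on nearly 200 distributed servers.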

Next App for the Edge

USPS put out a request for what could be the next app for ECIP, one that uses optical character recognition (OCR) to streamline its imaging workflow.

“In the past, we would have bought new hardware, software — a whole infrastructure for OCR; or if we used a public cloud service, we’d have to get images to the cloud, which takes a lot of bandwidth and has significant costs when you’re talking about approximately a billion images,” said Schimmel.

Today, the new OCR use case will live as a deep learning model in a container on ECIP managed by Kubernetes and served by Triton.

The same systems software smoothed the initial deployment of ECIP in the early weeks of the pandemic. Operators rolled out containers to get the first systems running as others were being delivered, updating them as the full network of nearly 200 nodes was installed.

“The deployment was very streamlined,” Schimmel said. “We awarded the contract in September 2019, started deploying systems in February 2020 and finished most of the hardware by August — the USPS was very happy with that,” he added.

Triton Expedites Model Deliveries

Part of the software magic dust under ECIP’s hood, Triton automates the delivery of different AI models to different systems that may have different versions of GPUs and CPUs supporting different deep-learning frameworks. That saves a lot of time for edge AI systems like the ECIP network of almost 200 distributed servers.

NVIDIA DGX servers at USPS
AI algorithms were developed on NVIDIA DGX servers at a U.S. Postal Service Engineering facility. Courtesy of NVIDIA.

The app that checks for mail items alone requires coordinating the work of more than a half dozen deep-learning models, each checking for specific features. And operators expect to enhance the app with more models enabling more features in the future.

“The models we have deployed so far help manage the mail and the Postal Service — it helps us maintain our mission,” Schimmel said.

A Pipeline of Edge AI Apps

So far, departments across USPS from enterprise analytics to finance and marketing have spawned ideas for as many as 30 applications for ECIP. Schimmel hopes to get a few of them up and running this year.

One would automatically check if a package carries the right postage for its size, weight and destination. Another one would automatically decipher a damaged barcode and could be online as soon as this summer.

“This has a benefit for us and our customers, letting us know where a specific parcel is at — it’s not a silver bullet, but it will fill a gap and boost our performance,” he said.

The work is part of a broader effort at USPS to explore its digital footprint and unlock the value of its data in ways that benefit customers.

“We’re at the very beginning of our journey with edge AI. Every day, people in our organization are thinking of new ways to apply machine learning to new facets of robotics, data processing and image handling,” he said.

Learn more about the benefits of edge computing and the NVIDIA EGX platform, as well as how NVIDIA’s edge AI solutions are transforming every industry.

Pictured at top: Postal Service employees perform spot checks to ensure packages are properly handled and sorted. Courtesy of U.S. Postal Service.

The post Sharpening Its Edge: U.S. Postal Service Opens AI Apps on Edge Network appeared first on The Official NVIDIA Blog.

Read More

GFN Thursday: 61 Games Join GeForce NOW Library in May

May’s shaping up to be a big month for bringing fan-favorites to GeForce NOW. And since it’s the first week of the month, this week’s GFN Thursday is all about the games members can look forward to joining the service this month.

In total, we’re adding 61 games to the GeForce NOW library in May, including 17 coming this week.

Joining This Week

This week’s additions include games from Remedy Entertainment, a classic Wild West FPS and a free title on Epic Games Store. Here are a few highlights:

Alan Wake on GeForce NOW

Alan Wake (Steam)

A Dark Presence stalks the small town of Bright Falls, pushing Alan Wake to the brink of sanity in his fight to unravel the mystery and save his love.

Call of Juarez: Gunslinger on GeForce NOW

Call of Juarez: Gunslinger (Steam)

From the dust of a gold mine to the dirt of a saloon, Call of Juarez Gunslinger is a real homage to Wild West tales. Live the epic and violent journey of a ruthless bounty hunter on the trail of the West’s most notorious outlaws.

Pine (Free on Epic Games Store until May 13)

An open-world action-adventure game set in a simulated world in which humans never reached the top of the food chain. Fight with or against a variety of species as you make your way to a new home for your human tribe.

Members can also look for the following titles later today:

MotoGP21 on GeForce NOW
Push your bike to the limit in MotoGP21, joining the GeForce NOW library this week.

More in May

This week is just the beginning. We have a giant list of titles joining GeForce NOW throughout the month, including:

In Case You Missed It

In April, we added 27 more titles than shared on April 1. Check out these games, streaming straight from the cloud:

Time to start planning your month, members. What are you going to play? Let us know on Twitter or in the comments below.

The post GFN Thursday: 61 Games Join GeForce NOW Library in May appeared first on The Official NVIDIA Blog.

Read More

Putting the AI in Retail: Walmart’s Grant Gelven on Prediction Analytics at Supercenter Scale

With only one U.S. state without a Walmart supercenter — and over 4,600 stores across the country — the retail giant’s prediction analytics operate on data at enormous scale.

Grant Gelven, a machine learning engineer at Walmart Global Tech, joined NVIDIA AI Podcast host Noah Kravitz for the latest episode of the AI Podcast.

Gelven spoke about the big data and machine learning methods making it possible to improve everything from the customer experience to stocking to item pricing.

Gelven’s most recent project has been a dynamic pricing system, which reduces excess food waste by pricing perishable goods at a cost that ensures they’ll be sold. This improves suppliers’ ability to deliver the correct volume of items, the customers’ ability to purchase, and lessens the company’s impact on the environment.
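As a toy illustration of the idea (not Walmart's actual model; the formula and parameters here are invented), a markdown rule might discount an item in proportion to how much stock exceeds the demand expected before it expires:

```python
def markdown_price(base_price, days_to_expiry, daily_demand, stock):
    """Discount more aggressively when stock exceeds the units we expect
    to sell before expiry, so goods sell instead of becoming waste."""
    sellable = daily_demand * max(days_to_expiry, 0)
    if stock <= sellable:
        return round(base_price, 2)  # expected to sell out: no markdown
    # Linear markdown proportional to the projected surplus, capped at 50% off.
    surplus_ratio = min((stock - sellable) / stock, 0.5)
    return round(base_price * (1 - surplus_ratio), 2)

print(markdown_price(4.00, 3, 10, 20))  # stock fits demand: full price 4.0
print(markdown_price(4.00, 1, 10, 40))  # large surplus: capped at 2.0
```

A production system replaces the fixed `daily_demand` input with the output of the large learned demand models the episode describes, which is where the hundreds of millions of parameters come in.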

The models that Gelven’s team works on are extremely large, with hundreds of millions of parameters. They’re impossible to run without GPUs, which help accelerate dataset preparation and training.

The improvements that machine learning has made to Walmart’s retail predictions reach further than streamlining business operations. Gelven points out that it has ultimately helped customers worldwide get the essential goods they need, by allowing enterprises to react to crises and changing market conditions.

Key Points From This Episode:

  • Gelven’s goal for enterprise AI and machine learning models isn’t just to solve single use case problems, but to improve the entire customer experience through a complex system of thousands of models working simultaneously.
  • Five years ago, the time from concept to model to operations took roughly a year. Gelven explains that GPU acceleration, open-source software, and various other new tools have drastically reduced deployment times.

Tweetables:

“Solving these prediction problems really means we have to be able to make predictions about hundreds of millions of distinct units that are distributed all over the country.” — Grant Gelven [3:17]

“To give customers exactly what they need when they need it, I think is probably one of the most important things that a business or service provider can do.” — Grant Gelven [16:11]

You Might Also Like:

Focal Systems Brings AI to Grocery Stores

CEO Francois Chaubard explains how Focal Systems is applying deep learning and computer vision to automate portions of retail stores to streamline store operations and get customers in and out more efficiently.

Credit Check: Capital One’s Kyle Nicholson on Modern Machine Learning in Finance

Kyle Nicholson, a senior software engineer at Capital One, talks about how modern machine learning techniques have become a key tool for financial and credit analysis.

HP’s Jared Dame on How AI, Data Science Are Driving Demand for Powerful New Workstations

Jared Dame, Z by HP’s director of business development and strategy for AI, data science and edge technologies, speaks about the role HP’s workstations play in cutting-edge AI and data science.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.


Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.

The post Putting the AI in Retail: Walmart’s Grant Gelven on Prediction Analytics at Supercenter Scale appeared first on The Official NVIDIA Blog.


AI Gone Global: Why 20,000+ Developers from Emerging Markets Signed Up for GTC

Major tech conferences are typically hosted in highly industrialized countries. But the appetite for AI and data science resources spans the globe — with an estimated 3 million developers in emerging markets.

Our recent GPU Technology Conference — virtual, free to register, with 24/7 content — for the first time featured a dedicated track on AI in emerging markets. The conference attracted a record 20,000+ developers, industry leaders, policymakers and researchers in emerging markets across 95 countries.

These registrations accounted for around 10 percent of all signups for GTC. We saw a 6x jump from last spring’s GTC in registrations from Latin America, a 10x boost in registrations from the Middle East and a nearly 30x jump in registrations from African countries.

Nigeria alone accounted for more than 1,300 signups, and developers from 30 countries in Latin America and the Caribbean registered for the conference.

These attendees weren’t simply absorbing high-level content — they were leading conversations.

Dozens of startup founders from emerging markets shared their innovations. Community leaders, major tech companies and nonprofits discussed their work to build resources for developers in the Caribbean, Latin America and Africa. And hands-on labs, training and networking sessions offered opportunities for attendees to boost their skills and ask questions of AI experts.

We’re still growing our emerging markets initiatives to better connect with developers worldwide. As we do so, we’ll incorporate three key takeaways from this GTC:

  1. Remove Barriers to Access

While in-person AI conferences typically draw attendees from around the world, these opportunities aren’t equally accessible to developers from every region.

Though Africa has the world’s fastest-growing community of AI developers, visa challenges have in recent years prevented some African researchers from attending AI conferences in the U.S. and Canada. And the cost of conference registrations, flights and hotel accommodations in major tech hubs can be prohibitive for many, even at discounted rates.

By making GTC21 virtual and free to register, we were able to welcome thousands of attendees and presenters from countries including Kenya, Zimbabwe, Trinidad and Tobago, Ghana and Indonesia.

  2. Spotlight Region-Specific Challenges, Successes

Opening access is just the first step. A developer from Nigeria faces different challenges than one in Norway, so global representation in conference speakers can help provide a diversity of perspectives. Relevant content that’s localized by topic or language can help cater to the unique needs of a specific audience and market.

The Emerging Markets Pavilion at GTC, hosted by NVIDIA Inception, our acceleration platform for AI startups, featured companies developing augmented reality apps for cultural tourism in Tunisia, smart video analytics in Lebanon and data science tools in Mexico, to name a few examples.

Several panel discussions brought together public sector reps, United Nations leads, community leaders and developer advocates from NVIDIA, Google, Amazon Web Services and other companies for discussions on how to bolster AI ecosystems around the world. And a session on AI in Africa focused on ways to further AI and data science education for a community that mostly learns through non-traditional pathways.

  3. Foster Opportunities to Learn and Connect

Developer groups in emerging markets are growing rapidly, with many building skills through online courses or community forums, rather than relying on traditional degree programs. One way we’re supporting this is by sponsoring AI hackathons in Africa with Zindi, an online forum that brings together thousands of developers to solve challenges for companies and governments across the continent.

The NVIDIA Developer Program includes tens of thousands of members from emerging markets — but there are hundreds of thousands more developers in these regions poised to take advantage of AI and accelerated applications to power their work.

To learn more about GTC, watch the replay of NVIDIA CEO Jensen Huang’s keynote address. Join the NVIDIA Developer Program for access to a wide variety of tools and training to accelerate AI, HPC and advanced graphics applications.

The post AI Gone Global: Why 20,000+ Developers from Emerging Markets Signed Up for GTC appeared first on The Official NVIDIA Blog.


Around the World in AI Ways: Video Explores Machine Learning’s Global Impact

You may have used AI in your smartphone or smart speaker, but have you seen how it comes alive in an artist’s brush stroke, how it animates artificial limbs or assists astronauts in Earth’s orbit?

The latest video in the “I Am AI” series — the annual scene setter for the keynote at NVIDIA’s GTC — invites viewers on a journey through more than a dozen ways this new and powerful form of computing is expanding horizons.

Perhaps your smart speaker woke you up this morning to music from a distant radio station. Maybe you used AI in your smartphone to translate a foreign phrase in a book you’re reading.

A View of What’s to Come

These everyday use cases are becoming almost commonplace. Meanwhile, the frontiers of AI are extending to advance more critical needs.

In healthcare, the Bionic Vision Lab at UC Santa Barbara uses deep learning and virtual prototyping on NVIDIA GPUs to develop models of artificial eyes, letting researchers explore a design’s potential and limits by viewing it through a virtual-reality headset.

At Canada’s University of Waterloo, researchers are using AI to develop autonomous controls for exoskeleton legs that help users walk, climb stairs and avoid obstacles. Wearable cameras filter video through AI models trained on NVIDIA GPUs to recognize surrounding features such as stairs and doorways and then determine the best movements to take.

“Similar to autonomous cars that drive themselves, we’re designing autonomous exoskeletons that walk for themselves,” Brokoslaw Laschowski, a lead researcher on the ExoNet project, said in a recent blog.
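The control loop described above — classify each camera frame into an environment class, then map that class to a walking mode — can be sketched roughly as follows. This is a minimal illustration, not the ExoNet implementation: the class names and mode mapping are hypothetical, and the classifier is a stand-in stub where the real system runs a deep convolutional network trained on NVIDIA GPUs.

```python
# Hypothetical environment classes and the exoskeleton mode each one triggers.
CONTROL_MODES = {
    "level-ground": "walk",
    "stairs-up": "climb",
    "stairs-down": "descend",
    "doorway": "walk-slow",
}

def classify_frame(scores):
    """Stand-in for the CNN: return the class with the highest score.

    `scores` is a dict of per-class confidences here; the real model
    would take raw camera pixels and return logits.
    """
    return max(scores, key=scores.get)

def select_mode(scores):
    """Run one step of the perception-to-control pipeline."""
    env = classify_frame(scores)
    return env, CONTROL_MODES[env]

if __name__ == "__main__":
    frame_scores = {
        "level-ground": 0.10,
        "stairs-up": 0.70,
        "stairs-down": 0.15,
        "doorway": 0.05,
    }
    env, mode = select_mode(frame_scores)
    print(env, mode)  # stairs-up climb
```

In the real system this loop would run continuously on live video, with the recognized environment feeding the exoskeleton’s motion controller rather than a simple lookup table.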

Watch New Worlds Come to Life

In “I Am AI,” we meet Sofia Crespo, who calls herself a generative artist. She blends and morphs images of jellyfish, corals and insects in videos that celebrate the diversity of life, using generative adversarial networks (GANs) and neural network models like GPT-2.

A fanciful creature created by artist Sofia Crespo using GANs.

“Can we use these technologies to dream up new biodiversities that don’t exist? What would these creatures look like?” she asks in a separate video describing her work.

See How AI Guards Ocean Life

“I Am AI” travels to Hawaii, Morocco, the Seychelles and the U.K., where machine learning is on the job protecting marine life from very real threats.

In Africa, the ATLAN Space project uses a fleet of autonomous drones with AI-powered computer vision to detect illegal fishing and ships dumping oil into the sea.

On the other side of the planet, the Maui dolphin is on the brink of extinction, with only 63 adults in the latest count. A nonprofit called MAUI63 uses AI in drones to identify individuals by their fin markings, tracking their movements so policy makers can take steps such as creating marine sanctuaries to protect them.

Taking the Planet’s Temperature

AI is also at work developing the big picture in planet ecology.

The video spotlights the Plymouth Marine Laboratory in the U.K., where researchers use an NVIDIA DGX system to analyze data gathered on the state of our oceans. Their work contributes to the U.N. Sustainable Development Goals and other efforts to monitor the health of the seas.

A team of Stanford researchers is using AI to track wildfire risks. The video provides a snapshot of their work opening doors to deeper understandings of how ecosystems are affected by changes in water availability and climate.

Beam Me Up, NASA

The sky’s the limit with the Spaceborne Computer-2, a supercharged system made by Hewlett Packard Enterprise now installed in the International Space Station. It packs NVIDIA GPUs that astronauts use to monitor their health in real time and track objects in space and on Earth like a cosmic traffic cop.

Astronauts use Spaceborne Computer-2 to run AI experiments on the ISS.

One of the coolest things about Spaceborne Computer-2 is that you can suggest an experiment to run on it. HPE and NASA have extended an open invitation for proposals, so Earth-bound scientists can expand the use of AI in space.

If these examples don’t blow the roof off your image of where machine learning might go next, check out the full “I Am AI” video below, which includes several more AI projects in art, science and beyond.

The post Around the World in AI Ways: Video Explores Machine Learning’s Global Impact appeared first on The Official NVIDIA Blog.
