“Insanely Fast,” “Biggest Generational Leap,” “New High-End Gaming Champion”: Reviewers Rave for GeForce RTX 3080

Reviewers have just finished testing NVIDIA’s new flagship GPU — the GeForce RTX 3080 — and the raves are rolling in.

NVIDIA CEO Jensen Huang promised “a giant step into the future” when he revealed NVIDIA’s GeForce RTX 30 Series GPUs on Sept. 1.

The NVIDIA Ampere GPU architecture, introduced in May, has already stormed through supercomputing and hyperscale data centers.

But no one knew for sure what the new architecture would be capable of when unleashed on gaming.

Now they do:

The GeForce RTX 30 Series, NVIDIA’s second generation of RTX GPUs, delivers up to 2x the performance and up to 1.9x the power efficiency of previous-generation GPUs.

This leap will translate into incredible performance in upcoming games such as Cyberpunk 2077, Call of Duty: Black Ops Cold War and Watch Dogs: Legion, which is currently bundled with select GeForce RTX 3080 graphics cards at participating retailers.

In addition to the trio of new GPUs — the flagship GeForce RTX 3080, the GeForce RTX 3070 and the “ferocious” GeForce RTX 3090 — gamers get a slate of new tools.

They include NVIDIA Reflex — which makes competitive gamers quicker; NVIDIA Omniverse Machinima — for those using real-time computer graphics engines to create movies; and NVIDIA Broadcast — which harnesses AI to build virtual broadcast studios for streamers.

And new 2nd Gen Ray Tracing Cores and 3rd Gen Tensor Cores make ray-traced and DLSS-accelerated experiences even faster.

The GeForce RTX 3080 will be available from NVIDIA and our partners starting Sept. 17.

The post “Insanely Fast,” “Biggest Generational Leap,” “New High-End Gaming Champion”: Reviewers Rave for GeForce RTX 3080 appeared first on The Official NVIDIA Blog.

More Space, Less Jam: Transportation Agency Uses NVIDIA DRIVE for Federal Highway Pilot

It could be just a fender bender or an unforeseen rain shower, but a few seconds of disruption can translate to extra minutes or even hours of mind-numbing highway traffic.

But how much of this congestion could be avoided with AI at the wheel?

That’s what the Contra Costa Transportation Authority is working to determine in one of three federally funded automated driving system pilots in the next few years. Using vehicles retrofitted with the NVIDIA DRIVE AGX Pegasus platform, the agency will estimate just how much intelligent transportation can improve the efficiency of everyday commutes.

“As the population grows, there are more demands on roadways and continuing to widen them is just not sustainable,” said Randy Iwasaki, executive director of the CCTA. “We need to find better ways to move people, and autonomous vehicle technology is one way to do that.”

The CCTA was one of eight awardees – and the only local agency – of the Automated Driving System Demonstration Grants Program from the U.S. Department of Transportation, which aims to test the safe integration of self-driving cars into U.S. roads.

The Bay Area agency is using the funds for the highway pilot, as well as two other projects to develop robotaxis equipped with self-docking wheelchair technology and test autonomous shuttles for a local retirement community.

A More Intelligent Interstate

From the 101 to the 405, California is known for its constantly congested highways. In Contra Costa, Interstate 680 is one of those high-traffic corridors, funneling many of the area’s 120,000 daily commuters. This pilot will explore how the Highway Capacity Manual – which sets assumptions for modeling freeway capacity – can be updated to incorporate future automated vehicle technology.

Iwasaki estimates that half of California’s congestion is recurrent, meaning demand for roadways is higher than supply. The other half is non-recurrent and can be attributed to things like weather events, special events — such as concerts or parades — and accidents. By eliminating human driver error, which the National Highway Traffic Safety Administration estimates causes 94 percent of traffic accidents, the system becomes more efficient and reliable.

Autonomous vehicles don’t get distracted or drowsy, which are two of the biggest causes of human error while driving. They also use redundant and diverse sensors as well as high-definition maps to detect and plan the road ahead much farther than a human driver can.

These attributes make it easier to maintain constant speeds as well as space for vehicles to merge in and out of traffic for a smoother daily commute.

Driving Confidence

The CCTA will be using a fleet of autonomous test vehicles retrofitted with sensors and NVIDIA DRIVE AGX to gauge how much this technology can improve highway capacity.

The NVIDIA DRIVE AGX Pegasus AI compute platform uses the power of two Xavier systems-on-a-chip and two NVIDIA Turing architecture GPUs to achieve an unprecedented 320 trillion operations per second of supercomputing performance. The platform is designed and built for Level 4 and Level 5 autonomous systems, including robotaxis.

Iwasaki said the agency tapped NVIDIA for this pilot because the company’s vision matches its own: to solve real problems that haven’t been solved before, using proactive safety measures every step of the way.

With half of adult drivers reporting they’re fearful of self-driving technology, this approach to autonomous vehicles is critical to gaining public acceptance, he said.

“We need to get the word out that this technology is safer and let them know who’s behind making sure it’s safer,” Iwasaki said.

AI From the Sky: Stealth Entrepreneur’s Drone Platform Sees into Mines

Christian Sanz isn’t above trying disguises to sneak into places. He once put on a hard hat, vest and steel-toed boots to get onto the construction site of the San Francisco 49ers football stadium to explore applications for his drone startup.

That bold move scored his first deal.

For the entrepreneur who popularized drones in hackathons in 2012 as founder of the Drone Games matches, starting Skycatch in 2013 was a logical next step.

“We decided to look for more industrial uses, so I went and bought construction gear and was able to blend in, and in many cases people didn’t know I wasn’t working for them as I was collecting data,” Sanz said.

Skycatch has since grown up: In recent years the San Francisco-based company has been providing some of the world’s largest mining and construction companies its AI-enabled automated drone surveying and analytics platform. The startup, which has landed $47 million in funding, promises customers automated visibility over operations.

At the heart of the platform is the NVIDIA Jetson TX2-driven Edge1 edge computer and base station. It can create 2D maps and 3D point clouds in real time, as well as pinpoint features to within five-centimeter accuracy. It also runs AI models to perform split-second inference in the field to detect objects.

Today, Skycatch announced its new Discover1 device. The Discover1 connects to industrial machines, enabling customers to plug in a multitude of sensors that can expand the data gathering of Skycatch.

The Discover1 sports a Jetson Nano inside to facilitate the collection of data from sensors and enable computer vision and machine learning on the edge. The device has LTE and WiFi connectivity to stream data to the cloud.

Change-Tracking AI

Skycatch can capture 3D images of job sites for merging against blueprints to monitor changes.

Such monitoring for one large construction site showed that electrical conduit pipes were installed in the wrong spot. Concrete would be poured next, cementing them in place. Catching the mistake early helped avoid a much costlier revision later.

Skycatch says that customers using its services can expect to compress the timelines on their projects as well as reduce costs by catching errors before they become bigger problems.

Surveying with Speed

Japan’s Komatsu, one of the world’s leading makers of bulldozers, excavators and other industrial machines, is an early customer of Skycatch.

With Japan facing a labor shortage, the equipment maker was looking for ways to help automate its products. One bottleneck was surveying a location, which could take days, before unleashing the machines.

Skycatch automated the process with its drone platform. The result for Komatsu is that less-skilled workers can generate a 3D map of a job site within 30 minutes, enabling operators to get started sooner with the land-moving beasts.

Jetson for AI

As Skycatch generated massive amounts of data, Sanz realized the company needed more computing capability to handle it. Also, given the environments in which the company was operating, the computing had to be done on the edge while consuming minimal power.

They turned to the Jetson TX2, which provides server-level AI performance using the CUDA-enabled NVIDIA Pascal GPU in a small form factor and taps as little as 7.5 watts of power. Its high memory bandwidth and wide range of hardware interfaces in a rugged form factor are ideal for the industrial environments Skycatch operates in.

Sanz says that “indexing the physical world” is demanding because of all the unstructured data of photos and videos, which require feature extraction to “make sense of it all.”

“When the Jetson TX2 came out, we were super excited. Since 2017, we’ve rewritten our photogrammetry engine to use the CUDA language framework so that we can achieve much faster speed and processing,” Sanz said.

Remote Bulldozers

The Discover1 can collect data right from the shovel of a bulldozer. Inertial measurement unit, or IMU, sensors can be attached to the Discover1 on construction machines to track movements from the bulldozer’s point of view.

One of the largest mining companies in the world uses the Discover1 in pilot tests to help remotely steer its massive mining machines in situations too dangerous for operators.

“Now you can actually enable 3D viewing of the machine to someone who is driving it remotely, which is much more affordable,” Sanz said.

 

Skycatch is a member of NVIDIA Inception, a virtual accelerator program that helps startups in AI and data science get to market faster.

Letter From Jensen: Creating a Premier Company for the Age of AI

NVIDIA founder and CEO Jensen Huang sent the following letter to NVIDIA employees today:

Hi everyone, 

Today, we announced that we have signed a definitive agreement to purchase Arm. 

Thirty years ago, a visionary team of computer scientists in Cambridge, U.K., invented a new CPU architecture optimized for energy efficiency, along with a licensing business model that enables broad adoption. Engineers designed Arm CPUs into everything from smartphones and PCs to cloud data centers and supercomputers. An astounding 180 billion computers have been built with Arm — 22 billion last year alone. Arm has become the most popular CPU in the world.

Simon Segars, its CEO, and the people of Arm have built a great company that has shaped the computer industry and nearly every technology market in the world. 

We are joining arms with Arm to create the leading computing company for the age of AI. AI is the most powerful technology force of our time. Learning from data, AI supercomputers can write software no human can. Amazingly, AI software can perceive its environment, infer the best plan, and act intelligently. This new form of software will expand computing to every corner of the globe. Someday, trillions of computers running AI will create a new internet — the internet-of-things — thousands of times bigger than today’s internet-of-people.   

Uniting NVIDIA’s AI computing with the vast reach of Arm’s CPU, we will engage the giant AI opportunity ahead and advance computing from the cloud, smartphones, PCs, self-driving cars, robotics, 5G, and IoT. 

NVIDIA will bring our world-leading AI technology to Arm’s ecosystem while expanding NVIDIA’s developer reach from 2 million to more than 15 million software programmers. 

Our R&D scale will turbocharge Arm’s roadmap pace and accelerate data center, edge AI, and IoT opportunities.

Arm’s business model is brilliant. We will maintain its open-licensing model and customer neutrality, serving customers in any industry, across the world, and further expand Arm’s IP licensing portfolio with NVIDIA’s world-leading GPU and AI technology. 

Arm’s headquarters will remain in Cambridge and continue to be a cornerstone of the U.K. technology ecosystem. NVIDIA will retain the name and strong brand identity of Arm. Simon and his management team are excited to be joining NVIDIA.

Arm gives us the critical mass to invest in the U.K. We will build a world-class AI research center in Cambridge — the university town of Isaac Newton and Alan Turing, for whom NVIDIA’s Turing GPUs and Isaac robotics platform were named. This NVIDIA research center will be the home of a state-of-the-art AI supercomputer powered by Arm CPUs. The computing infrastructure will be a major attraction for scientists from around the world doing groundbreaking research in healthcare, life sciences, robotics, self-driving cars, and other fields. This center will serve as our European hub to collaborate with universities, industrial partners, and startups. It will also be the NVIDIA Deep Learning Institute for Europe, where we teach the methods of applying this marvelous AI technology.  

The foundation built by Arm and NVIDIA employees has provided this fantastic opportunity to create the leading computing company for the age of AI. The possibilities of our combined companies are beyond exciting.   

I can’t wait. 

Jensen

NVIDIA and Arm to Create World-Class AI Research Center in Cambridge

Artificial intelligence is the most powerful technology force of our time. 

It is the automation of automation, where software writes software. While AI began in the data center, it is moving quickly to the edge — to stores, warehouses, hospitals, streets, and airports, where smart sensors connected to AI computers can speed checkouts, direct forklifts, orchestrate traffic, and save power. In time, there will be trillions of these small autonomous computers powered by AI, connected by massively powerful cloud data centers in every corner of the world.

But in many ways, the field is just getting started. That’s why we are excited to be creating a world-class AI laboratory in Cambridge, at the Arm headquarters: a Hadron collider or Hubble telescope, if you like, for artificial intelligence.  

NVIDIA, together with Arm, is uniquely positioned to launch this effort. NVIDIA is the leader in AI computing, while Arm is present across a vast ecosystem of edge devices, with more than 180 billion units shipped. With this newly announced combination, we are creating the leading computing company for the age of AI. 

Arm is an incredible company and it employs some of the greatest engineering minds in the world. But we believe we can make Arm even more incredible and take it to even higher levels. We want to propel it — and the U.K. — to global AI leadership.

We will create an open center of excellence in the area once home to giants like Isaac Newton and Alan Turing, for whom key NVIDIA technologies are named. Here, leading scientists, engineers and researchers from the U.K. and around the world will come to develop their ideas, collaborate and conduct their groundbreaking work in areas like healthcare, life sciences, self-driving cars and other fields. We want the U.K. to attract the best minds and talent from around the world.

The center in Cambridge will include: 

  • An Arm/NVIDIA-based supercomputer. Expected to be one of the most powerful AI supercomputers in the world, this system will combine state-of-the-art Arm CPUs, NVIDIA’s most advanced GPU technology, and NVIDIA Mellanox DPUs, along with high-performance computing and AI software from NVIDIA and our many partners. For reference, the world’s fastest supercomputer, Fugaku in Japan, is Arm-based, and NVIDIA’s own supercomputer, Selene, is the seventh most powerful system in the world.
  • Research Fellowships and Partnerships. In this center, NVIDIA will expand research partnerships within the U.K., with academia and industry to conduct research covering leading-edge work in healthcare, autonomous vehicles, robotics, data science and more. NVIDIA already has successful research partnerships with King’s College and Oxford. 
  • AI Training. NVIDIA’s education wing, the Deep Learning Institute, has trained more than 250,000 students on both fundamental and applied AI. NVIDIA will create an institute in Cambridge, and make our curriculum available throughout the U.K. This will provide both young people and mid-career workers with new AI skills, creating job opportunities and preparing the next generation of U.K. developers for AI leadership. 
  • Startup Accelerator. Much of the leading-edge work in AI is done by startups. NVIDIA Inception, a startup accelerator program, has more than 6,000 members — with more than 400 based in the U.K. NVIDIA will further its investment in this area by providing U.K. startups with access to the Arm supercomputer, connections to researchers from NVIDIA and partners, technical training and marketing promotion to help them grow. 
  • Industry Collaboration. The NVIDIA AI research facility will be an open hub for industry collaboration, providing a uniquely powerful center of excellence in Britain. NVIDIA’s industry partnerships include GSK, Oxford Nanopore and other leaders in their fields. From helping to fight COVID-19 to finding new energy sources, NVIDIA is already working with industry across the U.K. today — but we can and will do more. 

We are ambitious. We can’t wait to build on the foundations created by the talented minds of NVIDIA and Arm to make Cambridge the next great AI center for the world. 

Perfect Pairing: NVIDIA’s David Luebke on the Intersection of AI and Graphics

NVIDIA Research comprises more than 200 scientists around the world driving innovation across a range of industries. One of its central figures is David Luebke, who founded the team in 2006 and is now the company’s vice president of graphics research.

Luebke spoke with AI Podcast host Noah Kravitz about what he’s working on. He’s especially focused on the interaction between AI and graphics. Rather than viewing the two as conflicting endeavors, Luebke argues that AI and graphics go together “like peanut butter and jelly.”

NVIDIA Research proved that with StyleGAN2, the second iteration of the generative adversarial network StyleGAN. Trained on high-resolution images, StyleGAN2 takes numerical input and produces realistic portraits.

A film-quality image can take up to weeks to render for a single frame, yet even the first version of StyleGAN takes only 24 milliseconds to produce a comparable image.

Luebke envisions the future of GANs as an even larger collaboration between AI and graphics. He predicts that GANs such as those used in StyleGAN will learn to produce the key elements of graphics: shapes, materials, illumination and even animation.

Key Points From This Episode:

  • AI is especially useful in graphics for replacing or augmenting components of the traditional computer graphics pipeline, from content creation to mesh generation to realistic character animation.
  • Luebke researches a range of topics, one of which is virtual and augmented reality. It was, in fact, what inspired him to pursue graphics research — learning about VR led him to switch majors from chemical engineering.
  • Displays are a major stumbling block in virtual and augmented reality, he says. He emphasizes that VR requires high frame rates, low latency and very high pixel density.

Tweetables:

“Artificial intelligence, deep neural networks — that is the future of computer graphics” — David Luebke [2:34]

“[AI], like a renaissance artist, puzzled out the rules of perspective and rotation” — David Luebke [16:08]

You Might Also Like

NVIDIA Research’s Aaron Lefohn on What’s Next at Intersection of AI and Computer Graphics

Real-time graphics technology, namely, GPUs, sparked the modern AI boom. Now modern AI, driven by GPUs, is remaking graphics. This episode’s guest is Aaron Lefohn, senior director of real-time rendering research at NVIDIA. Aaron’s international team of scientists played a key role in founding the field of AI computer graphics.

GauGAN Rocket Man: Conceptual Artist Uses AI Tools for Sci-Fi Modeling

Ever wondered what it takes to produce the complex imagery in films like Star Wars or Transformers? Here to explain the magic is Colie Wertz, a conceptual artist and modeler who works on film, television and video games. Wertz discusses his specialty of hard modeling, in which he produces digital models of objects with hard surfaces like vehicles, robots and computers.

Cycle of DOOM Now Complete: Researchers Use AI to Generate New Levels for Seminal Video Game

DOOM, of course, is foundational to 3D gaming. 3D gaming, of course, is foundational to GPUs. GPUs, of course, are foundational to deep learning, which is, now, thanks to a team of Italian researchers, two of whom we’re bringing to you with this podcast, being used to make new levels for … DOOM.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.

Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.

Vision of AI: Startup Helps Diabetic Retinopathy Patients Retain Their Sight

Every year, 60,000 people go blind from diabetic retinopathy, a condition caused by damage to the blood vessels in the eye, for which high blood sugar is a risk factor.

Digital Diagnostics, a software-defined AI medical imaging company formerly known as IDx, is working to help those people retain their vision, using NVIDIA technology to do so.

The startup was founded a decade ago by Michael Abramoff, a retinal surgeon with a Ph.D. in computer science. While training as a surgeon, Abramoff often saw patients with diabetic retinopathy, or DR, that had progressed too far to be treated effectively, leading to permanent vision loss.

With the mission of increasing access to and quality of DR diagnosis, as well as decreasing its cost, Abramoff and his team have created an AI-based solution.

The company’s product, IDx-DR, takes images of the back of the eye, analyzes them and provides a diagnosis within minutes — referring the patient to a specialist for treatment if more than a mild case is detected.

The system is optimized for NVIDIA GPUs, and its deep learning pipeline was built using the NVIDIA cuDNN library for high-performance GPU-accelerated operations. Training occurs on Amazon EC2 P3 instances featuring NVIDIA V100 Tensor Core GPUs and is based on images of DR cases confirmed by retinal specialists.
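Deep learning frameworks such as PyTorch use cuDNN under the hood when a CUDA GPU is present. The sketch below is purely illustrative (a tiny stand-in classifier, not Digital Diagnostics’ actual pipeline, which is not public):

```python
import torch
import torch.nn as nn

# Use the cuDNN-backed GPU path when a CUDA device is present,
# otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Hypothetical stand-in for a retinal-image classifier with two
# output classes (e.g., "referable disease" vs. "no referable disease").
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),
).to(device)

# A fake batch of four 64x64 RGB images standing in for fundus photos.
images = torch.randn(4, 3, 64, 64, device=device)
logits = model(images)
print(logits.shape)
```

The same script runs unchanged on a V100 cloud instance or a laptop CPU; only the `device` string differs.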

IDx-DR enables diagnostic tests to be completed in easily accessible settings like drugstores or primary care providers’ offices, rather than only at ophthalmology clinics, said John Bertrand, CEO at Digital Diagnostics.

“Moving care to locations the patient is already visiting improves access and avoids extra visits that overwhelm specialty physician schedules,” he said. “Patients avoid an extra copay and don’t have to take time off work for a second appointment.”

Autonomous, Not Just Assistive

“There are lots of good AI products specifically created to assist physicians and increase the detection rate of finding an abnormality,” said Bertrand. “But to allow physicians to practice to the top of their license, and reduce the costs of these low complexity tests, you need to use autonomous AI,” he said.

IDx-DR is the first FDA-cleared autonomous AI system — meaning that while the FDA has cleared many AI-based applications, IDx-DR was the first that doesn’t require physician oversight.

Clinical trials using IDx-DR consisted of machine operators who didn’t have prior experience taking retinal photographs, simulating the way the product would be used in the real world, according to Bertrand.

“Anyone with a high school diploma can perform the exam,” he said.

The platform has been deployed in more than 20 sites across the U.S., including Blessing Health System, in Illinois, where family medicine doctor Tim Beth said, “Digital Diagnostics has done well in developing an algorithm that can detect the possibility of early disease. We would be missing patients if we didn’t use IDx-DR.”

In addition to DR, Digital Diagnostics has created prototypes for products that diagnose glaucoma and age-related macular degeneration. The company is also looking to provide solutions for healthcare issues beyond eye-related conditions, including those related to the skin, nose and throat.

Stay up to date with the latest healthcare news from NVIDIA.

Digital Diagnostics is a Premier member of NVIDIA Inception, a program that supports AI startups with go-to-market support, expertise and technology.

Scaling New Heights: Surge in Remote Work Fuels NVIDIA Cloud Service Provider Program

For many of the tens of millions of employees working from home amid the pandemic, their change of scenery is likely to stick.

Fifty-two percent of global IT and business leaders surveyed by IDC in June said that their work-at-home employment models will likely be permanently changed.*

To cope, enterprises are turning to the cloud as it provides the simplified, flexible management of IT resources that are required to support remote workers, wherever they may be. With NVIDIA GPUs and virtualization software, cloud infrastructure can support all kinds of compute and visualization workloads — AI, data science, computer-aided design, rendering, content creation and more — without compromising performance.

This surge of growth in remote work has led the NVIDIA Cloud Service Provider program, a pillar of the NVIDIA Partner Network, to grow by over 60 percent in the first half of the year alone.

New program members include Cloudalize, CoreWeave, Dizzion, E2E, IronOrbit and Paperspace.

The program provides partners like these with resources and tools to grow their business and ensure customer success. Recently, 22 new partners have joined in Europe and more than 10 in North America.

Together, Europe and North America account for over 80 percent of new CSP partner adoption, bringing the program to more than 100 partners worldwide.

“As the world continues to adapt to working remote, we see unprecedented demand for high-performance managed desktop as a service across all industries,” said Robert Green, president and CTO of Dizzion. Jviation, an aviation engineering firm, relies on Dizzion to optimize its end-user experience, especially for high-end graphics, video collaboration and other media-intense workloads.

“With innovative NVIDIA GPUs, Dizzion cloud desktops enable any global team member to work from home — or anywhere — and keep things business as usual,” said Green.

Daniel Kobran, chief operating officer at Paperspace, said, “GPUs and the new era of accelerated computing are powering applications previously thought impossible. The Paperspace cloud platform provides on-demand GPU processing power behind a unified hub to facilitate collaboration across large, distributed teams for customers such as Medivis, which is using Paperspace to build AI-assisted, real-time analysis to provide surgeons key insights during surgery.”

Cloud service providers in the NPN program have expertise in designing, developing, delivering and managing cloud-based workloads, applications and services. Customers choosing providers that offer NVIDIA GPU-accelerated infrastructure can gain additional benefits, such as:

  • Broad NVIDIA GPU options from the cloud, such as Quadro RTX 6000 and 8000 and NVIDIA T4 and V100 Tensor Core GPUs.
  • Management software to easily unify enterprise private and multi-cloud infrastructure.
  • Services and offerings that ease adoption and migration to the cloud, including deep vertical and workload expertise. For example, desktop-as-a-service options can be configured with NVIDIA Quadro Virtual Workstation to support the graphics and compute workloads required by creative and technical professionals. Many offerings can be tailored to each enterprise’s unique needs.
  • Compliance with local data sovereignty laws.

More information on program benefits and how to sign up as a partner is available here.

* Source: IDC, “From Rigid to Resilient Organizations: Enabling the Future of Work”, Doc # US45799820, July 2020

The Great AI Bake-Off: Recommendation Systems on the Rise

If you want to create a world-class recommendation system, follow this recipe from a global team of experts: Blend a big helping of GPU-accelerated AI with a dash of old-fashioned cleverness.

The proof was in the pudding for a team from NVIDIA that won this year’s ACM RecSys Challenge. The competition is a highlight of an annual gathering of more than 500 experts who present the latest research in recommendation systems, the engines that deliver personalized suggestions for everything from restaurants to real estate.

At the Sept. 22-26 online event, the team will describe its dish, already available as open source code. They’re also sharing lessons learned with colleagues who build NVIDIA products like RAPIDS and Merlin, so customers can enjoy the fruits of their labor.

In an effort to bring more people to the table, NVIDIA will donate the contest’s $15,000 cash prize to Black in AI, a nonprofit dedicated to mentoring the next generation of Black specialists in machine learning.

GPU Server Doles Out Recommendations

This year’s contest, sponsored by Twitter, asked researchers to comb through a dataset of 146 million tweets to predict which ones a user would like, reply to or retweet. The NVIDIA team’s work led a field of 34 competitors, thanks in part to a system with four NVIDIA V100 Tensor Core GPUs that cranked through hundreds of thousands of options.

Their numbers were eye-popping. GPU-accelerated software engineered features in less than a minute that had required nearly an hour on a CPU, a 500x speedup. The four-GPU system trained the team’s AI models 120x faster than a CPU. And GPUs gave the group’s end-to-end solution a 280x speedup compared to an initial implementation on a CPU.

“I’m still blown away when we pull off something like a 500x speedup in feature engineering,” said Even Oldridge, who holds a Ph.D. in machine learning and who in the past year quadrupled the size of the group that designs NVIDIA Merlin, a framework for recommendation systems.

GPUs and frameworks such as UCX provided up to 500x speedups compared to CPUs.

Competition Sparks Ideas for Software Upgrades

The competition spawned work on data transformations that could enhance future versions of NVTabular, a Merlin library that eases the engineering of new features from the spreadsheet-like tables that are the basis of recommendation systems.

“We won in part because we could prototype fast,” said Benedikt Schifferer, one of three specialists in recommendation systems on the team that won the prize.

Schifferer also credits two existing tools. Dask, an open-source library for parallel computing, let the team split memory-hungry jobs across multiple GPUs. And cuDF, part of NVIDIA’s RAPIDS framework for accelerated data science, let the group run the equivalent of the popular Pandas library on GPUs.

“Searching for features in the data using Pandas on CPUs took hours for each new feature,” said Chris Deotte, one of a handful of data scientists on the team who have earned the title Kaggle grandmaster for their prowess in competitions.

“When we converted our code to RAPIDS, we could explore features in minutes. It was life changing: we could search hundreds of features, and that eventually led to discoveries that won the competition,” said Deotte, one of only two grandmasters who hold that title in all four Kaggle categories.
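The appeal Deotte describes is that cuDF mirrors the pandas API, so feature-exploration code written against pandas can often move to the GPU just by swapping the import. A minimal sketch of the kind of grouped-aggregate feature involved (the column names and feature here are invented for illustration; on a GPU system the same code runs with `import cudf as pd`):

```python
import pandas as pd  # on a GPU system: import cudf as pd

def add_engagement_rate(df):
    """Add a per-user mean-engagement feature via a grouped aggregate."""
    df["user_mean_like"] = df.groupby("user_id")["liked"].transform("mean")
    return df

# Toy stand-in for a table of tweet engagements
tweets = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 2],
    "liked":   [1, 0, 1, 1, 0],
})
print(add_engagement_rate(tweets)["user_mean_like"].tolist())
```

Exploring hundreds of candidate features, as the team did, amounts to running many such transformations; on CPUs each one can take hours over 146 million rows, which is where the GPU speedup pays off.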

More enhancements for recommendation systems are on the way. For example, customers can look forward to improvements in text handling on GPUs, a key data type for recommendation systems.

An Aha! Moment Fuels the Race

Deotte credits a colleague in Brazil, Gilberto Titericz, with an insight that drove the team forward.

“He tracked changes in Twitter followers over time, which turned out to be a feature that really fueled our accuracy — it was incredibly effective,” Deotte said.

“I saw patterns changing over time, so I made several plots of them,” said Titericz, who ranked as the top Kaggle grandmaster worldwide for a couple of years.

“When I saw a really great result, I thought I made a mistake, but I took a chance, submitted it and to my surprise it scored high on the leaderboard, so my intuition was right,” he added.
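One simple way to realize the follower-change idea Titericz describes is to reduce each user's follower-count history to a trend feature. This is a hedged sketch, not the team's actual code; the data layout and field names are assumptions.

```python
from collections import defaultdict

def follower_delta_features(snapshots):
    """snapshots: iterable of (user_id, timestamp, follower_count), any order.
    Returns {user_id: last_count - first_count}, a simple follower-trend
    feature a model can consume."""
    by_user = defaultdict(list)
    for user_id, ts, count in snapshots:
        by_user[user_id].append((ts, count))
    features = {}
    for user_id, points in by_user.items():
        points.sort()  # order each user's history by timestamp
        features[user_id] = points[-1][1] - points[0][1]
    return features

snaps = [("a", 1, 100), ("a", 3, 180), ("b", 1, 50), ("b", 3, 45)]
print(follower_delta_features(snaps))  # {'a': 80, 'b': -5}
```

A rising or falling follower count captures momentum that a single static snapshot misses, which is why such a feature can move a leaderboard score.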

In the end, the team used a mix of complementary AI models designed by Titericz, Schifferer and a colleague in Japan, Kazuki Onodera, all based on XGBoost, an algorithm well suited for recommendation systems.
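Blending complementary models like this is typically done by combining their per-item scores; a weighted average is the simplest form. The weights and scores below are invented for illustration, and the team's actual blend may have been more sophisticated.

```python
def blend(score_lists, weights):
    """Weighted average of per-item scores from several models."""
    assert len(score_lists) == len(weights)
    total_w = sum(weights)
    n = len(score_lists[0])
    return [
        sum(w * scores[i] for scores, w in zip(score_lists, weights)) / total_w
        for i in range(n)
    ]

model_a = [0.9, 0.2, 0.6]  # e.g. each model's predicted like probability
model_b = [0.7, 0.4, 0.8]
print(blend([model_a, model_b], weights=[2, 1]))
```

Blends work best when the models err in different ways, which is why teams favor "complementary" models over several copies of the strongest one.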

Several members of the team are part of an elite group of Kaggle grandmasters that NVIDIA founder and CEO Jensen Huang dubbed KGMON, a playful takeoff on Pokemon. The group has won dozens of competitions in the last four years.

Recommenders Getting Traction in B2C

For many members, including team leader Jean-Francois Puget in southern France, it’s more than a 9-to-5 job.

“We spend nights and weekends in competitions, too, trying to be the best in the world,” said Puget, who earned his Ph.D. in machine learning two decades before deep learning took off commercially.

Now the technology is spreading fast.

This year’s ACM RecSys includes three dozen papers and talks from companies like Amazon and Netflix that helped establish the field with recommenders that help people find books and movies. Now consumer companies of all stripes are getting into the act, including IKEA and Etsy, both of which are presenting at this year’s event.

“For the last three or four years, it’s more focused on delivering a personalized experience, really understanding what users want,” said Schifferer. It’s a cycle where “customers’ choices influence the training data, so some companies retrain their AI models every four hours, and some say they continuously train,” he added.

That’s why the team works hard to create frameworks like Merlin to make recommendation systems run easily and fast at scale on GPUs. Other members of NVIDIA’s winning team were Christof Henkel (Germany), Jiwei Liu and Bojan Tunguz (U.S.), Gabriel De Souza Pereira Moreira (Brazil) and Ahmet Erdem (Netherlands).

To get tips on how to design recommendation systems from the winning team, tune in to an online tutorial here on Friday, Sept. 25.

The post The Great AI Bake-Off: Recommendation Systems on the Rise appeared first on The Official NVIDIA Blog.


Office Ready? Jetson-Driven ‘Double Robot’ Supports Remote Working

Apple’s iPad 2 launch in 2011 ignited a touch-tablet craze, but when David Cann and Marc DeVidts got their hands on one, they saw something different: They rigged it to a remote-controlled golf caddy and posted a video of it in action on YouTube.

Next came phone calls from those interested in buying such a telepresence robot.

Hacks like this were second nature for the friends who met in 2002 while working on the set of the BattleBots TV series, featuring team-built robots battling before live audiences.

That’s how Double Robotics began in 2012. The startup went through Y Combinator’s accelerator and has since sold more than 12,000 units. That cash flow has allowed the small team, with just $1.8 million in seed funding, to carry on without raising more capital, a rarity in hardware.

Much has changed since they began. Double Robotics, based in Burlingame, Calif., today launched its third-generation model, the Double 3, sporting an NVIDIA Jetson TX2 for AI workloads.

“We did a bunch of custom CUDA code to be able to process all of the depth data in real time, so it’s much faster than before, and it’s highly tailored to the Jetson TX2 now,” said Cann.

Remote Worker Presence

The Double helped engineers inspect Selene while it was under construction.

The Double device, as it’s known, was designed for remote workers to visit offices in the form of the robot so they could see their co-workers in meetings. Video-over-internet call connections allow people to see and hear their remote colleague on the device’s tablet screen.

The Double was a popular ticket at tech companies on the East and West Coasts in the five years prior to the pandemic, and interest remains strong but in different use cases, according to the company. It has also proven useful in rural communities across the country, where people travel long distances to get anywhere, the company said.

NVIDIA purchased a telepresence robot from Double Robotics so that non-essential designers sheltering at home could maintain daily contact with work on Selene, the world’s seventh-fastest computer.

Some customers say it breaks down communication barriers for remote workers, with the robot’s physical presence enabling richer interaction than video conferencing platforms.

Also, COVID-19 has spurred interest in contact-free work using the Double. Pharmaceutical companies have contacted Double Robotics asking how the robot might aid in international development efforts, according to Cann. The biggest use case amid the pandemic is using Double robots in place of international business travel, he said. Instead of flying in to visit a company office, the destination office could offer a Double to would-be travelers.


Double 3 Jetson Advances

Now shipping, the Double 3 features wide-angle and zoom cameras and can support night vision. It also uses two stereo vision sensors for depth perception, five ultrasonic range finders, two wheel encoders and an inertial measurement unit.

Double Robotics will sell the head of the new Double 3 — which includes the Jetson TX2 — to existing customers seeking to upgrade their robots’ brains for access to increasing levels of autonomy.

To enable the autonomous capabilities, Double Robotics relied on the NVIDIA Jetson TX2 to process all of the camera and sensor data in real time, using its CUDA-enabled GPU and accelerated multimedia and image processors.

The company is working on autonomous features for improved self-navigation and obstacle avoidance, as well as other capabilities, such as improved auto docking for recharging and autopilot all the way into offices.

Right now, the Double offers automated assisted driving that helps drivers avoid hitting walls. The company next aims for full office autonomy, including ways to help the robot get through closed doors.

“One of the reasons we chose the NVIDIA Jetson TX2 is that it comes with the JetPack SDK, which makes it easy to get started, and there’s a lot that’s already done for you — it’s certainly a huge help to us,” said Cann.


The post Office Ready? Jetson-Driven ‘Double Robot’ Supports Remote Working appeared first on The Official NVIDIA Blog.
