A Breakthrough Preview: JIDU Auto Debuts Intelligent Robo-01 Concept Vehicle, Powered by NVIDIA DRIVE Orin

JIDU Auto sees a brilliant future ahead for intelligent electric vehicles.

The EV startup, backed by tech titan Baidu, took the wraps off the Robo-01 concept vehicle last week during its virtual ROBODAY event. The robot-inspired, software-defined vehicle features cutting-edge AI capabilities powered by the high-performance NVIDIA DRIVE Orin compute platform.

The sleek compact SUV provides a glimpse of JIDU’s upcoming lineup. It’s capable of Level 4 autonomous driving, operating safely at highway speeds and on busy urban roads, as well as performing driverless valet parking.

The Robo-01 also showcases myriad design innovations, including a retractable yoke steering wheel that folds under the dashboard in autonomous driving mode and lidar sensors that extend from and retract into the hood. It also enables human-like interaction between passengers and the vehicle’s in-cabin AI through perception and voice recognition.

JIDU is slated to launch a limited production version of the robocar later this year.

Continuous Innovation

A defining feature of the Robo-01 concept is its ability to improve by adding new intelligent capabilities throughout the life of the vehicle.

These updates are delivered over the air, which requires a software-defined vehicle architecture built on high-performance AI compute. The Robo-01 has two NVIDIA DRIVE Orin systems-on-a-chip (SoCs) at the core of its centralized computer system, which provide ample compute for autonomous driving and AI features, with headroom to add new capabilities.

DRIVE Orin is a highly advanced autonomous vehicle processor. This supercomputer on a chip is capable of delivering up to 254 trillion operations per second (TOPS) to handle the large number of applications and deep neural networks that run simultaneously in autonomous vehicles and robots, while meeting systematic safety standards such as ISO 26262 ASIL-D.

The two DRIVE Orin SoCs at the center of JIDU vehicles will deliver more than 500 TOPS of performance to achieve the redundancy and diversity necessary for autonomous operation and in-cabin AI features.

Even More in Store

JIDU will begin taking orders in 2023 for the production version of the Robo-01, with deliveries scheduled for 2024.

The automaker plans to unveil the design of its second production model at this year’s Guangzhou Auto Show in November.

Jam-packed with intelligent features and room to add even more, the Robo-01 shows the incredible possibilities that future electric vehicles can achieve with a centralized, software-defined AI architecture.

The Data Center’s Traffic Cop: AI Clears Digital Gridlock

Gal Dalal wants to ease the commute for those who work from home — or the office.

The senior research scientist at NVIDIA, who is part of a 10-person lab in Israel, is using AI to reduce congestion on computer networks.

For laptop jockeys, a spinning circle of death — or worse, a frozen cursor — is as bad as a sea of red lights on the highway. Like rush hour, it’s caused by a flood of travelers angling to get somewhere fast, crowding and sometimes colliding on the way.

AI at the Intersection

Networks use congestion control to manage digital traffic. It’s basically a set of rules embedded into network adapters and switches, but as the number of users on a network grows, their conflicting demands can become too complex to anticipate.

AI promises to be a better traffic cop because it can see and respond to patterns as they develop. That’s why Dalal is among many researchers around the world looking for ways to make networks smarter with reinforcement learning, a type of AI that rewards models when they find good solutions.

But until now, no one’s come up with a practical approach for several reasons.

Racing the Clock

Networks need to be both fast and fair so no request gets left behind. That’s a tough balancing act when no one driver on the digital road can see the entire, ever-changing map of other drivers and their intended destinations.

And it’s a race against the clock. To be effective, networks need to respond to situations in about a microsecond, or one-millionth of a second.

To smooth traffic, the NVIDIA team created new reinforcement learning techniques inspired by state-of-the-art computer game AI and adapted them to the networking problem.

Part of their breakthrough, described in a 2021 paper, was coming up with an algorithm and a corresponding reward function for a balanced network based only on local information available to individual network streams. The algorithm enabled the team to create, train and run an AI model on their NVIDIA DGX system.
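
The paper details the team’s actual algorithm and reward; as a rough, hypothetical sketch of the general idea it describes, the Python below has a per-flow agent adjust its sending rate using only locally observable signals, with a reward that trades delivered bandwidth against RTT inflation. The constants and the hill-climbing update are invented stand-ins for the learned policy.

```python
import random

def local_reward(delivered_gbps, rtt_us, base_rtt_us=2.0, target_util=0.95):
    """Reward delivered bandwidth, penalize RTT inflation above the uncongested baseline."""
    queuing_delay = max(rtt_us - base_rtt_us, 0.0)
    return delivered_gbps * target_util - 0.5 * queuing_delay

def adjust_rate(rate_gbps, reward, prev_reward, step=0.25):
    """Hill-climbing stand-in for the learned policy: keep moving the rate
    in whichever direction last improved the reward."""
    direction = 1.0 if reward >= prev_reward else -1.0
    return max(rate_gbps + direction * step, 0.1)

# Toy loop: one flow sharing a 100 Gbps link with random background traffic.
rate, prev_reward = 10.0, float("-inf")
for t in range(20):
    background = random.uniform(40.0, 70.0)
    delivered = min(rate, max(100.0 - background, 1.0))
    rtt = 2.0 + 0.1 * max(rate - delivered, 0.0)  # RTT inflates when the flow overshoots
    reward = local_reward(delivered, rtt)
    rate, prev_reward = adjust_rate(rate, reward, prev_reward), reward
    print(f"step {t:2d}: rate={rate:5.1f} Gbps  delivered={delivered:5.1f}  rtt={rtt:4.2f} us")
```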

A Wow Factor

Dalal recalls the meeting where a fellow NVIDIAN, Chen Tessler, showed the first chart plotting the model’s results on a simulated InfiniBand data center network.

“We were like, wow, ok, it works very nicely,” said Dalal, who wrote his Ph.D. thesis on reinforcement learning at Technion, Israel’s prestigious technical university.

“What was especially gratifying was we trained the model on just 32 network flows, and it nicely generalized what it learned to manage more than 8,000 flows with all sorts of intricate situations, so the machine was doing a much better job than preset rules,” he added.

Reinforcement learning (purple) outperformed all rule-based congestion control algorithms in NVIDIA’s tests.

In fact, the algorithm delivered at least 1.5x better throughput and 4x lower latency than the best rule-based technique.

Since the paper’s release, the work’s won praise as a real-world application that shows the potential of reinforcement learning.

Processing AI in the Network

The next big step, still a work in progress, is to design a version of the AI model that can run at microsecond speeds using the limited compute and memory resources in the network. Dalal described two paths forward.

His team is collaborating with the engineers designing NVIDIA BlueField DPUs to optimize the AI models for future hardware. BlueField DPUs aim to run an expanding set of communications jobs inside the network, offloading tasks from overburdened CPUs.

Separately, Dalal’s team is distilling the essence of its AI model into a machine learning technique called boosting trees, a series of yes/no decisions that’s nearly as smart but much simpler to run. The team aims to present its work later this year in a form that could be immediately adopted to ease network traffic.
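
As a rough illustration of that distillation idea (not NVIDIA’s code or data), the sketch below trains a small gradient-boosted tree ensemble to imitate a hypothetical teacher policy’s rate-decrease decisions from local congestion signals. The resulting model is just a cascade of yes/no threshold checks, which is what makes it cheap enough to evaluate inside the network.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Hypothetical "teacher" policy standing in for the trained RL model:
# decrease the sending rate when RTT inflation and loss outweigh delivered bandwidth.
X = rng.uniform(0.0, 1.0, size=(5000, 3))  # columns: rtt_inflation, delivered, loss_rate
teacher_action = (1.5 * X[:, 0] + X[:, 2] > X[:, 1]).astype(int)  # 1 = decrease rate

# Small tree ensemble: a series of yes/no threshold decisions that is far
# simpler to run than a neural network at microsecond timescales.
student = GradientBoostingClassifier(n_estimators=50, max_depth=3)
student.fit(X, teacher_action)
print("agreement with teacher policy:", student.score(X, teacher_action))
```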

A Timely Traffic Solution

To date, Dalal has applied reinforcement learning to everything from autonomous vehicles to data center cooling and chip design. When NVIDIA acquired Mellanox in April 2020, the NVIDIA Israel researcher started collaborating with his new colleagues in the nearby networking group.

“It made sense to apply our AI algorithms to the work of their congestion control teams, and now, two years later, the research is more mature,” he said.

It’s good timing. Recent reports of double-digit increases in Israel’s car traffic since pre-pandemic times could encourage more people to work from home, driving up network congestion.

Luckily, an AI traffic cop is on the way.

3D Environment Artist Jacinta Vu Sets the Scene ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology accelerates creative workflows. 

3D environment artist Jacinta Vu joins us In the NVIDIA Studio this week, showcasing her video-game-inspired scene Royal Library and 3D content creation workflow.

Based in Cincinnati, Vu specializes in transforming 2D concept art into 3D models and scenes, a critical contribution she made to The Dragon Prince from Wonderstorm Games.

Vu’s work exhibits a variety of colors and textures, from individual models to fully fleshed-out scenes.

Her artistic endeavors often start with hand-drawing low-poly game assets that look like beautiful paintings, which was her original intention for stylizing Royal Library.

“Around the time of Royal Library, my style was very hand-painted and I wanted to work more towards League of Legends and World of Warcraft styles,” Vu said. “My vision for this project, however, was very different. Royal Library is based on concept art and very different if you compare it.”

Fine attention to detail on individual models is the foundation of creating a stunning scene.

Vu began her creative workflow crafting 3D models in Autodesk Maya, slowly building out the larger scene. Deploying her GeForce RTX 2080 GPU unlocked the GPU-accelerated viewport, enabling Vu’s modeling and animation workflows to be faster and more interactive. This left her free to ideate and unlock creativity, all while saving valuable time.

“Being able to make those fast, precise tweaks was really nice,” Vu said. “Especially since, when you’re making a modular kit for an interior versus an exterior, there is less room to mess up because buildings are made to be perfect structurally.”

Practice makes perfect. The NVIDIA Studio YouTube channel hosts many helpful tutorials, including how to quickly model a scene render using a blocking technique in Autodesk Maya.

Vu then used ZBrush’s customizable brushes to shape and sculpt some models in finer detail.

Next, Vu deployed Marmoset Toolbag and baked her models with RTX acceleration in mere seconds, saving rendering time later in the process.

Vu then shifted gears to lighting, where her mentor encouraged her to go big, literally, asking, “Wouldn’t it be cool to do all this bounce lighting in this big, expansive building?”

Here, Vu experimented with lighting techniques that take advantage of several GPU-accelerated features. In Unreal Engine 4.26, RTX-accelerated ray tracing and NVIDIA DLSS, powered by AI and Tensor Cores, make scene refinement simpler and faster. With the release of Unreal Engine 5, Vu then tried Lumen, UE5’s fully dynamic global illumination system, which gives her the ability to light her scene in stunning detail.

Composition is a key part of the process, Vu noted: “When building a composition, you really want to look into the natural lines of architecture that lead your eye to a focal point.”

Normally Vu would apply her hand-painted texture style to the finished model, but as she continued to refine the scene, it made more and more sense to lean into realistic visuals, especially with RTX GPU hardware to support her creative ambition.

“It’s actually really weird, because I think I was stuck in the process for a while where I had lighting set up, the camera set up, the models were done except for textures,” said Vu. “For me that was hard, because I am from a hand-painted background and switching textures was nerve wracking.”

Applying realistic textures and precise lighting brings the Royal Library to life.

Vu created her textures in Adobe Photoshop and then used Substance 3D Painter to apply various colors and materials directly to her 3D models. NVIDIA RTX and NVIDIA Iray technology in the viewport enable Vu to edit in real time and use ray-traced baking for faster rendering speeds — all accelerated by her GPU.

Vu returned to Unreal Engine 5 to animate the scene using the Sequencer feature. The sparkly effect comes from a godray, amplified by particle effects, combined with atmospheric fog to fill the room.


All that was left were the final renders. Vu rendered her full-fidelity scene at lightning speed with UE5’s RTX-accelerated Path Tracer.

At last, the Royal Library is ready for visitors, friends and distinguished guests.

Vu, proud to have finally completed Royal Library, reflected on her creative journey, saying, “In the last stretch, I said, ‘I actually know how to do this.’ Once again I was in my head thinking I couldn’t do something, but it was freeing and it’s the type of thing where I learned so much for my next one. I know I can do a lot more a lot quicker, because I know how to do it and I can keep practicing, so I can get to the quality I want.”

NVIDIA Studio exists to unlock creative potential. It provides the resources, innovation and know-how to assist passionate content creators, like Vu.

3D environment artist Jacinta Vu is on ArtStation and Twitter.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the NVIDIA Studio newsletter.

Powered Up: 5G and VR Accelerate Vehicle Battery Design

The scenic route between Wantage, a small town in Oxfordshire, and Coventry in the U.K. meanders up steep hills, winds past the birthplace of Shakespeare and skirts around 19th-century English bathhouses.

A project using edge computing and the world’s first 5G-enabled VR technology is enabling two engineering teams in those locales, about 70 miles apart, to collaborate as if they were in the same room.

The project is taking place at Hyperbat, the U.K.’s largest independent electric vehicle battery manufacturer. The company’s engineers are able to work simultaneously on a 1:1-scale digital twin of an EV battery.

They can immerse themselves in virtual tasks that mimic real life thanks to renders created using NVIDIA GPUs, RTX Virtual Workstation software and NVIDIA CloudXR technology. The digital transformation reduces inefficiencies and speeds up design processes.

Working in a New Reality

The team at Hyperbat, in partnership with BT, Ericsson, the GRID Factory, Masters of Pie, Qualcomm and NVIDIA, has developed a proof of concept that uses VR to power collaborative sessions.

Using a digital twin with VR delivers greater clarity during the design process. Engineers can work together from anywhere to effectively identify and rectify errors during the vehicle battery design process, making projects more cost-effective.

“This digital twin solution at Hyperbat is the future of manufacturing,” said Marc Overton, managing director of Division X, part of BT’s Enterprise business. “It shows how a 5G private network can provide the foundation for a whole host of new technologies which can have a truly transformative effect in terms of collaboration, innovation and speeding up the manufacturing process.”

See Hyperbat’s system in action:

Masters of Pie’s collaboration engine, called Radical, delivers a real-time extended reality (XR) experience that allows design and manufacturing teams to freely interact with a life-size 3D model of an electric vehicle battery. This gives the Hyperbat team a single source of truth for each project — no need for numerous iterations.

The 5G-enabled VR headset, powered by the Qualcomm Snapdragon XR2 platform, gives the team an untethered experience that can be launched with just one click. Designed specifically to address all the challenges of extended reality, it doesn’t require a lengthy setup, nor the importing and exporting of data. Designers can put on their headsets and get straight to work.

Speed Is Key

5G’s ultra-low latency, deployed using an Ericsson radio and private 5G network at Hyperbat, provides faster speeds and more reliable connections, as well as immediate response times.

Combining 5G with the cloud and XR removes inefficiencies in design processes and speeds up production lines, improvements that could greatly benefit the wider manufacturing sector.

And using Project Aurora — NVIDIA’s CloudXR and RTX Virtual Workstation software platform for XR streaming at the edge of the 5G network — large amounts of data can be rapidly processed on remote computers before being streamed to VR headsets with ultra-low latency.

Innovation on a New Scale

AI is reshaping almost every industry. VR and augmented reality open new windows for AI in industry and create new design possibilities, with 5G making the technology more accessible.

“Hyperbat’s use case is another demonstration of how 5G and digitalization can really help boost the U.K.’s economy and industry,” said Katherine Ainley, CEO of Ericsson U.K. and Ireland. This technology “can really drive efficiency and help us innovate on a whole new scale,” she said.

Learn more about NVIDIA CloudXR.

From Code to Clinic, Smart Hospital Tech Boosts Efficiency, Sustainability in Medicine

NVIDIA is collaborating with clinical organizations across Europe to bring AI to the point of care, bolstering clinical pathways with efficiency gains and new data dimensions that can be included in medical decision-making processes.

The University Hospital Essen, in northwestern Germany, is one such organization taking machine learning from the bits to the bedside — using NVIDIA technology and AI to build smart hospitals of the future.

Jens Kleesiek and Felix Nensa, professors at the School of Medicine of the University of Duisburg-Essen, are part of a four-person team leading the research groups that established the Institute for Artificial Intelligence in Medicine (IKIM). The technology developed by IKIM is integrated with the IT infrastructure of University Hospital Essen.

IKIM hosts a data annotation lab, overseen by a team of board-certified radiologists, that accelerates the labeling of anatomic structures in medical images using MONAI, an open-source, PyTorch-based framework for building, training, labeling and deploying AI models for healthcare imaging.

MONAI was created by NVIDIA in collaboration with over a dozen leading clinical and research organizations, including King’s College London.
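
For readers unfamiliar with the framework, here is a minimal, illustrative MONAI setup: a 3D U-Net trained with a Dice loss on a random stand-in volume. It is not IKIM’s actual pipeline; the channel counts, patch size and two-class labeling are arbitrary choices for the example.

```python
import torch
from monai.networks.nets import UNet
from monai.losses import DiceLoss

# 3D U-Net that predicts a label for every voxel of a CT/MRI patch.
model = UNet(
    spatial_dims=3,
    in_channels=1,             # one imaging channel
    out_channels=2,            # background vs. an anatomic structure
    channels=(16, 32, 64, 128),
    strides=(2, 2, 2),
    num_res_units=2,
)
loss_fn = DiceLoss(to_onehot_y=True, softmax=True)

volume = torch.randn(1, 1, 96, 96, 96)            # stand-in for a CT patch
label = torch.randint(0, 2, (1, 1, 96, 96, 96))   # stand-in annotation
print(loss_fn(model(volume), label).item())
```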

IKIM researchers also use self-supervised learning to pretrain AI models that generate high-quality labels for the hospital’s CT scans, MRIs and more.

Additionally, the IKIM team has developed a smart hospital information platform, or SHIP, an AI-based central healthcare data integration platform and deployment engine. The platform is used by researchers and clinicians to conduct real-time analysis of the slew of data in university hospitals — including medical imaging, radiology reports, clinic notes and patient interviews.

SHIP can, for example, flag an abnormality on a radiology report and notify physicians via real-time push notifications, enabling quicker diagnoses and treatments for patients. The AI can also pinpoint data-driven associations between healthcare metrics like genetic traits and patient outcomes.

“We want to solve real-world problems and bring the solutions right into the clinics,” Kleesiek said. “The SHIP framework is capable of delivering deep learning algorithms that analyze data straight to the clinicians who are at the point of care.”

Plus, increased workflow efficiency — enabled by AI — means increased sustainability within hospitals.

Making Hospitals Smarter

Nensa says his hospital currently has close to 500 IT systems, including those for hospital information, laboratories and radiology. Each contains critical patient information that’s interrelated — but data from disparate systems can be difficult to connect or draw machine learning-based insights from.

SHIP connects the data from all such systems by automatically translating it into a description standard called Fast Healthcare Interoperability Resources, or FHIR, which is commonly used in medicine to exchange electronic health records. SHIP currently encompasses more than 1.2 billion FHIR resources.
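
As an illustration of what that translation produces (a sketch, not SHIP’s actual code), the snippet below maps a single lab result into a FHIR R4 Observation resource. The patient ID and the measured value are made up; the LOINC code shown is the standard code for blood hemoglobin.

```python
import json

def lab_result_to_fhir(patient_id: str, loinc_code: str, display: str,
                       value: float, unit: str) -> dict:
    """Map one lab value to a FHIR R4 Observation resource."""
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {"coding": [{
            "system": "http://loinc.org",
            "code": loinc_code,
            "display": display,
        }]},
        "subject": {"reference": f"Patient/{patient_id}"},
        "valueQuantity": {
            "value": value,
            "unit": unit,
            "system": "http://unitsofmeasure.org",
        },
    }

print(json.dumps(
    lab_result_to_fhir("example-123", "718-7", "Hemoglobin [Mass/volume] in Blood",
                       13.8, "g/dL"),
    indent=2))
```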

Once converted to FHIR, the information can be easily accessed by data scientists, researchers and clinicians for real-time AI training and analysis based on NVIDIA GPUs and DGX A100 systems. This makes it possible for labor-intensive tasks, such as liver volumetry prior to living donor liver transplantation or bone age estimation in children, to be performed fully automatically in the background, instead of requiring a half-hour of manual work by a radiologist.

“The more artificial intelligence is at work in a hospital, the more patients can enjoy human intelligence,” Nensa said. “As AI provides doctors and nurses relief from repetitive tasks like data retrieval and annotation, the medical professionals can focus on what they really want to do, which is to be there and care for their patients.”

NVIDIA DGX A100 systems power IKIM’s AI training and inference. NVIDIA Triton Inference Server enables fast and scalable concurrent serving of AI models within the clinic.

The IKIM team also uses NVIDIA FLARE, an open-source platform for federated learning, which allows data scientists to develop generalizable and robust AI models while maintaining patient privacy.

Smarter Equals Greener

In addition to reducing physician workload and increasing time for patient care, AI in hospitals boosts sustainability efforts.

As a highly specialized medical center, the University Hospital Essen must be available year-round for reliable patient treatment, with 24-hour operation times. As a result, patient-oriented, cutting-edge medicine is traditionally associated with high energy consumption.

SHIP helps hospitals increase efficiency, automating tasks and optimizing processes to reduce friction in the workflow — which saves energy. According to Kleesiek, IKIM reuses the energy emitted by GPUs in the data center, which also helps to make the University Hospital Essen greener.

“NVIDIA is providing all of the layers for us to get the most out of the technology, from software and hardware to training led by expert engineers,” Nensa said.

In April, NVIDIA experts hosted a workshop at IKIM, featuring lectures and hands-on training on GPU-accelerated deep learning, data science and AI in medicine. The workshop led IKIM to kickstart additional projects using AI for medicine — including a research contribution to MONAI.

In addition, IKIM is building SmartWard technology to provide an end-to-end AI-powered patient experience in hospitals, from service robots in waiting areas to automated discharge reports.

For the SmartWard project, the IKIM team is considering integrating the NVIDIA Clara Holoscan platform for medical device AI computing.

Subscribe to NVIDIA healthcare news and watch IKIM’s NVIDIA GTC session on demand.

Feature image courtesy of University of Duisburg-Essen.

Out of This World: ‘Mass Effect Legendary Edition’ and ‘It Takes Two’ Lead GFN Thursday Updates

Some may call this GFN Thursday legendary as Mass Effect Legendary Edition and It Takes Two join the GeForce NOW library.

Both games expand the number of Electronic Arts games streaming from our GeForce cloud servers and are part of 10 new additions this week.

Adventure Awaits In The Cloud

Relive the saga of Commander Shepard in the highly acclaimed “Mass Effect” trilogy with Mass Effect Legendary Edition (Steam and Origin). One person is all that stands between humanity and the greatest threat it’s ever faced. With each action controlling the outcome of every mission, every relationship, every battle and even the fate of the galaxy itself, you decide how the story unfolds.

Play as clashing couple Cody and May, two humans turned into dolls and trapped in a fantastical world in It Takes Two (Steam and Origin). Challenged with saving their relationship, master unique and connected abilities to help each other across an abundance of obstacles and enjoy laugh-out-loud moments. Invite a friend to join for free with Friend’s Pass and work as a team in this heartfelt and hilarious experience.

GeForce NOW gamers can experience both of these beloved games today across compatible devices. RTX 3080 members can take Mass Effect Legendary Edition to the max with 4K resolution and 60 frames per second on the PC and Mac apps. They can also take It Takes Two on the go, streaming at 120 frames per second on select mobile phones.

Plus, RTX 3080 members get the perks of ultra-low latency, dedicated RTX 3080 servers and eight-hour-long gaming sessions to support their play.

No Time Like Playtime

Recruitment, budget, strategy: you make all the decisions in Pro Cycling Manager 2022.

GFN Thursday always means more great gaming. This week comes with 10 new games available to stream on the cloud:

Finally, as you begin your quest known as “The Weekend,” we’ve got a question for you. Let us know your response on Twitter or in the comments below.

Stunning Insights from James Webb Space Telescope Are Coming, Thanks to GPU-Powered Deep Learning

NVIDIA GPUs will play a key role in interpreting data streaming in from the James Webb Space Telescope, with NASA preparing to release the first full-color images from the $10 billion scientific instrument next month.

The telescope’s iconic array of 18 interlocking hexagonal mirrors, which span a total of 21 feet 4 inches, will be able to peer far deeper into the universe, and deeper into the universe’s past, than any tool to date, unlocking discoveries for years to come.

GPU-powered deep learning will play a key role in several of the highest-profile efforts to process data from the revolutionary telescope positioned a million miles away from Earth, explains UC Santa Cruz Astronomy and Astrophysics Professor Brant Robertson.

“The JWST will really enable us to see the universe in a new way that we’ve never seen before,” said Robertson, who is playing a leading role in efforts to use AI to take advantage of the unprecedented opportunities JWST creates. “So it’s really exciting.”

High-Stakes Science

Late last year, Robertson was among the millions tensely following the runup to the launch of the telescope, developed over the course of three decades, and loaded with instruments that define the leading edge of science.

The JWST’s Christmas Day launch went better than planned, allowing the telescope to slide into a Lagrange point — a kind of gravitational eddy in space that allows an object to “park” indefinitely — and extending the telescope’s usable life to more than 10 years.

“It’s working fantastically,” Robertson reports. “All of the signs are it’s going to be a tremendous facility for science.”

AI Powering New Discoveries

Robertson — who leads the computational astrophysics group at UC Santa Cruz — is among a new generation of scientists across a growing array of disciplines using AI to quickly classify the vast quantities of data — often more than can be sifted in a human lifetime — streaming in from the latest generation of scientific instruments.

“What’s great about AI and machine learning is that you can train a model to actually make those decisions for you in a way that is less hands-on and more based on a set of metrics that you define,” Robertson said.

Simulated image of a portion of the JADES galaxy survey, part of the preparations for galaxy surveys using JWST that UCSC astronomer Brant Robertson and his team have been working on for years. (Image credit: JADES Collaboration)

Working with Ryan Hausen, a Ph.D. student in UC Santa Cruz’s computer science department, Robertson helped create a deep learning framework that classifies astronomical objects, such as galaxies, based on the raw data streaming out of telescopes on a pixel by pixel basis, which they called Morpheus.

It quickly became a key tool for classifying images from the Hubble Space Telescope. Since then the team working on Morpheus has grown considerably, to roughly a half-dozen people at UC Santa Cruz.
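
As a minimal illustration of pixel-by-pixel classification in PyTorch (not the actual Morpheus architecture, and with invented band and class counts), a tiny fully convolutional network can output a class score for every pixel of a multi-band image:

```python
import torch
import torch.nn as nn

NUM_BANDS = 4     # number of filter bands in the input image (illustrative)
NUM_CLASSES = 5   # e.g., background, spheroid, disk, irregular, point source

# Fully convolutional: the output keeps the image's height and width,
# so every pixel gets its own vector of class scores.
model = nn.Sequential(
    nn.Conv2d(NUM_BANDS, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, NUM_CLASSES, kernel_size=1),
)

image = torch.randn(1, NUM_BANDS, 256, 256)   # stand-in for calibrated telescope pixels
logits = model(image)                         # shape: (1, NUM_CLASSES, 256, 256)
per_pixel_class = logits.argmax(dim=1)        # one class label per pixel
print(per_pixel_class.shape)                  # torch.Size([1, 256, 256])
```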

Researchers are able to use NVIDIA GPUs to accelerate Morpheus across a variety of platforms — from an NVIDIA DGX Station desktop AI system to a small computing cluster equipped with several dozen NVIDIA V100 Tensor Core GPUs to sophisticated simulation runs on thousands of GPUs on the Summit supercomputer at Oak Ridge National Laboratory.

A Trio of High-Profile Projects

Now, with the first science data from the JWST due for release July 12, much more’s coming.

“We’ll be applying that same framework to all of the major extragalactic JWST surveys that will be conducted in the first year,” Robertson said.

Robertson is among a team of nearly 50 researchers who will be mapping the earliest structure of the universe through the COSMOS-Webb program, the largest general observer program selected for JWST’s first year.

Simulations by UCSC researchers showed how JWST can be used to map the distribution of galaxies in the early universe. The web-like structure in the background of this image is dark matter, and the yellow dots are galaxies that should be detected in the survey. (Image credit: Nicole Drakos)

Over the course of more than 200 hours, the COSMOS-Webb program will survey half a million galaxies with multiband, high-resolution, near-infrared imaging and an unprecedented 32,000 galaxies in mid-infrared.

“The COSMOS-Webb project is the largest contiguous area survey that will be executed with JWST for the foreseeable future,” Robertson said.

Robertson also serves on the steering committee for the JWST Advanced Deep Extragalactic Survey, or JADES, to produce infrared imaging and spectroscopy of unprecedented depth. Robertson and his team will put Morpheus to work classifying the survey’s findings.

Robertson and his team are also involved with another survey, dubbed PRIMER, to bring AI and machine learning classification capabilities to the effort.

From Studying the Stars to Studying Ourselves

All these efforts promise to help humanity survey — and understand — far more of our universe than ever before. But perhaps the most surprising application Robertson has found for Morpheus is here at home.

“We’ve actually trained Morpheus to go back into satellite data and automatically count up how much sea ice is present in the North Atlantic over time,” Robertson said, adding it could help scientists better understand and model climate change.

As a result, a tool developed to help us better understand the history of our universe may soon help us better predict the future of our own small place in it.

FEATURED IMAGE CREDIT: NASA


Festo Develops With Isaac Sim to Drive Its Industrial Automation

Dionysios Satikidis was playing FIFA 19 when he realized the simulated soccer game’s realism offered a glimpse into the future for training robots.

An expert in AI and autonomous systems at Festo, a German industrial control and automation company, he believed the worlds of gaming and robotics would intersect.

“I’ve always been passionate about technology and gaming, and for me and my close colleagues it was clear that someday we will need the gaming tools to create autonomous robots,” said Satikidis, who is based in Esslingen, Germany.

It was a view shared in 2019 by teammate Jan Seyler, head of advanced control and analytics at Festo, and by Dimitrios Lagamtzis, who worked with Festo at the time.

Satikidis and his colleagues had begun keeping close tabs on NVIDIA and grew increasingly curious about Isaac Sim, a robotics simulation application and synthetic data generation tool built on NVIDIA Omniverse, the 3D design and simulation platform.

Finally, watching from the sidelines of the field wasn’t enough.

“I set up a call with NVIDIA, and when Dieter Fox, senior director of robotics research at NVIDIA, came on the call, I just asked if they were willing to work with us,” he said.

And that’s when it really started.

Tackling Sim-to-Real Challenge

Today, Satikidis and a small team at Festo are developing AI for robotics automation. A player in hardware and pneumatics used in robotics, Festo is moving into AI-driven simulation, aiming at future Festo products.

Festo uses Isaac Sim to develop skills for its collaborative robots, or cobots. That requires building an awareness of their environments, human partners and tasks.

The lab is focused on narrowing the sim-to-real gap for a robotic arm, developing simulation that improves perception for real robots.

For building perception, its AI models are trained on synthetic data generated by Omniverse Replicator.

“Festo is working on its own cobots, which they plan to ship in 2023 in Europe,” said Satikidis.

Applying Cortex for Automation 

Festo uses Isaac Cortex, a tool in Isaac Sim, to simplify programming for cobot skills. Cortex is a framework for coordinating the Isaac tools into a cohesive robotic system to control virtual robots in Omniverse and physical robots in the real world.

“Our goal is to make programming task-aware robots as easy as programming gaming AIs,” said Nathan Ratliff, director of systems software at NVIDIA, in a recent GTC presentation.

Isaac Sim is a simulation suite that provides a diverse set of tools for robotics simulation. It enables sensor simulation, synthetic data generation, world representation, robot modeling and other capabilities.

The Omniverse platform and its Isaac Sim tools have been a game changer for Festo.

“This is incredible because you can manifest a video game to a real robot,” said Satikidis.

To learn more, check out the GTC session Isaac Cortex: A Decision Framework for Virtual and Physical Robots.

What Is Zero Trust?

For all its sophistication, the Internet age has brought on a digital plague of security breaches. The steady drumbeat of data and identity thefts spawned a new movement and a modern mantra that’s even been the subject of a U.S. presidential mandate — zero trust.

So, What Is Zero Trust?

Zero trust is a cybersecurity strategy for verifying every user, device, application and transaction in the belief that no user or process should be trusted.

That definition comes from the NSTAC report, a 56-page document on zero trust compiled in 2021 by the U.S. National Security Telecommunications Advisory Committee, a group that included dozens of security experts led by a former AT&T CEO.

In an interview, John Kindervag, the former Forrester Research analyst who created the term, noted that he defines it this way in his Zero Trust Dictionary: Zero trust is a strategic initiative that helps prevent data breaches by eliminating digital trust in a way that can be deployed using off-the-shelf technologies that will improve over time.

What Are the Basic Tenets of Zero Trust?

In his 2010 report that coined the term, Kindervag laid out three basic tenets of zero trust. Because all network traffic should be untrusted, he said users must:

  • verify and secure all resources,
  • limit and strictly enforce access control, and
  • inspect and log all network traffic.

That’s why zero trust is sometimes known by the motto, “Never Trust, Always Verify.”

How Do You Implement Zero Trust?

As the definitions suggest, zero trust is not a single technique or product, but a set of principles for a modern security policy.

In its seminal 2020 report, the U.S. National Institute for Standards and Technology (NIST) detailed guidelines for implementing zero trust.

Zero Trust architecture from NIST

Its general approach is described in the chart above. It uses a security information and event management (SIEM) system to collect data and continuous diagnostics and mitigation (CDM) to analyze it and respond to insights and events it uncovers.

It’s an example of a security plan also called a zero trust architecture (ZTA) that creates a more secure network called a zero trust environment.

But one size doesn’t fit all in zero trust. There’s no “single deployment plan for ZTA [because each] enterprise will have unique use cases and data assets,” the NIST report said.

Five Steps to Zero Trust

The job of deploying zero trust can be boiled down to five main steps.

It starts by defining a so-called protect surface: what users want to secure. A protect surface can span systems inside a company’s offices, the cloud and the edge.

From there, users create a map of the transactions that typically flow across their networks and a zero trust architecture to protect them. Then they establish security policies for the network.

Finally, they monitor network traffic to make sure transactions stay within the policies.

Five step process for zero trust

Both the NSTAC report (above) and Kindervag suggest these same steps to create a zero trust environment.

It’s important to note that zero trust is a journey not a destination. Consultants and government agencies recommend users adopt a zero trust maturity model to document an organization’s security improvements over time.

The Cybersecurity and Infrastructure Security Agency, part of the U.S. Department of Homeland Security, described one such model (see chart below) in a 2021 document.

Zero Trust maturity model from CISA

In practice, users in zero trust environments request access to each protected resource separately. They typically use multi-factor authentication (MFA) such as providing a password on a computer, then a code sent to a smartphone.
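
As a small illustration of one common second factor, the sketch below verifies a time-based one-time password (TOTP) with the pyotp library; the secret is generated on the spot purely for demonstration.

```python
import pyotp

secret = pyotp.random_base32()       # provisioned once to the user's authenticator app
totp = pyotp.TOTP(secret)

code = totp.now()                    # what the user's phone would display right now
print("code accepted:", totp.verify(code))   # the service checks it on the server side
```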

The NIST report lists ingredients for an algorithm (below) that determines whether or not a user gets access to a resource.

NIST algorithm for zero trust access

“Ideally, a trust algorithm should be contextual, but this may not always be possible,” given a company’s resources, it said.

Some argue the quest for an algorithm to measure trustworthiness is counter to the philosophy of zero trust. Others note that machine learning has much to offer here, capturing context across many events on a network to help make sound decisions on access.
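
As a toy sketch of what a contextual access decision can look like (illustrative only, not the NIST algorithm or any vendor’s product), the snippet below combines a few signals into a score and then grants access, asks for another factor or denies, depending on the sensitivity of the resource. The weights and thresholds are arbitrary.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    mfa_passed: bool
    device_compliant: bool
    known_location: bool
    resource_sensitivity: int   # 1 (low) to 3 (high)

def trust_score(req: AccessRequest) -> float:
    """Weight a few contextual signals into a single trust score."""
    return (0.5 * req.mfa_passed
            + 0.3 * req.device_compliant
            + 0.2 * req.known_location)

def decide(req: AccessRequest) -> str:
    """Grant, ask for another factor, or deny, depending on resource sensitivity."""
    required = {1: 0.4, 2: 0.6, 3: 0.8}[req.resource_sensitivity]
    score = trust_score(req)
    if score >= required:
        return "grant"
    if score >= required - 0.2:
        return "step-up"    # e.g., prompt for an additional authentication factor
    return "deny"

print(decide(AccessRequest(mfa_passed=True, device_compliant=True,
                           known_location=False, resource_sensitivity=3)))
```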

The Big Bang of Zero Trust

In May 2021, President Joe Biden released an executive order mandating zero trust for the government’s computing systems.

The order gave federal agencies 60 days to adopt zero trust architectures based on the NIST recommendations. It also called for a playbook on dealing with security breaches, a safety board to review major incidents — even a program to establish cybersecurity warning labels for some consumer products.

It was a big bang moment for zero trust that’s still echoing around the globe.

“The likely effect this had on advancing zero trust conversations within boardrooms and among information security teams cannot be overstated,” the NSTAC report said.

What’s the History of Zero Trust?

Around 2003, ideas that led to zero trust started bubbling up inside the U.S. Department of Defense, leading to a 2007 report. About the same time, an informal group of industry security experts called the Jericho Forum coined the term “de-perimeterisation.”

Kindervag crystalized the concept and gave it a name in his bombshell September 2010 report.

The industry’s focus on building a moat around organizations with firewalls and intrusion detection systems was wrongheaded, he argued. Bad actors and inscrutable data packets were already inside organizations, threats that demanded a radically new approach.

Security Goes Beyond Firewalls

From his early days installing firewalls, “I realized our trust model was a problem,” he said in an interview. “We took a human concept into the digital world, and it was just silly.”

At Forrester, he was tasked with finding out why cybersecurity wasn’t working. In 2008, he started using the term zero trust in talks describing his research.

After some early resistance, users started embracing the concept.

“Someone once told me zero trust would become my entire job. I didn’t believe him, but he was right,” said Kindervag, who, in various industry roles, has helped hundreds of organizations build zero trust environments.

An Expanding Zero Trust Ecosystem

Indeed, Gartner projects that by 2025 at least 70% of new remote access deployments will use what it calls zero trust network access (ZTNA), up from less than 10% at the end of 2021. (Gartner, Emerging Technologies: Adoption Growth Insights for Zero Trust Network Access, G00764424, April 2022)

That’s in part because the COVID lockdown accelerated corporate plans to boost security for remote workers. And many firewall vendors now include ZTNA capabilities in their products.

Market watchers estimate that at least 50 vendors, from Appgate to Zscaler, now offer security products aligned with zero trust concepts.

AI Automates Zero Trust

Users in some zero trust environments express frustration with repeated requests for multi-factor authentication. It’s a challenge that some experts see as an opportunity for automation with machine learning.

For example, Gartner suggests applying analytics in an approach it calls continuous adaptive trust. CAT (see chart below) can use contextual data — such as device identity, network identity and geolocation — as a kind of digital reality check to help authenticate users.

Gartner lays out zero trust security steps. Source: Gartner, Shift Focus From MFA to Continuous Adaptive Trust, G00745072, December 2021.

In fact, networks are full of data that AI can sift in real time to automatically enhance security.

“We do not collect, maintain and observe even half the network data we could, but there’s intelligence in that data that will form a holistic picture of a network’s security,” said Bartley Richardson, senior manager of AI infrastructure and cybersecurity engineering at NVIDIA.

Human operators can’t track all the data a network spawns or set policies for all possible events. But they can apply AI to scour data for suspicious activity, then respond fast.
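
As a small, generic illustration of that idea (not Morpheus itself), the sketch below fits an anomaly detector to simulated per-flow telemetry and flags a scan-like outlier. The features and numbers are invented.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Simulated "normal" per-flow telemetry: bytes/s, packets/s, distinct ports contacted.
normal = rng.normal(loc=[5e5, 400, 3], scale=[1e5, 50, 1], size=(2000, 3))
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

suspicious = np.array([[5e4, 4000, 250]])   # low volume, many ports: looks like a scan
print(detector.predict(suspicious))         # -1 flags the flow as anomalous
```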

“We want to give companies the tools to build and automate robust zero trust environments with defenses that live throughout the fabric of their data centers,” said Richardson, who leads development on NVIDIA Morpheus, an open AI cybersecurity framework.

NVIDIA Morpheus for zero trust

NVIDIA provides pretrained AI models for Morpheus, or users can choose a model from a third party or build one themselves.

“The backend engineering and pipeline work is hard, but we have expertise in that, and we can architect it for you,” he said.

It’s the kind of capability experts like Kindervag see as part of the future for zero trust.

“Manual response by security analysts is too difficult and ineffective,” he wrote in a 2014 report. “The maturity of systems is such that a valuable and reliable level of automation is now achievable.”

To learn more about AI and zero trust, read this blog or watch the video below.

Feel the Need … for Speed as ‘Top Goose’ Debuts In the NVIDIA Studio

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology accelerates creative workflows. 

You can be my wing-wing anytime.

This week In the NVIDIA Studio takes off with the debut of Top Goose, a short animation created with Omniverse Machinima and inspired by one of the greatest fictional pilots to ever grace the big screen.

The project was powered by PCs using the same breed of GPU that has produced every Best Visual Effects nominee at the Academy Awards for 14 years: multiple systems with NVIDIA RTX A6000 GPUs and an NVIDIA Studio laptop — the Razer Blade 15 with a GeForce RTX 3070 Laptop GPU.

The team took Top Goose from concept to completion in just two weeks. It likely would’ve taken at least twice as long without the remote collaboration NVIDIA Omniverse offers NVIDIA RTX and GeForce RTX users.


Built to showcase the #MadeinMachinima contest, the inspiration was simple. One of the NVIDIANs involved in the project, Dane Johnston, succinctly noted, “How do you get a midcentury legionnaire on an aircraft carrier and what would he be doing? He’d be getting chased by a goose, of course.”

Ready to Take Off

Johnston and fellow NVIDIANs Dave Tyner, Matthew Harwood and Terry Naas began the project by prepping models for the static assets in Autodesk 3ds Max. Several of the key models came from TurboSquid by Shutterstock, including the F14 fighter jet, aircraft carrier, goose and several props.

High-quality models such as the F14 fighter jet, courtesy of TurboSquid by Shutterstock, are available to all Omniverse users.

TurboSquid has a huge library of 3D models to begin creating within Omniverse. Simply drag and drop models into Omniverse and start collaborating with team members — regardless of the 3D application they’re using or where they’re physically located.

Tyner could easily integrate 3D models he already owned by simply dropping them into the scene from the new Asset Store browser in Omniverse.

Texture details were added within Omniverse in real time using Adobe Photoshop.

The team worked seamlessly between apps within Omniverse, in real time, including Adobe Photoshop.

From there, Adobe Photoshop was used to edit character uniforms and various props within the scene, including the Top Goose badge at the end of the cinematic.

Animators, Mount Up!

Once models were ready, animation could begin. The team used Reallusion’s iClone Character Creator Omniverse Connector to import characters to Machinima.

Omniverse-ready USD animations from Reallusion ActorCore were dragged and dropped into the Omniverse Machinima content browser for easy access.


The models and animations were brought into Machinima by Tyner, where he used the retargeting function to instantly apply the animations to different characters, including the top knight from Mount & Blade II: Bannerlord — one of the hundreds of assets included with Omniverse.

Tyner, a generalist 3D artist, supplemented the project by creating custom animations from motion capture using an Xsens suit that was exported to FBX. Using a series of Omniverse Connectors, he brought the FBX files into Autodesk 3ds Max and ran a quick script to create a rudimentary skin.

Then, Tyner sent the skinned character and animation into Autodesk Maya for USD skeleton export to Machinima, using the Autodesk Maya Connector. The animation was automatically retargeted onto the main character inside Machinima. Once the data was captured, the entire mocap workflow took only a few minutes using NVIDIA Studio tools.

If Tyner didn’t have a motion-capture suit, he could have used Machinima’s AI Pose Estimation — a tool within Omniverse that lets anyone with a camera capture movement and create a 3D animation.

Static objects were all animated in Machinima with the Curve Editor and Sequencer. These tools allowed the team to animate anything they wanted, exactly how they wanted. For instance, the team animated the fighter jet barrel rolls with gravity keyed on a y-axis — allowing gravity to be turned on and off.

This technique, coupled with NVIDIA PhysX, also allowed the team to animate the cockpit scene with the flying bread and apples simply by turning off the gravity. The objects in the scene all obeyed the laws of physics and flew naturally without any manual animation.

The team collaborates virtually to achieve realistic animations using the Omniverse platform.

Animating the mighty wings of the goose was no cheap trick. While some of the animations were integrated as part of the asset from TurboSquid, the team collaborated within Omniverse to animate the inverted scenes.

Tyner used Omniverse Cloud Simple Share Early Access to package and send the entire USD project to Johnston and Harwood, NVIDIA’s resident audiophile. Harwood added sounds like the fly-bys and goose honks. Johnston brought the Mount & Blade II: Bannerlord character to life by recording custom audio and animating the character’s face with Omniverse Audio2Face.

Traditional audio workflows usually involve multiple pieces of audio recordings sent piecemeal to the animators. With Simple Share, Tyner packaged and sent the entire USD project to Harwood, who was able to add audio directly to the file and return it with a single click.

Revvin’ Up the Engine

Working in Omniverse meant the team could make adjustments and see the changes, with full-quality resolution, in real time. This saved the team a massive amount of time by not having to wait for single shots to render out.

The 3D artist team works together to finish the scene in Omniverse Machinima and Audio2Face.

With individuals working hundreds of miles apart, the team leveraged Omniverse’s collaboration capabilities with Omniverse Nucleus. They were able to complete set dressing, layout and lighting adjustments in a single real-time jam session.


The new constraints system in Machinima was integral to the camera work. Tyner created the shaky camera that helps bring the feeling of being on an aircraft carrier by animating a shaking ball in Autodesk 3ds Max, bringing it in via its Omniverse Connector, and constraining a camera to it using OmniGraph.

Equally important are the new Curve Editor and Sequencer. They gave the team complete intuitive control of the creative process. They used Sequencer to quickly and easily choreograph animated characters, lights, constraints and cameras — including field of view and depth of field.

With all elements in place, all that was left was the final render — conveniently and quickly handled using the Omniverse RTX renderer and without any file transfers in Omniverse Nucleus.

Tyner noted, “This is the first major project that I’ve done where I was never blocked. With Omniverse, everything just worked and was really easy to use.”

Not only was it easy to use individually, but Omniverse, part of the NVIDIA Studio suite of software, let this team of artists easily collaborate while working in and out of various apps from multiple locations.

Top Prizes in the #MadeinMachinima Contest

Top Goose is a showcase for #MadeinMachinima. The contest, which is currently running and closes June 27, asks artists to build and animate a cinematic short story with the Omniverse Machinima app for a chance to win RTX-accelerated NVIDIA Studio laptops.

RTX creators everywhere can remix and animate characters from Squad, Mount & Blade II: Bannerlord, Shadow Warrior 3, Post Scriptum, Beyond the Wire and MechWarrior 5: Mercenaries using the Omniverse Machinima app.

Experiment with the AI-enabled tools like Audio2Face for instant facial animation from just an audio track; create intuitively with PhysX-powered tools to help you build as if building in reality; or add special effects with Blast for destruction and Flow for smoke and fire. You can use any third-party tools to help with your workflow, just assemble and render your final submission using Omniverse Machinima.

Learn more about NVIDIA Omniverse, including tips, tricks and more on the Omniverse YouTube channel. For additional support, explore the Omniverse forums or join the Discord server to chat with the community. Check out the Omniverse Twitter, Instagram and Medium page to stay up to date.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access a wide range of tutorials on the Studio YouTube channel and get updates in your inbox by subscribing to the Studio newsletter.
