3D Artist Turns Hobby Into Career, Using Omniverse to Turn Sketches Into Masterpieces

Memories of playing Pac-Man and Super Mario Bros. while growing up in Bogotá, Colombia’s sprawling capital, inspired Yenifer Macias’s award-winning submission to the #CreateYourRetroverse contest.

The contest asked NVIDIA Omniverse users to share scenes that visualize where their love for graphics began. For Macias, that passion goes back to childhood. She loved video games — but was all the more wowed by their art.

Now, using Omniverse — a physically accurate 3D design collaboration platform that runs on NVIDIA Studio products and supports the industry’s leading 3D content-creation tools — Macias accelerates her work as a 3D artist, making environments and props for video games, animation, films and advertisements.

In her #CreateYourRetroverse scene, she said, she aimed to “immerse viewers in the game world for a bit and remind them of childhood.”

The Artist’s Journey

Macias loved to doodle and always knew she’d study art.

A 3D animation course she took at a vocational institute in Bogotá confirmed her passion. She wanted to make 3D art all day, every day. Due to financial hardships, however, Macias didn’t have a home computer.

To practice her graphics skills, Macias took classes and completed internships — and with her first paycheck as a 3D artist, she bought a PC to continue her projects at home.

Over the past eight years, Macias has completed a wide range of work, including visual effects, architectural drawings and freelance animations. She uses design tools like Adobe Substance Painter, Autodesk Maya and ZBrush.

From Concept to Execution

For her #CreateYourRetroverse scene, Macias started with an initial sketch.

Based on references, she then created all of the props and objects for the environment from scratch in Autodesk Maya. Next, she brought her assets into the Omniverse Create app using an Omniverse Connector, where she fine-tuned the lighting, textures and rendering.

“I found Omniverse’s powerful render engine to be incredible — you can make changes to the lighting and materials, seeing the results in real time,” Macias said.

Despite being a first-time user of Omniverse, Macias said the Omniverse Create app for massive 3D world building helped her finish the project on a tight timeline and “was very user friendly.”

Looking forward, she plans to use Omniverse’s real-time collaboration feature to team up on projects with artists across the world.

With Omniverse, NVIDIA Studio creators like Macias can supercharge their artistic workflows with optimized RTX-accelerated hardware and software drivers, and state-of-the-art AI and simulation features.

Explore the NVIDIA Omniverse gallery, forums and Medium channel. Check out Omniverse tutorials on Twitter and YouTube, and join our Discord server and Twitch channel to chat with the community.

NVIDIA BlueField Sets New World Record for DPU Performance

Data centers need extremely fast storage access, and no DPU is faster than NVIDIA’s BlueField-2.

Recent testing by NVIDIA shows that a single BlueField-2 data processing unit reaches 41.5 million input/output operations per second (IOPS) — more than 4x the IOPS of any other DPU.

The BlueField-2 DPU delivered record-breaking performance using standard networking protocols and open-source software. Running NVMe over Fabrics (NVMe-oF), a common method of accessing networked storage media, over TCP, one of the primary internet protocols, it reached more than 5 million 4KB IOPS and between 7 million and more than 20 million 512B IOPS.

To accelerate AI, big data and high performance computing applications, BlueField provides even higher storage performance using the popular RoCE network transport option.

In testing, BlueField supercharged performance as both an initiator and target, using different types of storage software libraries and different workloads to simulate real-world storage configurations. BlueField also supports fast storage connectivity over InfiniBand, the preferred networking architecture for many HPC and AI applications.

Testing Methodology

The 41.5 million IOPS reached by BlueField is more than 4x the previous world record of 10 million IOPS, set using proprietary storage offerings. This performance was achieved by connecting two fast Hewlett Packard Enterprise ProLiant DL380 Gen 10 Plus servers: one as the application server (storage initiator) and one as the storage system (storage target).

Each server had two Intel “Ice Lake” Xeon Platinum 8380 CPUs clocked at 2.3GHz, for 80 physical cores (160 hyperthreads) per server, along with 512GB of DRAM, 120MB of L3 cache (60MB per socket) and a PCIe Gen4 bus.

To accelerate networking and NVMe-oF, each server was configured with two NVIDIA BlueField-2 P-series DPU cards, each with two 100Gb Ethernet network ports. This gave four network ports and 400Gb/s of wire bandwidth between initiator and target, connected back-to-back with NVIDIA LinkX 100GbE Direct-Attach Copper (DAC) passive cables. Both servers ran Red Hat Enterprise Linux (RHEL) version 8.3.

For the storage system software, both SPDK and the standard upstream Linux kernel target were tested, on both the default 4.18 kernel and one of the newest kernels, 5.15. Three storage initiators were benchmarked: SPDK, the standard kernel storage initiator and the FIO plugin for SPDK. Workload generation and measurement were handled by FIO and SPDK. I/O sizes of 4KB and 512B were tested, which are common medium and small storage I/O sizes, respectively.

The NVMe-oF storage protocol was tested with both TCP and RoCE at the network transport layer. Each configuration was tested with 100 percent read, 100 percent write and 50/50 read/write workloads with full bidirectional network utilization.
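For readers who want to approximate this setup, below is a minimal sketch of how one such workload point might be driven from Python. The device path, queue depth, job count and runtime are illustrative assumptions, not the exact parameters NVIDIA used:

```python
import subprocess

def run_fio(filename, block_size="4k", rw="randrw", rwmix_read=50,
            iodepth=64, numjobs=16, runtime=60):
    """Launch one FIO workload point, e.g. a 50/50 4KB random read/write mix."""
    cmd = [
        "fio",
        "--name=nvmeof-bench",
        f"--filename={filename}",   # NVMe-oF namespace exposed as a block device
        f"--rw={rw}",               # randread, randwrite or randrw
        f"--bs={block_size}",       # 4k or 512, matching the I/O sizes above
        f"--iodepth={iodepth}",
        f"--numjobs={numjobs}",
        f"--runtime={runtime}",
        "--time_based",
        "--direct=1",               # bypass the page cache
        "--ioengine=libaio",
        "--group_reporting",
    ]
    if rw == "randrw":
        cmd.append(f"--rwmixread={rwmix_read}")  # read share of the mixed workload
    subprocess.run(cmd, check=True)

# Hypothetical device path: the namespace connected from the storage target
# over TCP or RoCE appears on the initiator as a local NVMe block device.
run_fio("/dev/nvme1n1", block_size="4k", rw="randrw", rwmix_read=50)
```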

Our testing also revealed the following performance characteristics of the BlueField DPU:

  • Testing with smaller 512B I/O sizes resulted in higher IOPS but lower-than-line-rate throughput, while 4KB I/O sizes resulted in higher throughput but lower IOPS (see the arithmetic sketch after this list).
  • 100 percent read and 100 percent write workloads provided similar IOPS and throughput, while 50/50 mixed read/write workloads produced higher performance by using both directions of the network connection simultaneously.
  • Using SPDK resulted in higher performance than kernel-space software, but at the cost of higher server CPU utilization, which is expected behavior, since SPDK runs in user space with constant polling.
  • The newer Linux 5.15 kernel performed better than the 4.18 kernel due to storage improvements added regularly by the Linux community.
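The first finding follows from simple arithmetic relating IOPS, block size and wire bandwidth. Here is a back-of-the-envelope check using the figures reported above, counting raw data only and ignoring protocol overhead:

```python
def throughput_gbps(iops: float, block_size_bytes: int) -> float:
    """Convert an IOPS figure into raw data throughput in Gb/s."""
    return iops * block_size_bytes * 8 / 1e9

LINE_RATE_GBPS = 400  # four 100GbE ports between initiator and target

# ~20 million 512B IOPS: very high IOPS, yet well under line rate.
print(f"{throughput_gbps(20e6, 512):.0f} Gb/s")   # ~82 Gb/s
# ~5 million 4KB IOPS: fewer operations, but far more data moved.
print(f"{throughput_gbps(5e6, 4096):.0f} Gb/s")   # ~164 Gb/s
```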

Record-Setting DPU Storage Performance Delivers Speed With Security

In today’s storage landscape, the vast majority of cloud and enterprise deployments require fast, distributed, networked flash storage accessed over Ethernet or InfiniBand. Faster servers, GPUs, networks and storage media all tax server CPUs, and the best way to keep up is to deploy storage-capable DPUs.

The incredible storage performance demonstrated by the BlueField-2 DPU enables higher performance and better efficiency across the data center for both application servers and storage appliances.

On top of fast storage access, BlueField also supports hardware-accelerated encryption and decryption of both Ethernet storage traffic and the storage media itself, helping protect against data theft or exfiltration.

It offloads IPsec at up to 100Gb/s (data on the wire) and 256-bit AES-XTS at up to 200Gb/s (data at rest), reducing the risk of data theft if an adversary has tapped the storage network or if physical drives are stolen, resold or improperly disposed of.
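For context, AES-XTS is the standard mode for encrypting storage sector by sector, with a per-sector tweak so identical plaintexts encrypt differently in different sectors. BlueField does this in hardware at line rate; a minimal software sketch of the same operation, using Python’s cryptography package with a placeholder key and placeholder sector data, looks like this:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# AES-256-XTS takes a 512-bit key: two 256-bit AES keys, one for the data,
# one for computing the per-sector tweak.
key = os.urandom(64)

def encrypt_sector(plaintext: bytes, sector_number: int) -> bytes:
    """Encrypt one storage sector; the tweak binds ciphertext to its sector."""
    tweak = sector_number.to_bytes(16, "little")
    encryptor = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
    return encryptor.update(plaintext) + encryptor.finalize()

sector = os.urandom(4096)              # a 4KB placeholder sector
ciphertext = encrypt_sector(sector, sector_number=42)
assert len(ciphertext) == len(sector)  # XTS preserves length
```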

Customers and leading security software vendors are using BlueField’s recently updated NVIDIA DOCA framework to run cybersecurity applications — such as a distributed firewall or security groups with micro-segmentation — on the DPU to further improve application and network security for compute servers. This reduces the risk of inappropriate access or data modification on the storage attached to those servers.

Learn More About NVIDIA Networking Acceleration 

Learn more about NVIDIA DPUs, DOCA, RoCE, and how DPUs and DOCA enable accelerated networking and zero-trust security.

How Omniverse Wove a Real CEO — and His Toy Counterpart — Together With Stunning Demos at GTC 

It could only happen in NVIDIA Omniverse — the company’s virtual world simulation and collaboration platform for 3D workflows.

And it happened during an interview with a virtual toy model of NVIDIA’s CEO, Jensen Huang.

“What are the greatest …” one of Toy Jensen’s creators asked, stumbling, then stopping before completing his scripted question.

Unfazed, the tiny Toy Jensen paused for a moment, considering the answer carefully.

“The greatest are those,” Toy Jensen replied, “who are kind to others.”

Leading-edge computer graphics, physics simulation, a live CEO, and a supporting cast of AI-powered avatars came together to make NVIDIA’s GTC keynote — delivered using Omniverse — possible.

Along the way, a little soul got into the mix, too.

The AI-driven comments, added to the keynote as a stinger, provided an unexpected peek at the depth of Omniverse’s technology.

“Omniverse is the hub in which all the various research domains converge and align and work in unison,” says Kevin Margo, a member of NVIDIA’s creative team who put the presentation together. “Omniverse facilitates the convergence of all of them.”

Toy Jensen’s ad-lib capped a presentation that seamlessly mixed a real CEO with virtual and real environments as Huang took viewers on a tour of how NVIDIA technologies are weaving AI, graphics and robotics together with humans in real and virtual worlds.

Real CEO, Digital Kitchen

While the CEO viewers saw was all real, the environment around him morphed as he spoke to support the story he was telling.

Viewers saw Huang deliver a keynote that seemed to begin, like so many during the global COVID pandemic, in Huang’s kitchen.

Then, with a flourish, Huang’s kitchen — modeled down to the screws holding its cabinets together — slid away from sight as Huang strolled toward a virtual recreation of the gleaming lobby of Endeavor, NVIDIA’s headquarters.

“One of our goals is to find a way to elevate our keynote events,” Margo says. “We’re always looking for those special moments when we can do something novel and fantastical, and that showcase NVIDIA’s latest technological innovations.”

It was the start of a visual journey that took Huang from that lobby to Shannon’s, a gathering spot inside Endeavor, through a holodeck and a data center, with stops inside a real robotics lab and outside Endeavor itself.

Virtual environments such as Huang’s kitchen were created by a team using familiar Omniverse-supported tools such as Autodesk Maya, Autodesk 3ds Max and Adobe Substance Painter.

Omniverse served to connect them all in real time — so each team member could see changes made by colleagues using different tools simultaneously, accelerating their work.

“That was critical,” Margo says.

The virtual and the real came together quickly once live filming began.

A small on-site video team recorded Huang’s speech in just four days, starting October 30, in a spare pair of conference rooms at NVIDIA’s Silicon Valley headquarters.

Omniverse allowed NVIDIA’s team to project the dynamic virtual environments their colleagues had created on a screen behind Huang.

As a result, the light spill onto Huang changed as the scene around him changed, better integrating him into the virtual environment.

And as Huang moved through the scene, or as the camera shifted, the environment changed around him.

“As the camera moves, the perspective and parallax of the world on the video wall responds accordingly,” Margo says.

And because Huang could see the environment projected on the screens around him, he was better able to navigate each scene.

At the Speed of Omniverse

All of this accelerated the work of NVIDIA’s production team, which had most of what it needed in camera after each shot, rather than adding elaborate digital sets in post-production.

As a result, the video team quickly created a presentation seamlessly blending a real CEO with virtual and real-world settings.

However, Omniverse was more than just a way to speed collaboration between creatives working with real and digital elements hustling to hit a deadline. It also served as the platform that knit the string of demos featured in the keynote together.

To help developers create intelligent, interactive agents with Omniverse that can see, speak, converse on a wide range of subjects and understand naturally spoken intent, Huang announced Omniverse Avatar.

Omniverse brings together a deep stack of technologies — from ray tracing to recommender systems — that were mixed and matched throughout the keynote to drive a series of stunning demos.

In a demo that swiftly made headlines, Huang showed how “Project Tokkio” for Omniverse Avatar connects Metropolis computer vision, Riva speech AI, avatar animation and graphics into a real-time conversational AI robot — the Toy Jensen Omniverse Avatar.

The conversation between three of NVIDIA’s engineers and a tiny toy model of Huang, demonstrating expert, natural Q&A, was more than just a technological tour de force.

It showed how photorealistic modeling of Toy Jensen and his environment — right down to the glint on Toy Jensen’s glasses as he moved his head — and NVIDIA’s Riva speech synthesis technology powered by the Megatron 530B large language model could support natural, fluid conversations.

To create the demo, NVIDIA’s creative team built the digital model in Maya and Substance, and Omniverse did the rest.

“None of it was manual — you just load up the animation assets and talk to it,” Huang said.

Huang also showed a second demo of Project Tokkio, a customer-service avatar in a restaurant kiosk that was able to see, converse with and understand two customers.

Rather than Megatron, however, this avatar relied on a model that integrated the restaurant’s menu, allowing it to smoothly guide customers through their options.

That same technology stack can help humans talk to one another, too. Huang showed Project Maxine’s ability to add state-of-the-art video and audio features to virtual collaboration and video content creation applications.

A demo showed a woman speaking English on a video call in a noisy cafe, yet she could be heard clearly without background noise. As she spoke, her words were transcribed and translated in real time into French, German and Spanish.

Thanks to Omniverse, the translations were spoken by an avatar able to engage in the conversation using her own voice and intonation.

These demos were all possible because Omniverse, through Omniverse Avatar, unites advanced speech AI, computer vision, natural language understanding, recommendation engines, facial animation and graphics technologies.

Omniverse Avatar’s speech recognition is based on NVIDIA Riva, a software development kit that recognizes speech across multiple languages. Riva is also used to generate human-like speech responses using text-to-speech capabilities.

Omniverse Avatar’s natural language understanding is based on the Megatron 530B large language model that can recognize, understand and generate human language.

Megatron 530B is a pretrained model that can, with little or no additional training, complete sentences and answer questions spanning a broad range of subjects. It can summarize long, complex stories, translate into other languages, and handle many domains it was never specifically trained for.

Omniverse Avatar’s recommendation engine is provided by NVIDIA Merlin, a framework that allows businesses to build deep learning recommender systems capable of handling large amounts of data to make smarter suggestions.

Its perception capabilities are enabled by NVIDIA Metropolis, a computer vision framework for video analytics.

And its avatar animation is powered by NVIDIA Video2Face and Audio2Face, 2D and 3D AI-driven facial animation and rendering technologies.

All of these technologies are composed into an application and processed in real-time using the NVIDIA Unified Compute Framework.

Packaged as scalable, customizable microservices, the skills can be securely deployed, managed and orchestrated across multiple locations by NVIDIA Fleet Command.
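NVIDIA hasn’t published the keynote’s internal wiring, but the composition pattern described above, independent skills chained into one conversational loop, can be sketched abstractly. The functions below are toy stand-ins for the Riva, Megatron and Audio2Face services, not their real APIs:

```python
from dataclasses import dataclass, field

# Each "skill" enriches a payload dict, standing in for one deployed
# microservice (speech recognition, language model, speech synthesis, animation).

def asr(payload: dict) -> dict:   # stand-in for Riva speech recognition
    payload["text"] = f"<transcript of {len(payload['audio'])} audio bytes>"
    return payload

def nlu(payload: dict) -> dict:   # stand-in for a Megatron-style language model
    payload["reply_text"] = f"<response to: {payload['text']}>"
    return payload

def tts(payload: dict) -> dict:   # stand-in for Riva text-to-speech
    payload["reply_audio"] = payload["reply_text"].encode()
    return payload

def face(payload: dict) -> dict:  # stand-in for Audio2Face facial animation
    payload["animation"] = f"<viseme curves for {len(payload['reply_audio'])} bytes>"
    return payload

@dataclass
class AvatarPipeline:
    """Composes independently deployed skills into one conversational loop."""
    skills: list = field(default_factory=lambda: [asr, nlu, tts, face])

    def respond(self, audio_in: bytes) -> dict:
        payload = {"audio": audio_in}
        for skill in self.skills:
            payload = skill(payload)  # each service adds its output to the payload
        return payload                # synthesized audio plus animation data

print(AvatarPipeline().respond(b"\x00" * 16000))  # one second of fake audio
```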

Using them, Huang was able to tell a sweeping story about how NVIDIA Omniverse is changing multitrillion-dollar industries.

All of these demos were built on Omniverse. And thanks to the platform, everything came together: a real CEO, real and virtual environments, and a string of demos created within Omniverse itself.

Since its launch late last year, Omniverse has been downloaded over 70,000 times by designers at 500 companies. Omniverse Enterprise is now available starting at $9,000 a year.

Living in the Future: NIO ET5 Sedan Designed for the Autonomous Era With NVIDIA DRIVE Orin

Meet the electric vehicle that’s truly future-proof.

Electric automaker NIO took the wraps off its fifth mass-production model, the ET5, during NIO Day 2021 last week.

The mid-size sedan borrows from its luxury and performance predecessors to deliver an intelligent vehicle that’s as agile as it is comfortable. Its AI features are powered by the NIO Adam supercomputer, built on four NVIDIA DRIVE Orin systems-on-a-chip (SoC).

In addition to centralized compute, the ET5 incorporates high-performance sensors into its sleek design, equipping it with the hardware necessary for advanced AI-assisted driving features.

The sedan also embodies the NIO concept of vehicles serving as a second living room, with a luxurious interior and immersive augmented reality digital cockpit.

These cutting-edge features are built to go the distance: the ET5 achieves more than 620 miles of range with the 150 kWh Ultralong Range Battery and accelerates from zero to 60 mph in about four seconds.

A Truly Intelligent Creation

The ET5 and its older sibling, the ET7 full-size sedan, rely on a centralized, high-performance compute architecture to power AI features and continuously receive upgrades over the air.

The NIO Adam supercomputer is built on four DRIVE Orin SoCs, making it one of the most powerful platforms to run in a vehicle, achieving a total of more than 1,000 trillion operations per second (TOPS).

Orin is the world’s highest-performance, most-advanced AV and robotics processor. It delivers up to 254 TOPS to handle the large number of applications and deep neural networks that run simultaneously in autonomous vehicles and robots while achieving systematic safety standards such as ISO 26262 ASIL-D.

Adam integrates the redundancy and diversity necessary for safe autonomous operation by using multiple SoCs.

The first two SoCs process the eight gigabytes of data produced by the vehicle’s sensor set every second.

The third Orin serves as a backup to ensure the system can operate safely in any situation, while the fourth enables local training, improving the vehicle with fleet learning and personalizing the driving experience based on user preferences.

With high-performance computing at its core, Adam is a major achievement in the creation of automotive intelligence and autonomous driving.

Going Global

After beginning deliveries in Norway earlier this year, NIO will expand worldwide in 2022.

The ET7, the first vehicle built on the DRIVE Orin-powered Adam supercomputer, will become available in March, with the ET5 following in September.

Next year, NIO vehicles will begin deliveries in the Netherlands, Sweden and Denmark.

By 2025, NIO vehicles will be in 25 countries and regions worldwide, bringing one of the most advanced AI platforms to even more customers.

With the ET5, NIO is showing no signs of slowing as it charges into the future with sleek, intelligent EVs powered by NVIDIA DRIVE.

Detect That Defect: Mariner Speeds Up Manufacturing Workflows With AI-Based Visual Inspection

Imagine picking out a brand new car — only to find a chip in the paint, rip in the seat fabric or mark in the glass.

AI can help prevent such moments of disappointment for manufacturers and potential buyers.

Mariner, an NVIDIA Metropolis partner based in Charlotte, North Carolina, offers an AI-enabled video analytics system to help manufacturers improve surface defect detection. For over 20 years, the company has worked to provide its customers with deep learning-based insights to optimize their manufacturing processes.

The vision AI platform, called Spyglass Visual Inspection, or SVI, helps manufacturers detect the defects they couldn’t see before. It’s built on the NVIDIA Metropolis intelligent video analytics framework and powered by NVIDIA GPUs.

SVI is installed in factories and used by customers like Sage Automotive Interiors to enhance defect detection in cases where traditional, rules-based machine vision systems often flag false positives.

Reducing Waste with AI

According to David Dewhirst, vice president of marketing at Mariner, producing defective products can consume up to 40 percent of an automotive manufacturer’s annual revenue.

Traditional machine vision systems installed in factories have difficulty discerning between true defects — like a stain in fabric or a chip in glass — and false positives, like lint or a water droplet that can be easily wiped away.

SVI, however, uses AI software and NVIDIA hardware connected to camera systems to inspect parts on production lines in real time, identify potential issues and determine whether they are true material defects — in just a millisecond.

This speeds up factory lines, removing the need to slow or stop the workflow so a person can inspect each potential defect. SVI delivers a 20 percent increase in line speed and a 30x reduction in incorrect defect classifications compared with traditional machine vision systems.

The platform can be integrated with a factory’s existing machine vision system, giving it a boost with AI-based analysis and processing. It offers a factory an average annual savings of $2 million, Dewhirst said.

SVI uses a deep learning model that analyzes images, identifies defects and labels them by type — all tasks that require powerful graphics processors.
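Mariner hasn’t published SVI’s model details, but the classification step described here typically looks like the following sketch, assuming a PyTorch convolutional classifier; the label set and untrained weights are placeholders:

```python
import torch
import torchvision.transforms as T
from torchvision.models import resnet18

# Placeholder label set: true defects plus benign look-alikes the AI must reject.
CLASSES = ["no_defect", "stain", "chip", "tear", "lint", "water_droplet"]

model = resnet18(num_classes=len(CLASSES))  # in production, load trained weights
model.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def classify(frame) -> str:
    """Label one camera frame; on a GPU this runs in milliseconds."""
    batch = preprocess(frame).unsqueeze(0)        # PIL image -> 1x3x224x224 tensor
    probs = model(batch).softmax(dim=1).squeeze(0)
    return CLASSES[int(probs.argmax())]

# Example usage with a synthetic frame:
from PIL import Image
print(classify(Image.new("RGB", (640, 480))))
```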

“NVIDIA GPUs guarantee that SVI can handle almost any pixel combination and processing speed, which is why it was our choice of hardware on which to standardize our platform,” Dewhirst said.

Mariner is on track to revolutionize the defect detection process by expanding the use of its platform, which can identify defects in metal, plastic or virtually any other surface type.

Learn more about how the Spyglass system works.
