Creating Faces of the Future: Build AI Avatars With NVIDIA Omniverse ACE

Developers and teams building avatars and virtual assistants can now register to join the early-access program for NVIDIA Omniverse Avatar Cloud Engine (ACE), a suite of cloud-native AI microservices that make it easier to build and deploy intelligent virtual assistants and digital humans at scale.

Omniverse ACE eases avatar development, delivering the AI building blocks necessary to add intelligence and animation to any avatar, built on virtually any engine and deployed on any cloud. These AI assistants can be designed for organizations across industries, enabling them to enhance existing workflows and unlock new business opportunities.

ACE is one of several generative AI applications that will help creators accelerate the development of 3D worlds and the metaverse. Members who join the program will receive access to the prerelease versions of NVIDIA’s AI microservices, as well as the tooling and documentation needed to develop cloud-native AI workflows for interactive avatar applications.

Bring Interactive AI Avatars to Life With Omniverse ACE

Methods for developing avatars often require expertise, specialized equipment and manually intensive workflows. To ease avatar creation, Omniverse ACE enables seamless integration of NVIDIA’s AI technologies — including pre-built models, toolsets and domain-specific reference applications — into avatar applications built on most engines and deployed on public or private clouds.

Since it was unveiled in September, Omniverse ACE has been shared with select partners to capture early feedback. Now, NVIDIA is looking for partners who will provide feedback on the microservices, collaborate to improve the product, and push the limits of what’s possible with lifelike, interactive digital humans.

The early-access program provides prerelease versions of the ACE animation AI and conversational AI microservices, including:

  • 3D animation AI microservice for third-party avatars, which uses Omniverse Audio2Face generative AI to bring characters in Unreal Engine and other rendering tools to life by creating realistic facial animation from just an audio file.
  • 2D animation AI microservice, called Live Portrait, which enables easy animation of 2D portraits or stylized human faces using live video feeds.
  • Text-to-speech microservice, which uses NVIDIA Riva TTS to synthesize natural-sounding speech from raw transcripts without any additional information, such as patterns or rhythms of speech. (A sketch of how these services might be chained follows this list.)
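
To make that flow concrete, here is a minimal sketch of how a developer might chain two such microservices: synthesizing speech from a transcript, then requesting facial animation for the resulting audio. The host name, routes and payload fields below are placeholders invented for illustration, not the actual ACE interfaces; early-access members receive the real API definitions and reference applications.

```python
# Hypothetical sketch of an avatar pipeline built on ACE-style microservices.
# Endpoint paths, payload fields and the host name are placeholders, not the
# actual ACE API; they simply illustrate the text -> speech -> animation flow.
import requests

ACE_HOST = "https://ace.example.internal"  # placeholder for a cloud deployment


def synthesize_speech(text: str) -> bytes:
    """Ask a Riva-based text-to-speech service for audio of the given transcript."""
    resp = requests.post(
        f"{ACE_HOST}/tts/synthesize",  # hypothetical route
        json={"text": text, "voice": "English-US.Female-1", "sample_rate_hz": 44100},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.content  # raw audio bytes


def animate_face(audio: bytes) -> dict:
    """Send audio to an Audio2Face-style service and get back facial animation
    data (for example, per-frame blendshape weights) for a 3D character."""
    resp = requests.post(
        f"{ACE_HOST}/animation/audio2face",  # hypothetical route
        data=audio,
        headers={"Content-Type": "audio/wav"},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    audio = synthesize_speech("Welcome! How can I help you today?")
    animation = animate_face(audio)
    # The animation data would then drive a character in Unreal Engine or
    # another rendering tool.
    print(f"Received {len(animation.get('frames', []))} animation frames")
```

In practice the animation data would more likely stream to the rendering engine in real time rather than arrive as one response, but the overall pattern of audio in, animation out stays the same.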

Program members will also get access to tooling, sample reference applications and supporting resources to help them get started.

Avatars Make Their Mark Across Industries

Omniverse ACE can help teams build interactive digital humans that elevate experiences across industries, providing:

  • Easy animation of characters, so users can bring them to life with minimal expertise.
  • The ability to deploy on cloud, which means avatars will be usable virtually anywhere, such as a quick-service restaurant kiosk, a tablet or a virtual-reality headset.
  • A plug-and-play suite, built on NVIDIA Unified Compute Framework (UCF), which enables interoperability between NVIDIA AI and other solutions, ensuring state-of-the-art AI that fits each use case.

Partners such as Ready Player Me and Epic Games have experienced how Omniverse ACE can enhance workflows for AI avatars.

The Omniverse ACE animation AI microservice supports 3D characters from Ready Player Me, a platform for building cross-game avatars.

“Digital avatars are becoming a significant part of our daily lives. People are using avatars in games, virtual events and social apps, and even as a way to enter the metaverse,” said Timmu Tõke, CEO and co-founder of Ready Player Me. “We spent seven years building the perfect avatar system, making it easy for developers to integrate in their apps and games and for users to create one avatar to explore various worlds — with NVIDIA Omniverse ACE, teams can now more easily bring these characters to life.”

Epic Games’ advanced MetaHuman technology transformed the creation of realistic, high-fidelity digital humans. Omniverse ACE, combined with the MetaHuman framework, will make it even easier for users to design and deploy engaging 3D avatars.

Digital humans don’t just have to be conversational. They can be singers, as well — just like the AI avatar Toy Jensen. NVIDIA’s creative team quickly created a holiday performance by TJ, using Omniverse ACE to extract the voice of a singer and turn it into TJ’s voice. This enabled the avatar to sing at the same pitch and with the same rhythm as the original artist.

Many creators are venturing into VTubing, a new form of livestreaming in which users embody a 2D avatar and interact with viewers. With Omniverse ACE, creators can move their avatars from 2D animation, including photos and stylized faces, into 3D. Users can render the avatars from the cloud and animate the characters from anywhere.

Additionally, the NVIDIA Tokkio reference application is expanding, with early partners building cloud-native customer service avatars for telco, banking and other industries.

Join the Early-Access Program

Early access to Omniverse ACE is available to developers and teams building avatars and virtual assistants.

Watch the NVIDIA special address at CES on demand. Learn more about NVIDIA Omniverse ACE and register to join the early-access program.

Symbol Guided Hindsight Priors for Reward Learning from Human Preferences

This paper was accepted at the “Human in the Loop Learning Workshop” at NeurIPS 2022.
Specifying reward functions for reinforcement learning is a challenging task, which preference-based learning methods bypass by learning instead from preference labels on trajectory queries. These methods, however, still require large numbers of preference labels and often achieve poor reward recovery. We present the PRIOR framework, which alleviates both the impractical number of queries to humans and the poor reward recovery by computing priors… (Apple Machine Learning Research)

RangeAugment: Efficient Online Augmentation with Range Learning

State-of-the-art automatic augmentation methods (e.g., AutoAugment and RandAugment) for visual recognition tasks diversify training data using a large set of augmentation operations. The range of magnitudes of many augmentation operations (e.g., brightness and contrast) is continuous. Therefore, to make search computationally tractable, these methods use fixed, manually defined magnitude ranges for each operation, which may lead to sub-optimal policies. To answer the open question of the importance of magnitude ranges for each augmentation operation, we introduce RangeAugment, which allows us… (Apple Machine Learning Research)

New Year, New Career: 5 Leaders Share Tips for Building a Career in AI

Those looking to join the ranks of AI trailblazers or chart a new course in their careers need look no further.

At NVIDIA’s latest GTC conference, industry leaders in a panel called “5 Paths to a Career in AI” shared tips and insights on how to make a mark in this rapidly evolving field.

Representing diverse sectors such as healthcare, automotive, augmented and virtual reality, climate and energy, and manufacturing, these experts offered valuable advice for all seeking to build a career in AI.

Here are five key takeaways from the discussion:

  1. Be curious and constantly learn: “I think in order to break into this field, you’ve got to be curious. It’s so important to always be learning [and] always be asking questions,” emphasized Chelsea Sumner, healthcare AI startups lead for North and Latin America at NVIDIA. “If we’re not asking questions, and we’re not learning, we’re not growing.”
  2. Tell your story effectively to different audiences: “Your ability to tell your story to a variety of different audiences is essential,” noted Justin Taylor, vice president of AI at Lockheed Martin. “So for them to understand what you’re doing [with AI], how you’re doing it, why you’re doing it is essential.”
  3. Embrace challenges and be resilient: “When you have all of these different experiences, you understand that it’s not always going to be perfect,” advised Laura Leal-Taixé, professor at the Technical University of Munich and principal scientist at Argo AI. “And when things aren’t always perfect, you’re able to have competence because [you know that you] did that really hard thing and [were] able to get through it.”
  4. Understand the purpose behind your work: “Understand the baseline, how do you collect the data baseline — understand the physical, the bottom line. What’s the purpose, what do you want to do?” advised Jay Lee, Ohio eminent scholar of the University of Cincinnati and board member of Foxconn.
  5. Collaborate and seek support from others: “It’s so important for resiliency to find people across different domains and really tap into that,” said Carrie Gotch, creator and content strategy for 3D/AR at Adobe. “No one does it alone, right? You’re always part of a system, part of a team of people.”

The panelists stressed the importance of staying up to date and curious, gaining practical experience, collaborating with others and taking risks when building a career in AI.

Start your journey to an AI career by signing up for NVIDIA GTC, running in March, where you can network, get trained on the latest tools and hear from thought leaders about the impact of AI in various industries.

It could be the first step toward a rewarding AI career that takes you into 2023 and beyond.
