The Ultimate Creative Machines: NVIDIA Studio Laptops Now with GeForce RTX 30 Series Laptop GPUs

The latest NVIDIA Studio laptops, powered by new NVIDIA GeForce RTX 30 Series Laptop GPUs, are empowering the next generation of creativity. And they bring a host of updates that change how fast creators can work.

New models come equipped with up to 16GB of video memory, pixel-accurate displays with 1440p and 4K options, and GPU acceleration for ray tracing, AI and video encoding that allow artists to create in record time.

NVIDIA Studio laptops with new RTX 30 Series Laptop GPUs offer improved performance.

Built to Create, Anywhere

GeForce RTX 30 Series Laptop GPUs accelerate rendering performance by up to 2x and introduce third-generation Max-Q technologies. Max-Q helps manufacturers develop thin and light laptops that maintain great performance, enabling creators to conceptualize incredible pieces of content using hundreds of GPU-accelerated apps.

Creator workflows require multiple PC components — the GPU, GPU memory and CPU — to execute rapidly. New, AI-powered Dynamic Boost 2.0 intelligently shifts power among these components, improving performance so laptops run faster and longer.

NVIDIA Studio laptops are designed to provide the best performance and acoustics, but sometimes you need your laptop to be even quieter while giving a presentation or working in a public place. WhisperMode 2.0 uses AI-powered algorithms to dynamically manage the CPU, GPU and fan speeds to deliver great acoustics and the best possible performance.

Creators need to have pixel-accurate displays to preview with the maximum fidelity. NVIDIA Studio laptops use factory-calibrated displays with wide color gamuts and up to 4K resolution. And now, many NVIDIA Studio laptops offer 1440p configurations for the perfect balance between pixel density and performance.

The New 30 Series Studio Laptops

New Studio laptops with GeForce RTX 3060, 3070 and 3080 Laptop GPUs will start rolling out later this month. The first models announced include:

  • Asus ZenBook Pro Duo UX582 — featuring GeForce RTX 3070 graphics. Artists can create their best work thanks to next-gen performance and a stunning dual display, which gives them new ways of using a laptop.

  • MSI Creator 15 — offering GeForce RTX 3060, 3070 and 3080 variants. MSI’s built-for-creators laptop delivers blazing-fast performance with hushed tones as it leverages both Dynamic Boost 2.0 and WhisperMode 2.0.

  • Gigabyte Aero 15 and Aero 17 — with multiple screen sizes to choose from and configurable with GeForce RTX 3060, 3070 or 3080 graphics. The Aero 15 features Dynamic Boost 2.0, WhisperMode 2.0 and Advanced Optimus (on the 1080p display) to deliver peak performance while on the go. The larger format Aero 17 also features Dynamic Boost 2.0.

  • Razer Blade 15 and Blade Pro 17 — offer stunning 1440p and 4K display options in an elegant, high-quality chassis. The Blade 15 Base model is configurable with a GeForce RTX 3070 and a 1440p display; the Razer Blade Advanced model with GeForce RTX 3070 or 3080 and 1440p or 4K displays; the Blade Pro 17 with a GeForce RTX 3080 and 4K 120Hz display.

Accelerating Creativity

GeForce RTX 30 Series Studio laptops give video editors mobile access to edit up to 8K HDR RAW footage, use AI to simplify workflows, and reduce encode times by up to 75 percent with the NVIDIA Encoder. When connected to an external G-SYNC display — an NVIDIA technology that synchronizes the display to content — editors can preview their content at the exact frame rate it will be exported and played back.

Artists can get up to 16GB of graphics memory to work with huge assets and across multiple apps at the same time.

AI-Powered Workflows

NVIDIA Studio laptops take advantage of industry-leading creative apps to empower creators with AI.

Adobe Photoshop recently introduced Neural Filters, which use AI to quickly make complex edits to photographs.

Blackmagic Design’s DaVinci Resolve 17 implements a new Magic Mask feature that uses AI to speed up mask creation and tracking.

The free NVIDIA Broadcast app turns any room into a home studio by enhancing your microphone, speakers and webcam with AI-powered features such as virtual background effects, audio noise removal and webcam auto frame.

NVIDIA OptiX AI denoising in Blender cleans up noise on the fly, quickly resolving 3D images.

And NVIDIA Omniverse Audio2Face uses AI to animate unrigged 3D characters, based solely on an audio source.

Studio Drivers

NVIDIA Studio laptops are supported with Studio Drivers that are built specifically to meet creators’ needs for both performance and reliability. They’re tested extensively against top creative apps and workflows.

The latest Studio Driver can be downloaded through GeForce Experience or from our driver download page.

New GeForce RTX 30 Series NVIDIA Studio laptops from ASUS, Gigabyte, MSI and Razer will begin rolling out later this month.

Read more about the GeForce RTX 30 Series laptop announcement, including additional options for gamers and creators starting at $999. Visit the GeForce Laptop website for even more information.

Learn more about NVIDIA Studio hardware and software for creators on the NVIDIA Studio website.

And stay up to date on the latest apps through the NVIDIA Studio YouTube channel, which features tutorials, tips and tricks by industry-leading artists.


AWS Announces the global expansion of AWS CCI Solutions

We’re excited to announce the global availability of AWS Contact Center Intelligence (AWS CCI) solutions powered by AWS AI Services and made available through the AWS Partner Network. AWS CCI solutions enable you to leverage AWS machine learning (ML) capabilities with your current contact center provider to gain greater efficiencies and deliver increasingly tailored customer experiences —with no ML expertise required.

AWS CCI solutions use a combination of AWS AI-powered services for text-to-speech, translation, intelligent search, conversational AI, transcription, and language comprehension capabilities. We’re delighted to announce the addition of AWS Technology Partners: Salesforce, Avaya, Talkdesk, 8×8, Clarabridge, Clevy, XappAI, and Voiceworx. We are also adding new AWS Consulting Partners: Inawisdom, Cation Consulting, HCL Technologies, Wipro, First Derivatives, Servion, and Lucy in the Cloud/Micropole for customers who require a custom solution or seek additional assistance with AWS CCI. These new partners provide customers across the globe more opportunities to benefit from AWS ML-powered contact center intelligence solutions to enhance self-service, analyze calls in real time to assist agents, and learn from all contact center interactions with post-call analytics.

Around the world, the volume of interactions in contact centers continues to increase. Companies see multiple opportunities to leverage AI technology to improve the customer experience. This can include 24/7 self-serve virtual agents that can provide timely and accurate answers to customer queries, call analytics and agent assist to improve agent productivity, or call analytics to generate further improvements in their operations. However, piecing together the various technologies to build an ML-driven intelligent contact center unique to the goals and needs of each business can be a significant undertaking. You want the benefits that intelligent contact center technologies bring, but the resources, time and cost to implement are often too high to overcome. AWS CCI provides a simple and fast route to deploy AWS ML solutions no matter which contact center provider you use.

AWS CCI customer success stories

Multiple customers already benefit from an improved customer experience and reduced operational costs as a result of using AWS CCI solutions through AWS Partners. Here are some examples of AWS CCI customer stories.

Maximus is a leading pure-play provider in the administration of government health and human services programs, and is the largest provider of contact center services to the government. Tom Romeo, the General Manager at Maximus Federal, says, “At Maximus, we are constantly looking for new ways to innovate and improve the Citizen Journey and contact center experience. With AWS Partner SuccessKPI, we were able to add AWS CCI into our Genesys Cloud environment in a matter of hours and deliver a 360-degree view of the citizen experience. This program allowed us to deliver increased capacity, automated quality review, and agent compliance and performance improvements for government agencies.”

Magellan Health is a large managed health care company focused on special populations, complete pharmacy benefits and other specialty areas. Brian Lichtle, the Senior Director of Software Engineering at Magellan Rx, says, “We chose Amazon Kendra, a service within AWS CCI, to build a secure and scalable agent assist application. This helped call center agents, and the customers they serve, quickly uncover the information they need. Since implementing CCI and Amazon Kendra, early results show an average reduction in call times of about 9-15 seconds, which saves more than 4.4k hours on over 2.2 million calls per calendar year.”

Cation Consulting is an AWS Consulting Partner focused on delivering robust, conversational AI experiences to customers. Alan Kiernan, the co-founder and CTO at Cation Consulting, says, “At Cation Consulting, we provide customers with conversational AI and self-service experiences that allow them to significantly reduce customer support costs while improving the customer experience. AWS Contact Center Intelligence enables us to move quickly and scale seamlessly with customers such as Ryanair, the largest airline in Europe. The Ryanair chatbot has handled millions of customer enquiries per year as a trusted extension of Ryanair’s customer care team. We are excited to leverage Amazon Lex’s recent expansion into European languages and design virtual agents who can resolve customer issues quickly and improve customer service ratings.”

New AWS CCI language support and partner additions

In addition to our new partners, AWS CCI continues to expand its global capabilities with new language support. AWS CCI has three pre-configured solutions available through participating APN Partners, each focused on a stage of the contact center workflow: Self-Service, Live Call Analytics and Agent Assist, and Post-Call Analytics.

The Self-Service solution uses ML-driven chatbots and Interactive Voice Response (IVR) to address and deflect the most common tasks and queries so that the contact center workforce can focus on resolving interactions that need a human touch. It combines the conversational interface of Amazon Lex with the text-to-speech voices of Amazon Polly to create a dynamic virtual agent in multiple languages, such as French, German, Italian, and Spanish. Adding Amazon Kendra can boost the ability of these virtual agents to answer questions by finding the best answers from internal knowledge bases.

The Live Call Analytics & Agent Assist and Post-Call Analytics solutions use Amazon Transcribe to perform real-time or post-call speech transcription, with Amazon Comprehend applying natural language processing (NLP) to automatically analyze the interaction, detect call sentiment, and identify key words and phrases in the conversation to increase agent productivity. These key words can then be used by the intelligent search capabilities of Amazon Kendra to help agents find timely and relevant information to resolve live call issues more quickly. Transcribing live calls is now available in German, Italian, Japanese, Korean, and Portuguese. Amazon Translate can also be used to translate calls into an agent’s preferred language, and supports a total of 71 languages and variants.
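To make this concrete, the following is a minimal sketch of a post-call analytics flow combining Amazon Transcribe and Amazon Comprehend with boto3. It is not one of the pre-configured partner solutions, and the bucket, key, and job names are illustrative placeholders:

    import time
    import boto3

    # Sketch of a post-call analytics flow, assuming a call recording has
    # already been uploaded to S3 (placeholder bucket, key, and job name).
    transcribe = boto3.client("transcribe")
    comprehend = boto3.client("comprehend")

    transcribe.start_transcription_job(
        TranscriptionJobName="post-call-demo",
        Media={"MediaFileUri": "s3://example-bucket/calls/call-001.wav"},
        MediaFormat="wav",
        LanguageCode="en-US",
    )

    # Poll until the job finishes (simplified; production code would use an
    # EventBridge rule or a waiter rather than a busy loop).
    while True:
        job = transcribe.get_transcription_job(TranscriptionJobName="post-call-demo")
        if job["TranscriptionJob"]["TranscriptionJobStatus"] in ("COMPLETED", "FAILED"):
            break
        time.sleep(10)

    # After downloading the transcript text from the returned URI, analyze it.
    transcript_text = "I'd like to check the status of my refund, please."

    sentiment = comprehend.detect_sentiment(Text=transcript_text, LanguageCode="en")
    phrases = comprehend.detect_key_phrases(Text=transcript_text, LanguageCode="en")

    print(sentiment["Sentiment"])                      # e.g. NEUTRAL
    print([p["Text"] for p in phrases["KeyPhrases"]])  # key words and phrases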

“At Amazon, we want to meet the customer wherever they are in their contact center journey. With AWS CCI, we wanted to make it easy for customers who use different contact center providers to add AI and achieve new levels of operational efficiency,” says Vasi Philomin, GM of AWS Language Services, AI. “Having a global partner network is critical to enabling our customers to realize the benefits of cloud-based machine learning services and removing the need to hire specialized developers to build and maintain these systems.”

Talkdesk is a cloud contact center for innovative enterprises, combining enterprise performance with consumer simplicity resulting in higher customer satisfaction, productivity and cost savings. Tiago Paiva, chief executive officer at Talkdesk, shares, “Combining Talkdesk cloud innovations with powerful AI and machine learning services from AWS extends the capabilities and choices available to Talkdesk customers. We are excited to add new, out-of-the-box options through AWS Contact Center Intelligence solutions to help the Talkdesk user base rise above their market peers through superior customer service.”

8×8 is a leading contact center provider. Manu Mukerji, the Vice President of Engineering at 8×8, Inc., says, “By partnering with AWS, we can deliver to businesses and organizations superior bi-directional integration with AWS CCI, providing a best-in-class experience for customers. The 8×8 integration with AWS CCI makes it easy for customers to leverage AI capabilities even if they have no AI experience. The 8×8 Virtual Agent is the only fully managed and customizable solution in the market that works seamlessly for both unified communications and contact center use cases, enhancing contact center efficiency for reduced wait times and faster time to resolution.”

Pat Higbie, Co-founder and CEO of XAPP AI, an AWS Technology Partner, says, “Amazon Lex, Amazon Kendra and Amazon Polly provide a powerful combination of AI services that enables contact centers to transcend the limitations of traditional chatbots and IVR to transform their operations with truly conversational self-service that improves the customer experience and delivers dramatic ROI. And, AWS CCI solutions can be integrated with all contact center brands to bring the value of AWS AI services to any enterprise quickly.”

We are excited to have all these new partners join AWS CCI.

Getting started

There are multiple ways to get started with AWS CCI. To find a participating partner, see the AWS CCI partner page for more information and contact details.

To learn more, please join us for any or all of the following sessions hosted by AWS and our AWS CCI partners.

re:Invent sessions

Learn how you can leverage AWS CCI solutions to improve the customer experience and reduce cost with AI. Explore how AWS CCI solutions can be built easily through an expanding network of partners to provide self-service interactions, live and post-call analytics, and agent assist on existing contact center systems. AWS Partner SuccessKPI shares how it uses CCI solutions to improve the customer experience and tackle challenging business problems such as reducing call volume, improving agent effectiveness, and automating quality management in enterprise contact centers for customers like Maximus.

Numerous stakeholders including content designers, developers, and business owners collaborate to create a bot. In this session, hear how Dropbox used the Amazon Lex interface to build a chatbot as a support offering. The session covers how the Amazon Lex console allows you to easily create flows and manage them, and it details the decoupling that should exist between the bot author and developer for an optimal collaborative model. Finally, this session provides insights into conversational interface (CI) and conversational design (CD), language localization, and deployment practices.

Answering customer questions is essential to the customer support experience. Powered by ML, Amazon Kendra is an enterprise search service that can add Q&A capabilities to your virtual agents or boost call center agent productivity with live call agent assistance. In this session, you hear how Magellan RX Management augmented the call center experience using Amazon Kendra to help agents find accurate information faster.

In this session, learn how to train custom language models in Amazon Transcribe that supercharge speech recognition accuracy. Octopus Energy, a UK-based utility company, shares how it leverages domain-specific data to train a custom language model that is fine-tuned for its business needs and specific use case.

Partner sessions

  • How to boost the return on your contact center investments with AI
    January 26 at 10:00 am PST – REGISTER HERE
    Presented by Acqueon and AWS

With AI technologies maturing, enterprises are embracing them to delight customers and improve the operational productivity of their contact centers. In this educational webinar, AI expert Chris Featherstone, Global Business Development Leader for AWS CCI, and industry veteran Nicolas de Kouchkovsky, CMO at Acqueon, discuss how to integrate AI into your contact center software stack. They will provide an update on industry adoption and share the art of the possible without having to overhaul your technology investments.

  • Gain Control of your CX with a 360 CCI Power View: A step by step guide
    January 27, 2021 at 1PM EST/10AM PST – REGISTER HERE
    Presented by SuccessKPI and AWS

Managing customer experience requires tackling a complex set of metrics across agents, queues, geographies, customer types, and channels. Mix in the data from speech analytics, chatbots, and post call surveys, and the picture gets blurry very quickly. In this informative webinar, we explore the factors that make customer experience management such a quagmire and provide a series of recommendations and steps to help put you in control of your customer experience.

  • Add Intelligence to your existing contact center with AWS Contact Center Intelligence and Talkdesk
    February 24, 2021 at 9am BRT, 9am MXT, and 9am PST – REGISTER HERE
    Presented by Talkdesk and AWS at AWS Innovate – AI/ML Edition

Learn how your organization can leverage AWS Contact Center Intelligence (CCI) solutions and AWS Partner, Talkdesk, to improve customer experience and reduce cost with AI. We will explore how AWS CCI solutions can be built easily to provide self-service interactions, live and post-call analytics and agent assist on existing contact center systems. Talkdesk will also share how they improve customer experience and tackle challenging business problems such as improving agent effectiveness, and automating quality management in enterprise contact centers.


About the Authors

Eron Kelly is the worldwide leader of Product Marketing for a broad portfolio of AWS services that cover Compute, Storage, Networking, Contact Centers, End User Computing and Business Applications. In this capacity, his team leads all aspects of product marketing including messaging, positioning, launches, web strategy and execution, service adoption, and field enablement. Prior to AWS, he led sales and marketing teams at Microsoft and Procter & Gamble and was a Captain in the Air Force. Outside of work, Mr. Kelly is very active raising a family of four kids. He is a member of the Board of Trustees at Eastside Catholic School in Sammamish, WA, and spent the last 10 years coaching youth lacrosse.

Esther Lee is a Product Manager for AWS Language AI Services. She is passionate about the intersection of technology and education. Out of the office, Esther enjoys long walks along the beach, dinners with friends and friendly rounds of Mahjong.


Hosting a private PyPI server for Amazon SageMaker Studio notebooks in a VPC

Amazon SageMaker Studio notebooks provide a full-featured integrated development environment (IDE) for flexible machine learning (ML) experimentation and development. Security measures secure and support a versatile and collaborative environment. In some cases, such as to protect sensitive data or meet regulatory requirements, security protocols require that public internet access be disabled in the development environment.

Typically, developers have access to the public internet and can install any new libraries they want to import. You can install Python packages from the public Python Package Index (PyPI), a Python software repository, using standard tools such as pip. You can find hundreds of thousands of packages, including common packages such as NumPy, Pandas, Matplotlib, Pytest, Requests, Django, and BeautifulSoup.

In a development environment with internet access disabled, you can instead mirror packages and host your own PyPI server hosted in your own Amazon Virtual Private Cloud (Amazon VPC). A VPC is a logically isolated virtual network into which you can launch resources, such as Amazon Elastic Compute Cloud (Amazon EC2) instances and SageMaker Studio domains. You have fine-grained access control over its network connectivity. You can specify an IP address range for the VPC and associate security groups to control its inbound and outbound traffic. You can also add subnets that use a subset of IP addresses within the VPC, and choose whether each subnet is open to the public internet or is private.

When you use a local PyPI server with this architecture and install Python libraries from your SageMaker Studio notebook, you connect to your private server instead of a public package index, and all traffic remains within a single secured VPC and private subnet.

SageMaker Studio recently launched VPC integration to meet these security needs. You can now launch Studio notebooks within a private VPC, disabling internet access. To install Python packages within this secure environment, you can configure an EC2 instance in your VPC that acts as a PyPI server for your notebooks. This enables you to maintain productivity and ease of package installation while working within a private environment that isn’t accessible from the public internet.

Solution overview

This solution creates a private PyPI server on an EC2 instance, and connects it to a SageMaker Studio notebook through network configuration including a VPC, private subnet, security group, and elastic network interface. The following diagram illustrates this architecture.

You complete the following steps to implement this solution:

  1. Launch an EC2 instance within a VPC, subnet, and security group.
  2. Configure the instance to function as a private PyPI server.
  3. Create a VPC endpoint and add security group rules.
  4. Create a VPC-only SageMaker Studio domain, user, and notebook with the necessary permissions and networking.
  5. Install a Python package from the PyPI server onto the SageMaker Studio notebook.

Prerequisites

This is an intermediate-level solution with the following prerequisites:

  • An AWS account
  • Sufficient level of access to create Amazon SageMaker, Amazon EC2, and Amazon VPC resources
  • Familiarity with creating and modifying AWS resources on the AWS Management Console
  • Basic command-line experience, such as SSHing onto an EC2 instance, installing packages, and editing files using vim or another command-line text editor

Launching an EC2 instance

For this post, we launch a new EC2 instance in the us-east-2 Region. For the full list of available Regions supporting SageMaker Studio, see Supported Regions and Quotas.

  1. On the Amazon EC2 console, launch a new instance in a Region supporting SageMaker Studio.
  2. Choose an Amazon Linux 2 AMI.
  3. Choose a t2.medium instance (or larger t2, if preferred).
  4. On the Step 3: Configure Instance Details page, for Network, choose your VPC.
  5. For Subnet, choose your subnet.

You can use the default VPC and subnet, use other existing resources, or create new ones. Make sure to note the VPC and subnet you select for later reference.

  6. Leave all other settings as-is.
  7. Use default storage and tag settings.
  8. On the Step 6: Configure Security Group page, for Assign a security group, select Create a new security group.
  9. For Security group name, enter studio-SG.
  10. For Type, choose SSH on port range 22.
  11. For Source, choose My IP.

This allows you to SSH onto the instance from your current internet network.

  12. Create a new key pair, studio-host.
  13. Launch the instance.

For more information about launching an instance, see Tutorial: Getting started with Amazon EC2 Linux instances.
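If you prefer to script this step rather than use the console, a rough boto3 equivalent is sketched below. The AMI, subnet, and security group IDs are placeholders, and it assumes the studio-SG security group and studio-host key pair already exist:

    import boto3

    # Sketch of launching the PyPI server host with boto3 (placeholder IDs;
    # assumes the security group and key pair were created beforehand).
    ec2 = boto3.client("ec2", region_name="us-east-2")

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # an Amazon Linux 2 AMI in your Region
        InstanceType="t2.medium",
        KeyName="studio-host",
        MinCount=1,
        MaxCount=1,
        NetworkInterfaces=[{
            "DeviceIndex": 0,
            "SubnetId": "subnet-0123456789abcdef0",
            "Groups": ["sg-0123456789abcdef0"],  # studio-SG
        }],
    )
    print(response["Instances"][0]["InstanceId"])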

Configuring the instance as a PyPI server

To configure your instance, complete the following steps:

  1. Open a terminal window and navigate to the directory containing your .pem file.
  2. Change the key permissions and SSH onto your instance, substituting in the public IP address and Region:
    chmod 400 studio-host.pem
    ssh -i "studio-host.pem" ec2-user@ec2-x-x-x-x.{region}.compute.amazonaws.com

If needed, you can find the SSH command by selecting your instance on the console, choosing Connect, and navigating to the SSH Client tab.

  3. Install pip, which you use to install Python packages, and bandersnatch, which you use to mirror packages from the public PyPI server onto your instance. For this post, we use the package AWS Data Wrangler, an AWS Professional Services open-source library that integrates Pandas DataFrames with AWS services:
    sudo yum install python3-pip
    sudo pip3 install multidict==4.7.6
    sudo pip3 install yarl==1.6.0
    sudo pip3 install bandersnatch

You now configure bandersnatch to specify packages and their versions to mirror.

  4. Open a config file:
    sudo vim /etc/bandersnatch.conf

  5. Enter the following file contents:
    [mirror]
    directory = /pypi
    master = https://pypi.org
    timeout = 10
    workers = 3
    hash-index = false
    stop-on-error = false
    json = false
    
    [plugins]
    enabled =
        whitelist_project
        allowlist_release
    
    [whitelist]
    packages =
        awswrangler==1.10.0
        pyarrow==2.0.0
        SQLAlchemy==1.3.10
        s3fs==0.4.2
        numpy==1.18.4
        sqlalchemy-redshift==0.7.9
        boto3==1.15.10
        pandas==1.1.0
        psycopg2-binary==2.8.0
        pymysql==0.9.3
        botocore==1.18.10
        fsspec==0.7.4
        s3transfer==0.3.2
        jmespath==0.9.4
        pytz==2019.3
        python-dateutil==2.8.1
        urllib3==1.25.8
        six==1.14.0
    

  6. Mirror the libraries, then list the directory contents to verify that they have been copied onto the instance:
    sudo /usr/local/bin/bandersnatch mirror
    ls /pypi/web/simple/

You must configure pip so that when you run pip to install packages, it searches your private PyPI server instead of the public one. The /etc/pip.conf file already exists; you add two more lines to it.

  7. Open the file:
    sudo vim /etc/pip.conf

  8. Ensure your pip config file reads as follows, adding the last two lines:
    [global] 
    disable_pip_version_check = 1 
    format = columns 
    index-url = http://localhost/simple 
    trusted-host = localhost

  9. Install and configure nginx so that the instance can function as a private web server:
    sudo amazon-linux-extras install nginx1
    sudo vim /etc/nginx/nginx.conf

  10. Update the server section of the nginx config file to change the server_name to localhost, listen on the private IP address, and add the root and index locations. The server section should read as follows:
    server {
            listen x.x.x.x:80;
            listen       80;
            listen       [::]:80;
            server_name localhost;
            root         /usr/share/nginx/html;
    
            # Load configuration files for the default server block.
            include /etc/nginx/default.d/*.conf;
    
            location / { root /pypi/web/; index index.html index.htm index.php; }
    
            error_page 404 /404.html;
                location = /40x.html {
            }
    
            error_page 500 502 503 504 /50x.html;
                location = /50x.html {
            }
        }
    

  11. Start the server and install the package locally to test it out:
    sudo service nginx start
    pip3 install --user awswrangler

Note that the packages are collected from localhost, not the public package index.

You now have a private PyPI server ready for use.
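As an optional sanity check (not part of the original setup steps), you can confirm from the instance that nginx is serving the mirrored index:

    # Run on the EC2 instance with python3; assumes the nginx configuration above.
    import urllib.request

    index_html = urllib.request.urlopen("http://localhost/simple/").read().decode()
    print("awswrangler" in index_html.lower())  # True if the package was mirrored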

Creating a VPC endpoint

VPC endpoints allow resources within a VPC to access AWS services. For this solution, you will create an endpoint for the SageMaker API. You can extend this solution by adding more endpoints for other services you need to access from your notebook.

There are two types of VPC endpoints:

  • Interface endpoints – Elastic network interfaces within a subnet that serve as entry points for traffic destined to a supported AWS service, such as SageMaker
  • Gateway endpoints – Only supported for Amazon Simple Storage Service (Amazon S3) and Amazon DynamoDB

  1. On the Amazon VPC console, choose Endpoints.
  2. Choose Create Endpoint.
  3. Create the SageMaker API endpoint com.amazonaws.{region}.sagemaker.api.
  4. Make sure you choose the same VPC, subnet, and security group used by your EC2 instance.

When finished, your endpoint is listed as shown in the following screenshot.

For more information about VPC endpoints, including the distinction between interface endpoints and gateway endpoints, see VPC endpoints.
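If you script your infrastructure, a rough boto3 equivalent of these console steps is the following, with placeholder IDs standing in for the VPC, subnet, and security group you noted earlier:

    import boto3

    # Sketch of creating the SageMaker API interface endpoint (placeholder IDs).
    ec2 = boto3.client("ec2", region_name="us-east-2")

    response = ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0123456789abcdef0",
        ServiceName="com.amazonaws.us-east-2.sagemaker.api",
        SubnetIds=["subnet-0123456789abcdef0"],
        SecurityGroupIds=["sg-0123456789abcdef0"],
        PrivateDnsEnabled=True,
    )
    print(response["VpcEndpoint"]["VpcEndpointId"])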

Editing your security group rules

Edit your security group to add an inbound rule allowing all traffic from within the security group. This allows the Studio notebook to communicate with the EC2 instance because they both reside within this security group.

You can search for the security group name on the Amazon EC2 console, and you receive a suggested ID.

After you add the rule, the security group has two inbound rules: one allowing SSH on port 22 from your IP to connect to the EC2 instance, and another allowing all traffic from within the security group.
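The same self-referencing rule can also be added programmatically; this sketch assumes a placeholder ID for the studio-SG security group:

    import boto3

    # Allow all traffic between members of the same security group by adding
    # a self-referencing inbound rule (placeholder group ID).
    ec2 = boto3.client("ec2", region_name="us-east-2")
    sg_id = "sg-0123456789abcdef0"  # the studio-SG security group

    ec2.authorize_security_group_ingress(
        GroupId=sg_id,
        IpPermissions=[{
            "IpProtocol": "-1",  # all protocols and ports
            "UserIdGroupPairs": [{"GroupId": sg_id}],
        }],
    )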

For more information about security groups, see Security groups for your VPC.

Creating VPC-only SageMaker Studio resources

All SageMaker Studio resources reside within a domain, with a maximum of one domain per Region in an AWS account. A domain contains one or more users, and as a user you can open a Studio notebook. For more information about creating a domain, see CreateDomain.

With the recent release of VPC support for Studio, you can choose from two networking options: public internet only and VPC only. For more information, see Connect SageMaker Studio Notebooks to Resources in a VPC and Securing Amazon SageMaker Studio connectivity using a private VPC. For this post, we create a VPC-only domain.

  1. On the SageMaker Studio console, select Standard setup.

This allows for detailed configuration.

  2. For Authentication method, select AWS Identity and Access Management (IAM).
  3. Under Permissions, choose Create a new role.
  4. Use the default settings.
  5. Choose Create role.

This creates a new SageMaker execution role.

  6. In the Network and Storage section, configure your VPC and subnet to match those of the EC2 instance.
  7. For Network Access for Studio, select VPC Only.
  8. For Security group(s), choose the same security group as used for the EC2 instance.
  9. Choose Submit.

Wait approximately a minute to see the banner notification that SageMaker Studio is ready.

You now create a Studio user within the domain.

  1. Choose Add user.
  2. Give the user a name (for example, studio-user).
  3. Choose the role you just created, AmazonSageMaker-ExecutionRole-<timestamp when the role was created>.
  4. Choose Submit.

This concludes the initial SageMaker Studio resource creation. You now have a Studio domain and user ready for use and can proceed with creating and using a notebook.
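For reference, the console steps above map onto the CreateDomain and CreateUserProfile APIs. A rough boto3 equivalent, with placeholder IDs and execution role ARN, might look like this:

    import boto3

    # Sketch of creating a VPC-only Studio domain and user (placeholder IDs).
    sm = boto3.client("sagemaker", region_name="us-east-2")

    domain = sm.create_domain(
        DomainName="studio-domain",
        AuthMode="IAM",
        AppNetworkAccessType="VpcOnly",  # disables direct internet access
        VpcId="vpc-0123456789abcdef0",
        SubnetIds=["subnet-0123456789abcdef0"],
        DefaultUserSettings={
            "ExecutionRole": "arn:aws:iam::111122223333:role/AmazonSageMaker-ExecutionRole",
            "SecurityGroups": ["sg-0123456789abcdef0"],
        },
    )

    # The domain ID is the last segment of the returned ARN.
    sm.create_user_profile(
        DomainId=domain["DomainArn"].split("/")[-1],
        UserProfileName="studio-user",
    )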

Installing a Python package onto the SageMaker Studio notebook

To start using the PyPI server from the SageMaker Studio notebook, complete the following steps:

  1. On the SageMaker Studio Control Panel, choose Open Studio next to the user name.
  2. Wait for your Studio environment to load.

You can now see the Studio UI. For more information, see the Amazon SageMaker Studio UI Overview.

  3. Use the default SageMaker JumpStart Data Science image and create a new Python 3 notebook.
  4. Wait a few minutes for the image to launch and your notebook to be available.

If you try to run a command before the notebook is available, you get the message: Note: The kernel is still starting. Please execute this cell again after the kernel is started. After your image has launched, you see it listed under Kernel Sessions, along with items for Running Instances and Running Apps. The kernel runs within the app, and the app runs on the instance.

Now you’re ready to configure your notebook. The first step is pip configuration, so that when you install a package using pip, your notebook searches for the package on the private PyPI server instead of through the public internet at pypi.org.

  5. Run the following command in a notebook cell, substituting your EC2 instance’s private IP address:
    !printf '[global]\nindex-url = http://x.x.x.x/simple\ntrusted-host = x.x.x.x' | sudo tee /etc/pip.conf

  6. To check that the file was successfully written, run the following command:
    !head /etc/pip.conf

Now you’re ready to install Python packages from your server.

  7. To see that AWS Data Wrangler isn’t installed by default, try to import it with the command:
    import awswrangler

  8. Install the package and append to your Python path:
    !pip install awswrangler
    import sys
    sys.path.append('/home/sagemaker-user/.local/lib/python3.7/site-packages')

The library was installed from your private server’s index, as you specified in the pip config file, http://{EC2-IP}/simple.

  9. Now that the package has been installed, you can import it smoothly:
    import awswrangler


Now your notebook is ready for development, including installation of the Python libraries of your choice! Moreover, your PyPI server remains operational and available even when you delete your notebooks or use multiple notebooks. Your PyPI server is separated from your development environment, giving you freedom to manage your notebook resources in the way that best suits your needs.

Cleaning up

To clean up your resources, complete the following steps:

  1. Shut down the running instance in the SageMaker Studio notebook.
  2. Delete any remaining user’s apps on the SageMaker Studio console, including the default app.
  3. Delete the SageMaker Studio user.
  4. Delete Studio in the SageMaker Studio Control Panel.
  5. Stop the EC2 instance.
  6. Terminate the EC2 instance.
  7. Delete the IAM role, VPC endpoint, studio-SG security group, and Amazon Elastic File System (Amazon EFS) file system.
  8. Delete the rules in the inbound and outbound NFS security groups.
  9. Delete the security groups.

Conclusion

This post demonstrated how to get started with SageMaker Studio in VPC-only mode, while retaining the ability to install Python packages by hosting a private PyPI server. Now you can move forward with your ML development in notebooks residing within this secure environment.

We invite you to explore other exciting applications of SageMaker Studio, including Amazon SageMaker Experiments and scheduling notebooks on SageMaker ephemeral instances.


About the Author

Julia Kroll is a Data & Machine Learning Engineer for AWS Professional Services. She works with enterprise and public sector customers to build data lake, analytics, and machine learning solutions.


AI, Computational Advances Ring In New Era for Healthcare

We’re at a pivotal moment to unlock a new, AI-accelerated era of discovery and medicine, says Kimberly Powell, NVIDIA’s vice president of healthcare.

Speaking today at the J.P. Morgan Healthcare conference, held virtually, Powell outlined how AI and accelerated computing are enabling scientists to take advantage of the boom in biomedical data to power faster research breakthroughs and better patient care.

Understanding disease and discovering therapies is our greatest human endeavor, she said — and the trillion-dollar drug discovery industry illustrates just how complex a challenge it is.

How AI Can Drive Down Drug Discovery Costs

The typical drug discovery process takes about a decade, costs $2 billion and suffers a 90 percent failure rate during clinical development. But the rise of digital data in healthcare in recent years presents an opportunity to improve those statistics with AI.

“We can produce today more biomedical data in about three months than the entire 300-year history of healthcare,” she said. “And so this is now becoming a problem that no human really can synthesize that level of data, and we need to call upon artificial intelligence.”  

Powell called AI “the most powerful technology force of our time. It’s software that writes software that no humans can.”

But AI works best when it’s domain specific, combining data and algorithms tailored to a specific field like radiology, pathology or patient monitoring. The NVIDIA Clara application framework bridges this gap by providing researchers and clinicians the tools for GPU-accelerated AI in medical imaging, genomics, drug discovery and smart hospitals.

Downloads of NVIDIA Clara grew 5x last year, Powell shared, with developers taking up our new platforms for conversational AI and federated learning.

Healthcare Ecosystem Rallies Around AI

She noted that amid the COVID-19 pandemic, momentum around AI for healthcare has accelerated, with startups estimated to have raised well over $5 billion in 2020. More than 1,000 healthcare startups are in the NVIDIA Inception accelerator program, up 4x since 2017. And over 20,000 AI healthcare papers were submitted last year to PubMed, showing exponential growth over the past decade.

Leading research institutions like the University of California, San Francisco, are using NVIDIA GPUs to power their work in cryo-electron microscopy, a technique used to study the structure of molecules — such as the spike proteins on the COVID-19 virus — and accelerate drug and vaccine discovery.

And pharmaceutical companies, including GlaxoSmithKline, and major healthcare systems, like the U.K.’s National Health Service, will harness the Cambridge-1 supercomputer — an NVIDIA DGX SuperPOD system and the U.K.’s fastest AI supercomputer — to solve large-scale problems and improve patient care, diagnosis and delivery of critical medicines and vaccines.

Software-Defined Instruments Link AI Innovation and Medical Practice

Powell sees software-defined instruments — devices that can be regularly updated to reflect the latest scientific understanding and AI algorithms — as key to connecting the latest research breakthroughs with the practice of medicine.

“Artificial intelligence, like the practice of medicine, is constantly learning. We want to learn from the data, we want to learn from the changing environment,” Powell said.

By making medical instruments software-defined, tools like smart cameras for patient monitoring or AI-guided ultrasound systems can not only be developed in the first place, she said, but also retain their value and improve over time.

U.K.-based sequencing company Oxford Nanopore Technologies is a leader in software-defined instruments, deploying a new generation of DNA sequencing technology across an electronics-based platform. Its nanopore sequencing devices have been used in more than 50 countries to sequence and track new variants of the virus that causes COVID-19, as well as for large-scale genomic analyses to study the biology of cancer.

The company uses NVIDIA GPUs to power several of its instruments, from the handheld MinION Mk1C device to its ultra-high throughput PromethION, which can produce more than three human genomes’ worth of sequence data in a single run. To power the next generation of PromethION, Oxford Nanopore is adopting NVIDIA DGX Station, enabling its real-time sequencing technology to pair with rapid and highly accurate genomic analyses.

For years, the company has been using AI to improve the accuracy of basecalling, the process of determining the order of a molecule’s DNA bases from tiny electrical signals that pass through a nanoscale hole, or nanopore.

This technology “truly touches on the entire practice of medicine,” Powell said, whether in COVID epidemiology or in human genetics and long-read sequencing. “Through deep learning, their base calling model is able to reach an overall accuracy of 98.3 percent, and AI-driven single nucleotide variant calling gets them to 99.9 percent accuracy.”

Path Forward for AI-Powered Healthcare

AI-powered breakthroughs like these have grown in significance amid the pandemic, said Powell.

“The tremendous focus of AI on a single problem in 2020, like COVID-19, really showed us that with that tremendous focus, we can see every piece and part that can benefit from artificial intelligence,” she said. “What we’ve discovered over the last 12 months is only going to propel us further in the future. Everything we’ve learned is applicable for every future drug discovery program there is.”

Across fields as diverse as genome analysis, computational drug discovery and clinical diagnostics, healthcare heavyweights are making strides with GPU-accelerated AI. Hear more about it on Jan. 13 at 11 a.m. Pacific, when Powell joins a Washington Post Live conversation on AI in healthcare.

Subscribe to NVIDIA healthcare news here.


Meet the researcher creating more access with language

When you’ve got your hands full and use your voice to ask your phone to play your favorite song, it can feel like magic. In reality, it’s a more complicated combination of engineering, design and natural language processing at work, making it easier for many of us to use our smartphones. But what happens when this voice technology isn’t available in our own language?

This is something Google India researcher Shachi Dave considers as part of her day-to-day work. While English is the most widely spoken language globally, it ranks third as the most widely spoken native language (behind Mandarin and Spanish)—just ahead of Hindi, Bengali and a number of other languages that are official in India. Home to more than one billion people and an impressive number of official languages—22, to be exact—India is at the cutting edge of Google’s language localization or L10n (10 represents the number of letters between ‘l’ and ‘n’) efforts. 

Shachi, who is a founding member of the Google India Research team, works on natural language understanding, a field of artificial intelligence (AI) which builds computer algorithms to understand our everyday speech and language. Working with Google’s AI principles, she aims to ensure teams build our products to be socially beneficial and inclusive. Born and raised in India, Shachi graduated with a master’s degree in computer science from the University of Southern California. After working at a few U.S. startups, she joined Google over 12 years ago and returned to India to take on more research and leadership responsibilities. Since she joined the company, she has worked closely with teams in Mountain View, New York, Zurich and Tel Aviv. She also actively contributes towards improving diversity and inclusion at Google through mentoring fellow female software engineers.

How would you explain your job to someone who isn’t in tech?

My job is to make sure computers can understand and interact with humans naturally, a field of computer science we call natural language processing (NLP). Our research has found that many Indian users tend to use a mix of English and their native language when interacting with our technology, so that’s why understanding natural language is so important—it’s key to localization, our efforts to provide our services in every language and culture—while making sure our technology is fun to use and natural-sounding along the way.

What are some of the biggest challenges you’re tackling in your work now?

The biggest challenge is that India is a multilingual country, with 22 official languages. I have seen friends, family and even strangers struggle with technology that doesn’t work for them in their language, even though it can work so well in other languages. 

Let’s say one of our users is a shop owner and lives in a small village in the southern Indian state of Telangana. She goes online for the first time with her phone. But since she has never used a computer or smartphone before, using her voice is the most natural way for her to interact with her phone. While she knows some English, she is also more comfortable speaking in her native language, Telugu. Our job is to make sure that she has a positive experience and does not have to struggle to get the information she needs. Perhaps she’s able to order more goods for her shop through the web, or maybe she decides to have her services listed online to grow her business. 

So that’s part of my motivation to do my research, and that’s one of Google’s AI Principles, too—to make sure our technology is socially beneficial. 

Speaking of the AI Principles, what other principles help inform your research?

Another one of Google’s AI Principles is avoiding creating or reinforcing unfair bias. AI systems are good at recognizing patterns within data. Given that most data that we feed into training an AI system is generated by humans, it tends to have human biases and prejudices. I look for systematic ways to remove these biases. This requires constant awareness: being aware of how people have different languages, backgrounds and financial statuses. Our society has people from the entire financial spectrum, from super rich to low-income, so what works on the most expensive phones might not work on lower-cost devices. Also, some of our users might not be able to read or write, so we need to provide some audio and visual tools for them to have a better internet experience.

What led you to this career and inspired you to join Google?  

I took an Introduction to Artificial Intelligence course as an undergraduate, and it piqued my interest and curiosity. That ultimately led to research on machine translation at the Indian Institute of Technology Bombay and then an advanced degree at the University of Southern California. After that, I spent some time working at U.S. startups that were using NLP and machine learning. 

But I wanted more. I wanted to be intellectually challenged, solving hard problems. Since Google had the computing power and reputation for solving problems at scale, it became one of my top choices for places to work. 

Now you’ve been at Google for over 12 years. What are some of the most rewarding moments of your career?

Definitely when I saw the quality improvements I worked on go live on Google Search and Assistant, positively impacting millions of people. I remember I was able to help launch local features like getting the Assistant to play the songs people wanted to hear. Playing music upon request makes people happy, and it’s a feature that still works today. 

Over the years, I have gone through difficult situations as someone from an underrepresented group. I was fortunate to have a great support network—women peers as well as allies—who helped me. I try to pay it forward by being a mentor for underrepresented groups both within and outside Google.

How should aspiring AI researchers prepare for a career in this field? 

First, be a lifelong learner: The industry is moving at a fast pace. It’s important to carve out time to keep yourself well-read about the latest research in your field as well as related fields.

Second, know your motivation: When a problem is super challenging and super hard, you need to have that focus and belief that what you’re doing is going to contribute positively to our society.


Freeze the Day: How UCSF Researchers Clear Up Cryo-EM Images with GPUs

When photographers take long-exposure photos, they maximize the amount of light their camera sensors receive. The technique helps capture scenes like the night sky, but it introduces blurring in the final image, as in the example at right.

It’s not too different from cryo-electron microscopy, or cryo-EM, which scientists use to study the structure of tiny molecules frozen in vitreous ice. But while motion-induced blur in photography can create beautiful images, in structural biology it’s an unwanted side effect.

Protein samples for cryo-EM are frozen at -196 degrees Celsius to protect the biological structures, which would otherwise be destroyed by the microscope’s high-energy electron beam. But even when frozen, samples are disturbed by the powerful electron dose, causing motion that would blur a long-exposure photo.

To get around it, UCSF researchers use specialized cameras to instead capture videos of the biological molecules, so they appear nearly stationary in each frame of the video. Correcting the motion across frames is a computationally demanding task — but can be done in seconds on NVIDIA GPUs.

“If the motion was left uncorrected, we’d lose the high-resolution picture of a molecule’s 3D structures,” said Shawn Zheng, scientific software developer at the University of California, San Francisco and Howard Hughes Medical Institute. “And knowing the structure of a molecule is critical to understanding its function.”

Zheng and his colleagues run MotionCor2, the world’s most widely used motion-correction application, on NVIDIA GPUs to align each molecule in the video from frame to frame — creating a clean image researchers can turn into a 3D model.

These 3D models are essential for scientists to understand the complex chains of interactions taking place in an individual protein, such as spike proteins on the COVID-19 virus, speeding drug and vaccine discovery.

Solving the Bottleneck

UCSF, a leader in cryo-EM research, has been the source of groundbreaking work to improve the resolution of microscopy images. The technology enables scientists to visualize proteins at an atomic scale — something considered impossible just a decade ago.

But the pipeline is lengthy, involving freezing samples, capturing them on multimillion dollar cryo-EM microscopes, correcting their motion and then reconstructing detailed 3D models of the molecules. To keep things running smoothly, it’s critical that the motion-correction process runs fast enough to keep pace with the new data being collected.

“Cryo-EM microscopes are very expensive instruments. You don’t want it just sitting there idle. But if we have a backlog of movies piled up in the machine’s data storage, nobody else can collect more,” said Zheng. “It’d be a waste of this expensive instrument, and slow down the research of others.”

To achieve rapid motion correction, UCSF’s Center of Advanced Electron Microscopy uses workstations with eight NVIDIA GPUs for each microscope. These workstations are needed to keep up with the cryo-EM data collection, which acquires four movies per microscope per minute.

The GPU setup can run eight jobs concurrently, taking on the iterative process of motion correction for videos with as many as 400 frames, each with nearly 100 million pixels.

To speed the development of new applications, Zheng, who’s used NVIDIA GPUs for his research for a decade, uses a workstation powered by two NVIDIA Tensor Core GPUs. The system can analyze a 70GB microscope movie in under a minute.

Accelerating COVID Research

Zheng and his colleagues also use GPUs to run alignment software for cryo-electron tomography, or cryo-ET. This technique is better suited to study slightly heterogeneous specimens like macromolecules and cells. Samples are tilted at different angles, collecting a series of images that can be aligned and reconstructed into a detailed 3D model.

NVIDIA GPUs can fully automate the reconstruction process, taking a half hour on a single GPU, he says.

In a recent paper in Science, Zheng collaborated with lead researchers from the Netherlands’ Leiden University Medical Center to use cryo-ET to study molecular pores involved in COVID-19 virus replication in cells. A better understanding of this pore structure could help scientists develop a drug that targets it, blocking the virus from replicating in an infected patient.

To learn more about Zheng’s work, watch this on-demand talk from the GPU Technology Conference.

Main image shows a cryo-EM density map for the enzyme beta-galactosidase, showing the gradual increase in quality of the cryo-EM structures from low to high resolution. Image by Veronica Falconieri and Sriram Subramaniam, licensed from the National Cancer Institute under public domain.


Out of This World Graphics: ‘Gods of Mars’ Come Alive with NVIDIA RTX Real-Time Rendering

The journey to making the upcoming film Gods of Mars changed course dramatically once real-time rendering entered the picture.

The movie, currently in production, features a mix of cinematic visual effects with live-action elements. The film crew had planned to make the movie primarily using real-life miniature figures. But they switched gears once they experienced the power of real-time NVIDIA RTX graphics and Unreal Engine.

Director Peter Hyoguchi and producer Joan Webb used an Epic MegaGrant from Epic Games to bring together VFX professionals and game developers to create the film. The virtual production started with scanning the miniature models and animating them in Unreal Engine.

“I’ve been working as a CGI and VFX supervisor for 20 years, and I never wanna go back to older workflows,” said Hyoguchi. “This is a total pivot point for the next 100 years of cinema — everyone is going to use this technology for their effects.”

Hyoguchi and team produced photorealistic worlds in 4K, creating rich intergalactic scenes using a combination of NVIDIA Quadro RTX 6000 GPU-powered Lenovo ThinkStation P920 workstations, ASUS ProArt Display PA32UCX-P monitors, Blackmagic Design cameras and DaVinci Resolve, and the Wacom Cintiq Pro 24.

Stepping Outside the Ozone: Technology Makes Way for More Creativity

Gods of Mars tells the tale of a fighter pilot who leads a team against rebels in a battle on Mars. The live-action elements of the film are supported by LED walls with real-time rendered graphics created from Unreal Engine. Actors are filmed on-set, with a virtual background projected behind them.

To keep the set minimal, the team only builds what actors will physically interact with, and then uses the projected environment from Unreal Engine for the rest of the scenes.

One big advantage of working with digital environments and assets is real-time lighting. When previously working with CGI, Hyoguchi and his team would pre-visualize everything inside a grayscale environment. Then they’d wait hours for one frame to render before seeing a preview of what an image or scene would look like.

With Unreal Engine, Hyoguchi can have scenes ray-trace rendered immediately with lights, shadows and colors. He can move around the environment and see how everything would look in the scene, saving weeks of pre-planning.

Real-time rendering also saves money and resources. Hyoguchi doesn’t need to spend thousands of dollars for render farms, or wait weeks for one shot to complete rendering. The RTX-powered ThinkStation P920 renders everything in real time, which leads to more iterations, making way for a much more efficient, flexible and faster creative workflow.

“Ray tracing is what makes this movie possible,” said Hyoguchi. “With NVIDIA RTX and the ability to do real-time ray tracing, we can make a movie with low cost and less people, and yet I still have the flexibility to make more creative choices than I’ve ever had in my life.”

Hyoguchi and his team are shooting the film with Blackmagic Design’s new URSA Mini Pro 12K camera. Capturing such high-resolution footage provides more options in post-production. They can crop images or zoom in for a close-up shot of an actor without worrying about losing resolution.

They can also color and edit scenes in real time using Blackmagic DaVinci Resolve Studio, which uses NVIDIA GPUs to accelerate editing workflows. With the 32-inch ASUS ProArt Display PA32UCX-P monitors, the team calibrated their screens so all the artists can see the same rendered color and details, even while working in different locations across the country.

The Wacom Cintiq Pro 24 pen displays speed up the 3D artist’s workflow and provide a natural connection between the artist and the Unreal editor, both when moving scene elements around to create the 3D environment and when keyframing actors for animation.

Learn more about Gods of Mars and NVIDIA RTX.


New Year, New Energy: Leading EV Makers Kick Off 2021 with NVIDIA DRIVE

Electric vehicle upstarts have gained a foothold in the industry and are using NVIDIA DRIVE to keep that momentum going.

Nowhere is the trend of electric vehicles more apparent than in China, the world’s largest automotive market, where electric vehicle startups have exploded in popularity. NIO, Li Auto and Xpeng are bolstering the initial growth in new energy vehicles with models that push the limits of everyday driving with extended battery range and AI-powered features.

All three companies doubled their sales in 2020, with a combined volume of more than 103,000 vehicles.

Along with more efficient powertrains, these fleets are also introducing new and intelligent features to daily commutes with NVIDIA DRIVE.

NIO Unveils a Supercharged Compute Platform

Last week, NIO announced a supercomputer to power its automated and autonomous driving features, with NVIDIA DRIVE Orin at its core.

The computer, known as Adam, achieves over 1,000 trillion operations per second (TOPS) of performance with the redundancy and diversity necessary for safe autonomous driving. It also enables personalization in the vehicle, learning from individual driving habits and preferences while continuously improving from fleet data.

The Orin-powered supercomputer will debut in the flagship ET7 sedan, scheduled for production in 2022, and will be in every NIO model to follow.

The NIO ET7, powered by NVIDIA DRIVE Orin.

The ET7 leapfrogs current model capabilities, with more than 600 miles of battery range and advanced autonomous driving. As the first vehicle equipped with Adam, the EV can perform point-to-point autonomy, leveraging 33 sensors and high-performance compute to continuously expand the domains in which it operates — from urban to highway driving to battery swap stations.

With this centralized, software-defined computing architecture, NIO’s future fleet of EVs will feature the latest AI-enabled capabilities designed to make its vehicles perpetually upgradable.

Li Auto Powers Ahead

In September, standout EV maker Li Auto said it would develop its next generation of electric vehicles using NVIDIA DRIVE AGX Orin.

These new vehicles are being developed in collaboration with tier 1 supplier Desay SV and feature advanced autonomous driving features, as well as extended battery range for truly intelligent mobility.

This high-performance platform will enable Li Auto to deploy an independent, advanced autonomous driving system with its next-generation fleet.

The automaker began rolling out its first vehicle, the Li Auto One SUV, in November 2019. Since then, sales have skyrocketed, with a 530 percent increase in volume in December, year-over-year, and a total of 32,624 vehicles in 2020.

The Li Auto One

Li Auto plans to continue this momentum with its upcoming models, packed with even more intelligent features enabled by NVIDIA DRIVE.

Cruising on Xpeng XPilot

Xpeng has been building on NVIDIA DRIVE since 2018, developing a level 3 autopilot system in collaboration with Desay.

The technology debuted last April with the Xpeng P7, an all-electric sports sedan developed from the ground up for an intelligent driving future.

The Xpeng P7

The XPilot 3.0 level 3 autonomous driving system leverages NVIDIA DRIVE AGX Xavier as well as a redundant and diverse halo of sensors for automated highway driving and valet parking. XPilot was born in the data center, with NVIDIA’s AI infrastructure for training and testing self-driving deep neural networks.

With high-performance data center GPUs and advanced AI learning tools, this scalable infrastructure allows developers to manage massive amounts of data and train autonomous driving DNNs.

The burgeoning EV market is driving the next decade of personal transportation. And with NVIDIA DRIVE at the core, these vehicles have the intelligence and performance to go the distance.
