Open source observability for AWS Inferentia nodes within Amazon EKS clusters

Recent developments in machine learning (ML) have led to increasingly large models, some of which require hundreds of billions of parameters. Although they are more powerful, training and inference on those models require significant computational resources. Despite the availability of advanced distributed training libraries, it’s common for training and inference jobs to need hundreds of accelerators (GPUs or purpose-built ML chips such as AWS Trainium and AWS Inferentia), and therefore tens or hundreds of instances.

In such distributed environments, observability of both instances and ML chips becomes key to fine-tuning model performance and optimizing cost. Metrics allow teams to understand workload behavior and optimize resource allocation and utilization, diagnose anomalies, and increase overall infrastructure efficiency. For data scientists, ML chip utilization and saturation are also relevant for capacity planning.

This post walks you through the Open Source Observability pattern for AWS Inferentia, which shows you how to monitor the performance of ML chips used in an Amazon Elastic Kubernetes Service (Amazon EKS) cluster whose data plane nodes are based on Amazon Elastic Compute Cloud (Amazon EC2) instances of type Inf1 and Inf2.

The pattern is part of the AWS CDK Observability Accelerator, a set of opinionated modules to help you set up observability for Amazon EKS clusters. The AWS CDK Observability Accelerator is organized around patterns, which are reusable units for deploying multiple resources. The open source observability set of patterns instruments observability with Amazon Managed Grafana dashboards, an AWS Distro for OpenTelemetry collector to collect metrics, and Amazon Managed Service for Prometheus to store them.

Solution overview

The following diagram illustrates the solution architecture.

This solution deploys an Amazon EKS cluster with a node group that includes Inf1 instances.

The AMI type of the node group is AL2_x86_64_GPU, which uses the Amazon EKS optimized accelerated Amazon Linux AMI. In addition to the standard Amazon EKS-optimized AMI configuration, the accelerated AMI includes the NeuronX runtime.

To access the ML chips from Kubernetes, the pattern deploys the AWS Neuron device plugin.

Metrics are exposed to Amazon Managed Service for Prometheus by the neuron-monitor DaemonSet, which deploys a minimal container with the Neuron tools installed. Specifically, the neuron-monitor DaemonSet runs the neuron-monitor command piped into the neuron-monitor-prometheus.py companion script (both commands are part of the container):

neuron-monitor | neuron-monitor-prometheus.py --port <port>

The command uses the following components:

  • neuron-monitor collects metrics and stats from the Neuron applications running on the system and streams the collected data to stdout in JSON format
  • neuron-monitor-prometheus.py maps and exposes the telemetry data from JSON format into Prometheus-compatible format

Data is visualized in Amazon Managed Grafana by the corresponding dashboard.
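
If you want to spot-check the raw telemetry before it reaches Grafana, you can port-forward to one of the neuron-monitor pods and query its Prometheus endpoint directly. The following is a minimal sketch: the name-based pod lookup and the /metrics path are assumptions, and the port must match the --port value passed to neuron-monitor-prometheus.py.

# Pick one neuron-monitor pod (name-based lookup; adjust if your pod names differ)
POD=$(kubectl get pods -n kube-system -o name | grep neuron-monitor | head -n 1)

# Forward the metrics port locally; use the same value passed to --port above
METRICS_PORT=<port>
kubectl -n kube-system port-forward "$POD" "${METRICS_PORT}:${METRICS_PORT}" &

# Inspect the Prometheus-format output (assumes metrics are served on /metrics)
curl -s "http://localhost:${METRICS_PORT}/metrics" | grep -i neuron | head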

The rest of the setup to collect and visualize metrics with Amazon Managed Service for Prometheus and Amazon Managed Grafana is similar to that used in other open source based patterns, which are included in the AWS Observability Accelerator for CDK GitHub repository.

Prerequisites

You need the following to complete the steps in this post:

Set up the environment

Complete the following steps to set up your environment:

  1. Open a terminal window and run the following commands:
export AWS_REGION=<YOUR AWS REGION>
export ACCOUNT_ID=$(aws sts get-caller-identity --query 'Account' --output text)
  2. Retrieve the workspace ID of your existing Amazon Managed Grafana workspace:
aws grafana list-workspaces

The following is our sample output:

{
  "workspaces": [
    {
      "authentication": {
        "providers": [
          "AWS_SSO"
        ]
      },
      "created": "2023-06-07T12:23:56.625000-04:00",
      "description": "accelerator-workspace",
      "endpoint": "g-XYZ.grafana-workspace.us-east-2.amazonaws.com",
      "grafanaVersion": "9.4",
      "id": "g-XYZ",
      "modified": "2023-06-07T12:30:09.892000-04:00",
      "name": "accelerator-workspace",
      "notificationDestinations": [
        "SNS"
      ],
      "status": "ACTIVE",
      "tags": {}
    }
  ]
}
  3. Assign the values of id and endpoint to the following environment variables:
export COA_AMG_WORKSPACE_ID="<<YOUR-WORKSPACE-ID, similar to the above g-XYZ, without quotation marks>>"
export COA_AMG_ENDPOINT_URL="<<https://YOUR-WORKSPACE-URL, including protocol (i.e. https://), without quotation marks, similar to the above https://g-XYZ.grafana-workspace.us-east-2.amazonaws.com>>"

COA_AMG_ENDPOINT_URL needs to include https://.
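
If you prefer not to copy the values manually, you can derive both variables from the same CLI call. This assumes a single workspace; adjust the [0] index if you have more than one:

export COA_AMG_WORKSPACE_ID=$(aws grafana list-workspaces \
--query 'workspaces[0].id' --output text)
export COA_AMG_ENDPOINT_URL="https://$(aws grafana list-workspaces \
--query 'workspaces[0].endpoint' --output text)"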

  4. Create a Grafana API key from the Amazon Managed Grafana workspace:
export AMG_API_KEY=$(aws grafana create-workspace-api-key \
--key-name "grafana-operator-key" \
--key-role "ADMIN" \
--seconds-to-live 432000 \
--workspace-id $COA_AMG_WORKSPACE_ID \
--query key \
--output text)
  5. Set up a secret in AWS Systems Manager:
aws ssm put-parameter --name "/cdk-accelerator/grafana-api-key" \
--type "SecureString" \
--value $AMG_API_KEY \
--region $AWS_REGION

The secret will be accessed by the External Secrets add-on and made available as a native Kubernetes secret in the EKS cluster.
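
You can optionally confirm that the parameter was stored as expected (the value is decrypted locally, so avoid running this in a shared terminal):

aws ssm get-parameter \
--name "/cdk-accelerator/grafana-api-key" \
--with-decryption \
--query 'Parameter.Value' \
--output text \
--region $AWS_REGION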

Bootstrap the AWS CDK environment

The first step of any AWS CDK deployment is bootstrapping the environment. You use the cdk bootstrap command in the AWS CDK CLI to prepare the environment (a combination of AWS account and AWS Region) with resources required by AWS CDK to perform deployments into that environment. AWS CDK bootstrapping is needed once for each account and Region combination, so if you already bootstrapped AWS CDK in that account and Region, you don’t need to repeat the bootstrapping process.

cdk bootstrap aws://$ACCOUNT_ID/$AWS_REGION
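
If you’re unsure whether the environment was already bootstrapped, you can check for the default bootstrap stack before you run the command above. CDKToolkit is the default stack name; a status such as CREATE_COMPLETE means bootstrapping is already in place:

aws cloudformation describe-stacks \
--stack-name CDKToolkit \
--region $AWS_REGION \
--query 'Stacks[0].StackStatus' \
--output text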

Deploy the solution

Complete the following steps to deploy the solution:

  1. Clone the cdk-aws-observability-accelerator repository and install the dependency packages. This repository contains AWS CDK v2 code written in TypeScript.
git clone https://github.com/aws-observability/cdk-aws-observability-accelerator.git
cd cdk-aws-observability-accelerator

The settings for the Grafana dashboard JSON files are specified in the AWS CDK context, so you need to update the context in the cdk.json file, located in the current directory. The location of the Neuron dashboard is specified by the fluxRepository.values.GRAFANA_NEURON_DASH_URL parameter, and neuronNodeGroup is used to set the instance type, node count, and Amazon Elastic Block Store (Amazon EBS) volume size used for the nodes.

  2. Enter the following snippet into cdk.json, replacing the existing context block:
"context": {
    "fluxRepository": {
      "name": "grafana-dashboards",
      "namespace": "grafana-operator",
      "repository": {
        "repoUrl": "https://github.com/aws-observability/aws-observability-accelerator",
        "name": "grafana-dashboards",
        "targetRevision": "main",
        "path": "./artifacts/grafana-operator-manifests/eks/infrastructure"
      },
      "values": {
        "GRAFANA_CLUSTER_DASH_URL" : "https://raw.githubusercontent.com/aws-observability/aws-observability-accelerator/main/artifacts/grafana-dashboards/eks/infrastructure/cluster.json",
        "GRAFANA_KUBELET_DASH_URL" : "https://raw.githubusercontent.com/aws-observability/aws-observability-accelerator/main/artifacts/grafana-dashboards/eks/infrastructure/kubelet.json",
        "GRAFANA_NSWRKLDS_DASH_URL" : "https://raw.githubusercontent.com/aws-observability/aws-observability-accelerator/main/artifacts/grafana-dashboards/eks/infrastructure/namespace-workloads.json",
        "GRAFANA_NODEEXP_DASH_URL" : "https://raw.githubusercontent.com/aws-observability/aws-observability-accelerator/main/artifacts/grafana-dashboards/eks/infrastructure/nodeexporter-nodes.json",
        "GRAFANA_NODES_DASH_URL" : "https://raw.githubusercontent.com/aws-observability/aws-observability-accelerator/main/artifacts/grafana-dashboards/eks/infrastructure/nodes.json",
        "GRAFANA_WORKLOADS_DASH_URL" : "https://raw.githubusercontent.com/aws-observability/aws-observability-accelerator/main/artifacts/grafana-dashboards/eks/infrastructure/workloads.json",
        "GRAFANA_NEURON_DASH_URL" : "https://raw.githubusercontent.com/aws-observability/aws-observability-accelerator/main/artifacts/grafana-dashboards/eks/neuron/neuron-monitor.json"
      },
      "kustomizations": [
        {
          "kustomizationPath": "./artifacts/grafana-operator-manifests/eks/infrastructure"
        },
        {
          "kustomizationPath": "./artifacts/grafana-operator-manifests/eks/neuron"
        }
      ]
    },
     "neuronNodeGroup": {
      "instanceClass": "inf1",
      "instanceSize": "2xlarge",
      "desiredSize": 1, 
      "minSize": 1, 
      "maxSize": 3,
      "ebsSize": 512
    }
  }

You can replace the Inf1 instance type with Inf2 and change the size as needed. To check availability in your selected Region, run the following command (amend Values as you see fit):

aws ec2 describe-instance-type-offerings \
--filters Name=instance-type,Values="inf1*" \
--query "InstanceTypeOfferings[].InstanceType" \
--region $AWS_REGION
  3. Install the project dependencies:
npm install
  4. Run the following commands to deploy the open source observability pattern:
make build
make pattern single-new-eks-inferentia-opensource-observability deploy

Validate the solution

Complete the following steps to validate the solution:

  1. Run the update-kubeconfig command. You should be able to get the command from the output message of the previous command:
aws eks update-kubeconfig --name single-new-eks-inferentia-opensource... --region <your region> --role-arn arn:aws:iam::xxxxxxxxx:role/single-new-eks-....
  2. Verify the resources you created:
kubectl get pods -A

The following screenshot shows our sample output.

  3. Make sure the neuron-device-plugin-daemonset DaemonSet is running:
kubectl get ds neuron-device-plugin-daemonset --namespace kube-system

The following is our expected output:

NAME                             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
neuron-device-plugin-daemonset   1         1         1       1            1           <none>          2h
  4. Confirm that the neuron-monitor DaemonSet is running:
kubectl get ds neuron-monitor --namespace kube-system

The following is our expected output:

NAME             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
neuron-monitor   1         1         1       1            1           <none>          2h
  5. To verify that the Neuron devices and cores are visible, run the neuron-ls and neuron-top commands from, for example, your neuron-monitor pod (you can get the pod’s name from the output of kubectl get pods -A):
kubectl exec -it {your neuron-monitor pod} -n kube-system -- /bin/bash -c "neuron-ls"

The following screenshot shows our expected output.

kubectl exec -it {your neuron-monitor pod} -n kube-system -- /bin/bash -c "neuron-top"

The following screenshot shows our expected output.
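
As an additional check, you can confirm that the device plugin is advertising Neuron resources to the Kubernetes scheduler. The resource name to look for is typically aws.amazon.com/neuron; the simple grep below is just a convenience and may need adjusting for your output:

kubectl describe nodes | grep -i neuron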

Visualize data using the Grafana Neuron dashboard

Log in to your Amazon Managed Grafana workspace and navigate to the Dashboards panel. You should see a dashboard named Neuron / Monitor.

To see some interesting metrics on the Grafana dashboard, we apply the following manifest:

curl https://raw.githubusercontent.com/aws-observability/aws-observability-accelerator/main/artifacts/k8s-deployment-manifest-templates/neuron/pytorch-inference-resnet50.yml | kubectl apply -f -

This is a sample workload that compiles the torchvision ResNet50 model and runs repetitive inference in a loop to generate telemetry data.

To verify the pod was successfully deployed, run the following code:

kubectl get pods

You should see a pod named pytorch-inference-resnet50.
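
You can also follow the workload’s logs to confirm that the inference loop is running and generating telemetry. If the pod name carries a suffix in your cluster, copy the exact name from the kubectl get pods output:

kubectl logs pytorch-inference-resnet50 --tail=20 -f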

After a few minutes, the Neuron / Monitor dashboard should show gathered metrics similar to the following screenshots.

Grafana Operator and Flux always work together to synchronize your dashboards with Git. If you delete your dashboards by accident, they will be re-provisioned automatically.

Clean up

You can delete the whole AWS CDK stack with the following command:

make pattern single-new-eks-inferentia-opensource-observability destroy

Conclusion

In this post, we showed you how to introduce observability, with open source tooling, into an EKS cluster featuring a data plane running EC2 Inf1 instances. We started by selecting the Amazon EKS-optimized accelerated AMI for the data plane nodes, which includes the Neuron container runtime, providing access to AWS Inferentia and Trainium Neuron devices. Then, to expose the Neuron cores and devices to Kubernetes, we deployed the Neuron device plugin. The actual collection and mapping of telemetry data into Prometheus-compatible format was achieved via neuron-monitor and neuron-monitor-prometheus.py. Metrics were sourced from Amazon Managed Service for Prometheus and displayed on the Neuron dashboard of Amazon Managed Grafana.

We recommend that you explore additional observability patterns in the AWS Observability Accelerator for CDK GitHub repo. To learn more about Neuron, refer to the AWS Neuron Documentation.


About the Author

Riccardo Freschi is a Sr. Solutions Architect at AWS, focusing on application modernization. He works closely with partners and customers to help them transform their IT landscapes in their journey to the AWS Cloud by refactoring existing applications and building new ones.

Moving Pictures: Transform Images Into 3D Scenes With NVIDIA Instant NeRF

Editor’s note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible, and which showcases new hardware, software, tools and accelerations for RTX PC users.

Imagine a gorgeous vista, like a cliff along the water’s edge. Even as a 2D image, the scene would be beautiful and inviting. Now imagine exploring that same view in 3D – without needing to be there.

NVIDIA RTX technology-powered AI helps turn such an imagination into reality. Using Instant NeRF, creatives are transforming collections of still images into digital 3D scenes in just seconds.

Simply Radiant

A NeRF, or neural radiance field, is an AI model that takes 2D images representing a scene as input and interpolates between them to render a complete 3D scene. The model operates as a neural network — a model that replicates how the brain is organized and is often used for tasks that require pattern recognition.

Using spatial location and volumetric rendering, a NeRF uses the camera pose from the images to render a 3D iteration of the scene. Traditionally, these models are computationally intensive, requiring large amounts of rendering power and time.

A recent NVIDIA AI research project changed that.

Instant NeRF takes NeRFs to the next level, using AI-accelerated inverse rendering to approximate how light behaves in the real world. It enables researchers to construct a 3D scene from 2D images taken at different angles. Scenes can now be generated in seconds, and the longer the NeRF model is trained, the more detailed the resulting 3D renders.

NVIDIA researchers released four neural graphics primitives, or pretrained datasets, as part of the Instant-NGP training toolset at the SIGGRAPH computer graphics conference in 2022. The tools let anyone create NeRFs with their own data. The researchers won a best paper award for the work, and TIME Magazine named Instant NeRF a best invention of 2022.

In addition to speeding rendering for NeRFs, Instant NeRF makes the entire image reconstruction process accessible using NVIDIA RTX and GeForce RTX desktop and laptop GPUs. While the time it takes to render a scene depends on factors like dataset size and the mix of image and video source content, the AI training doesn’t require server-grade or cloud-based hardware.

NVIDIA RTX workstations and GeForce RTX PCs are ideally suited to meet the computational demands of rendering NeRFs. NVIDIA RTX and GeForce RTX GPUs feature Tensor Cores, dedicated AI hardware accelerators that provide the horsepower to run generative AI locally.

Ready, Set, Go

Get started with Instant NeRF to learn about radiance fields and experience imagery in a new way.

Developers and tech enthusiasts can download the source-code base to compile. Nontechnical users can download the Windows installers for Instant-NGP software, available on GitHub.

While the installer is available for a wide range of RTX GPUs, the program performs best on the latest-architecture GeForce RTX 40 Series and NVIDIA RTX Ada Generation GPUs.

The “Getting Started With Instant NeRF” guide walks users through the process, including loading one of the primitives, such as “NeRF Fox,” to get a sense of what’s possible. Detailed instructions and video walkthroughs — like the one above — demonstrate how to create NeRFs with custom data, including tips for capturing good input imagery and compiling codebases (if built from source). The guide also covers using the Instant NeRF graphical user interface, optimizing scene parameters and creating an animation from the scene.

The NeRF community also offers many tips and tricks to help users get started. For example, check out the livestream below and this technical blog post.

Show and Tell

Digital artists are composing beautiful scenes and telling fresh stories with NVIDIA Instant NeRF. The Instant NeRF gallery showcases some of the most innovative and thought-provoking examples, viewable as video clips in any web browser.

Here are a few:

  • “Through the Looking Glass” by Karen X. Cheng and James Perlman: A pianist practices her song, part of her daily routine, though there’s nothing mundane about what happens next. The viewer peers into the mirror, a virtual world that can be observed but not traversed; it’s unreachable by normal means. Then, crossing the threshold, it’s revealed that this mirror is in fact a window into an inverted reality that can be explored from within. Which one is real?

  • “Meditation” by Franc Lucent: As soon as they walked into one of many rooms in Nico Santucci’s estate, Lucent knew they needed to turn it into a NeRF. Playing with the dynamic range and reflections in the pond, it presented the artist with an unknown exploration. They were pleased with the softness of the light and the way the NeRF elevates the room into what looks like something out of a dream — the perfect place to meditate. A NeRF can freeze a moment in a way that’s more immersive than a photo or video.

  • “Zeus” by Hugues Bruyère: Bruyère rendered this 3D scene with Instant NeRF using data he previously captured for traditional photogrammetry with mirrorless digital cameras, smartphones, 360-degree cameras and drones. Instant NeRF gives him a powerful tool to help preserve and share cultural artifacts through online libraries, museums, virtual-reality experiences and heritage-conservation projects. This NeRF was trained using a dataset of photos taken with an iPhone at the Royal Ontario Museum.

From Image to Video to Reality

Transforming images into a 3D scene with AI is cool. Stepping into that 3D creation is next level.

Thanks to a recent Instant NeRF update, users can render their scenes from static images and virtually step inside the environments, moving freely within the 3D space. In virtual-reality (VR) environments, users can feel complete immersion into new worlds, all within their headsets.

The potential benefits are nearly endless.

For example, a realtor can create and share a 3D model of a property, offering virtual tours at new levels. Retailers can showcase products in an online shop, powered by a collection of images and AI running on RTX GPUs. These AI models power creativity and are helping drive the accessibility of 3D immersive experiences across other industries.

Instant NeRF comes with the capability to clean up scenes easily in VR, making the creation of high-quality NeRFs more intuitive than ever. Learn more about navigating Instant NeRF spaces in VR.

Download Instant-NGP to get started, and share your creations on social media with the hashtag #InstantNeRF.

Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what’s new and what’s next by subscribing to the AI Decoded newsletter.

Explore data with ease: Use SQL and Text-to-SQL in Amazon SageMaker Studio JupyterLab notebooks

Amazon SageMaker Studio provides a fully managed solution for data scientists to interactively build, train, and deploy machine learning (ML) models. In the process of working on their ML tasks, data scientists typically start their workflow by discovering relevant data sources and connecting to them. They then use SQL to explore, analyze, visualize, and integrate data from various sources before using it in their ML training and inference. Previously, data scientists often found themselves juggling multiple tools to support SQL in their workflow, which hindered productivity.

We’re excited to announce that JupyterLab notebooks in SageMaker Studio now come with built-in support for SQL. Data scientists can now:

  • Connect to popular data services including Amazon Athena, Amazon Redshift, Amazon DataZone, and Snowflake directly within the notebooks
  • Browse and search for databases, schemas, tables, and views, and preview data within the notebook interface
  • Mix SQL and Python code in the same notebook for efficient exploration and transformation of data for use in ML projects
  • Use developer productivity features such as SQL command completion, code formatting assistance, and syntax highlighting to help accelerate code development and improve overall developer productivity

In addition, administrators can securely manage connections to these data services, allowing data scientists to access authorized data without the need to manage credentials manually.

In this post, we guide you through setting up this feature in SageMaker Studio, and walk you through various capabilities of this feature. Then we show how you can enhance the in-notebook SQL experience using Text-to-SQL capabilities provided by advanced large language models (LLMs) to write complex SQL queries using natural language text as input. Finally, to enable a broader audience of users to generate SQL queries from natural language input in their notebooks, we show you how to deploy these Text-to-SQL models using Amazon SageMaker endpoints.

Solution overview

With SageMaker Studio JupyterLab notebook’s SQL integration, you can now connect to popular data sources like Snowflake, Athena, Amazon Redshift, and Amazon DataZone. This new feature enables you to perform various functions.

For example, you can visually explore data sources like databases, tables, and schemas directly from your JupyterLab ecosystem. If your notebook environments are running on SageMaker Distribution 1.6 or higher, look for a new widget on the left side of your JupyterLab interface. This addition enhances data accessibility and management within your development environment.

If you’re on an older SageMaker Distribution image (version 1.5 or lower) or in a custom environment, refer to the appendix for more information.

After you have set up connections (illustrated in the next section), you can list data connections, browse databases and tables, and inspect schemas.

The SageMaker Studio JupyterLab built-in SQL extension also enables you to run SQL queries directly from a notebook. Jupyter notebooks can differentiate between SQL and Python code using the %%sm_sql magic command, which must be placed at the top of any cell that contains SQL code. This command signals to JupyterLab that the following instructions are SQL commands rather than Python code. The output of a query can be displayed directly within the notebook, facilitating seamless integration of SQL and Python workflows in your data analysis.

The output of a query can be displayed visually as HTML tables, as shown in the following screenshot.

They can also be written to a pandas DataFrame.

Prerequisites

Make sure you have satisfied the following prerequisites in order to use the SageMaker Studio notebook SQL experience:

  • SageMaker Studio V2 – Make sure you’re running the most up-to-date version of your SageMaker Studio domain and user profiles. If you’re currently on SageMaker Studio Classic, refer to Migrating from Amazon SageMaker Studio Classic.
  • IAM role – SageMaker requires an AWS Identity and Access Management (IAM) role to be assigned to a SageMaker Studio domain or user profile to manage permissions effectively. An execution role update may be required to bring in data browsing and the SQL run feature. The following example policy grants permissions to list and use AWS Glue, Athena, Amazon Simple Storage Service (Amazon S3), AWS Secrets Manager, and Amazon Redshift resources:
    {
       "Version":"2012-10-17",
       "Statement":[
          {
             "Sid":"SQLRelatedS3Permissions",
             "Effect":"Allow",
             "Action":[
                "s3:ListBucket",
                "s3:GetObject",
                "s3:GetBucketLocation",
                "s3:ListMultipartUploadParts",
                "s3:AbortMultipartUpload",
                "s3:PutObject"
             ],
             "Resource":[
                "arn:aws:s3:::sagemaker*/*",
                "arn:aws:s3:::sagemaker*"
             ]
          },
          {
             "Sid":"GlueDataAccess",
             "Effect":"Allow",
             "Action":[
                "glue:GetDatabases",
                "glue:GetSchema",
                "glue:GetTables",
                "glue:GetDatabase",
                "glue:GetTable",
                "glue:ListSchemas",
                "glue:GetPartitions",
                "glue:GetConnections",
                "glue:GetConnection",
                "glue:CreateConnection"
             ],
             "Resource":[
                "arn:aws:glue:<region>:<account>:table/sagemaker*/*",
                "arn:aws:glue:<region>:<account>:database/sagemaker*",
                "arn:aws:glue:<region>:<account>:schema/sagemaker*",
                "arn:aws:glue:<region>:<account>:connection/sagemaker*",
                "arn:aws:glue:<region>:<account>:registry/sagemaker*",
                "arn:aws:glue:<region>:<account>:catalog"
             ]
          },
          {
             "Sid":"AthenaQueryExecution",
             "Effect":"Allow",
             "Action":[
                "athena:ListDataCatalogs",
                "athena:ListDatabases",
                "athena:ListTableMetadata",
                "athena:StartQueryExecution",
                "athena:GetQueryExecution",
                "athena:RunQuery",
                "athena:StartSession",
                "athena:GetQueryResults",
                "athena:ListWorkGroups",
                "athena:GetDataCatalog",
                "athena:GetWorkGroup"
             ],
             "Resource":[
                "arn:aws:athena:<region>:<account>:workgroup/sagemaker*",
                "arn:aws:athena:<region>:<account>:datacatalog/sagemaker*"
             ]
          },
          {
             "Sid":"GetSecretsAndCredentials",
             "Effect":"Allow",
             "Action":[
                "secretsmanager:GetSecretValue",
                "redshift:GetClusterCredentials"
             ],
             "Resource":[
                "arn:aws:secretsmanager:<region>:<account>:secret:sagemaker*",
                "arn:aws:redshift:<region>:<account>:dbuser:sagemaker*/sagemaker*",
                "arn:aws:redshift:<region>:<account>:dbgroup:sagemaker*/sagemaker*",
                "arn:aws:redshift:<region>:<account>:dbname:sagemaker*/sagemaker*"
             ]
          }
       ]
    }

  • JupyterLab Space – You need access to the updated SageMaker Studio and JupyterLab Space with SageMaker Distribution v1.6 or later image versions. If you’re using custom images for JupyterLab Spaces or older versions of SageMaker Distribution (v1.5 or lower), refer to the appendix for instructions to install necessary packages and modules to enable this feature in your environments. To learn more about SageMaker Studio JupyterLab Spaces, refer to Boost productivity on Amazon SageMaker Studio: Introducing JupyterLab Spaces and generative AI tools.
  • Data source access credentials – This SageMaker Studio notebook feature requires user name and password access to data sources such as Snowflake and Amazon Redshift. Create user name and password-based access to these data sources if you do not already have one. OAuth-based access to Snowflake is not a supported feature as of this writing.
  • Load SQL magic – Before you run SQL queries from a Jupyter notebook cell, it’s essential to load the SQL magics extension. Use the command %load_ext amazon_sagemaker_sql_magic to enable this feature. Additionally, you can run the %sm_sql? command to view a comprehensive list of supported options for querying from a SQL cell. These options include setting a default query limit of 1,000, running a full extraction, and injecting query parameters, among others. This setup allows for flexible and efficient SQL data manipulation directly within your notebook environment.

Create database connections

The built-in SQL browsing and execution capabilities of SageMaker Studio are enhanced by AWS Glue connections. An AWS Glue connection is an AWS Glue Data Catalog object that stores essential data such as login credentials, URI strings, and virtual private cloud (VPC) information for specific data stores. These connections are used by AWS Glue crawlers, jobs, and development endpoints to access various types of data stores. You can use these connections for both source and target data, and even reuse the same connection across multiple crawlers or extract, transform, and load (ETL) jobs.

To explore SQL data sources in the left pane of SageMaker Studio, you first need to create AWS Glue connection objects. These connections facilitate access to different data sources and allow you to explore their schematic data elements.

In the following sections, we walk through the process of creating SQL-specific AWS Glue connectors. This will enable you to access, view, and explore datasets across a variety of data stores. For more detailed information about AWS Glue connections, refer to Connecting to data.

Create an AWS Glue connection

The only way to bring data sources into SageMaker Studio is with AWS Glue connections. You need to create AWS Glue connections with specific connection types. As of this writing, the only supported mechanism of creating these connections is using the AWS Command Line Interface (AWS CLI).

Connection definition JSON file

When connecting to different data sources in AWS Glue, you must first create a JSON file that defines the connection properties—referred to as the connection definition file. This file is crucial for establishing an AWS Glue connection and should detail all the necessary configurations for accessing the data source. For security best practices, it’s recommended to use Secrets Manager to securely store sensitive information such as passwords. Meanwhile, other connection properties can be managed directly through AWS Glue connections. This approach makes sure that sensitive credentials are protected while still making the connection configuration accessible and manageable.

The following is an example of a connection definition JSON:

{
    "ConnectionInput": {
        "Name": <GLUE_CONNECTION_NAME>,
        "Description": <GLUE_CONNECTION_DESCRIPTION>,
        "ConnectionType": "REDSHIFT | SNOWFLAKE | ATHENA",
        "ConnectionProperties": {
            "PythonProperties": "{\"aws_secret_arn\": <SECRET_ARN>, \"database\": <...>}"
        }
    }
}

When setting up AWS Glue connections for your data sources, there are a few important guidelines to follow to provide both functionality and security:

  • Stringification of properties – Within the PythonProperties key, make sure all properties are stringified key-value pairs. It’s crucial to properly escape double-quotes by using the backslash (\) character where necessary. This helps maintain the correct format and avoid syntax errors in your JSON.
  • Handling sensitive information – Although it’s possible to include all connection properties within PythonProperties, it is advisable not to include sensitive details like passwords directly in these properties. Instead, use Secrets Manager for handling sensitive information. This approach secures your sensitive data by storing it in a controlled and encrypted environment, away from the main configuration files.

Create an AWS Glue connection using the AWS CLI

After you include all the necessary fields in your connection definition JSON file, you’re ready to establish an AWS Glue connection for your data source using the AWS CLI and the following command:

aws --region <REGION> glue create-connection \
--cli-input-json file:///path/to/file/connection/definition/file.json

This command initiates a new AWS Glue connection based on the specifications detailed in your JSON file. The following is a quick breakdown of the command components:

  • --region <REGION> – This specifies the AWS Region where your AWS Glue connection will be created. It is crucial to select the Region where your data sources and other services are located to minimize latency and comply with data residency requirements.
  • --cli-input-json file:///path/to/file/connection/definition/file.json – This parameter directs the AWS CLI to read the input configuration from a local file that contains your connection definition in JSON format.

You should be able to create AWS Glue connections with the preceding AWS CLI command from your Studio JupyterLab terminal. On the File menu, choose New and Terminal.

If the create-connection command runs successfully, you should see your data source listed in the SQL browser pane. If you don’t see your data source listed, choose Refresh to update the cache.
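
You can also verify the connection from the same terminal. The name is whatever you set in the ConnectionInput section of your definition file:

aws --region <REGION> glue get-connection --name <GLUE_CONNECTION_NAME>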

Create a Snowflake connection

In this section, we focus on integrating a Snowflake data source with SageMaker Studio. Creating Snowflake accounts, databases, and warehouses falls outside the scope of this post. To get started with Snowflake, refer to the Snowflake user guide. In this post, we concentrate on creating a Snowflake definition JSON file and establishing a Snowflake data source connection using AWS Glue.

Create a Secrets Manager secret

You can connect to your Snowflake account by either using a user ID and password or using private keys. To connect with a user ID and password, you need to securely store your credentials in Secrets Manager. As mentioned previously, although it’s possible to embed this information under PythonProperties, it is not recommended to store sensitive information in plain text format. Always make sure that sensitive data is handled securely to avoid potential security risks.

To store information in Secrets Manager, complete the following steps (a CLI equivalent follows the list):

  1. On the Secrets Manager console, choose Store a new secret.
  2. For Secret type, choose Other type of secret.
  3. For the key-value pair, choose Plaintext and enter the following:
    {
        "user":"TestUser",
        "password":"MyTestPassword",
        "account":"AWSSAGEMAKERTEST"
    }

  4. Enter a name for your secret, such as sm-sql-snowflake-secret.
  5. Leave the other settings as default or customize if required.
  6. Create the secret.
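
If you prefer to script this step, the following sketch creates the same secret with the AWS CLI. The credential values are the placeholders from step 3; replace them with your own:

aws secretsmanager create-secret \
--name sm-sql-snowflake-secret \
--secret-string '{"user":"TestUser","password":"MyTestPassword","account":"AWSSAGEMAKERTEST"}' \
--region <REGION>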

Create an AWS Glue connection for Snowflake

As discussed earlier, AWS Glue connections are essential for accessing any connection from SageMaker Studio. You can find a list of all supported connection properties for Snowflake. The following is a sample connection definition JSON for Snowflake. Replace the placeholder values with the appropriate values before saving it to disk:

{
    "ConnectionInput": {
        "Name": "Snowflake-Airlines-Dataset",
        "Description": "SageMaker-Snowflake Airlines Dataset",
        "ConnectionType": "SNOWFLAKE",
        "ConnectionProperties": {
            "PythonProperties": "{\"aws_secret_arn\": \"arn:aws:secretsmanager:<region>:<account>:secret:sm-sql-snowflake-secret\", \"database\": \"SAGEMAKERDEMODATABASE1\"}"
        }
    }
}

To create an AWS Glue connection object for the Snowflake data source, use the following command:

aws --region <REGION> glue create-connection \
--cli-input-json file:///path/to/file/snowflake/definition/file.json

This command creates a new Snowflake data source connection in your SQL browser pane that’s browsable, and you can run SQL queries against it from your JupyterLab notebook cell.

Create an Amazon Redshift connection

Amazon Redshift is a fully managed, petabyte-scale data warehouse service that simplifies and reduces the cost of analyzing all your data using standard SQL. The procedure for creating an Amazon Redshift connection closely mirrors that for a Snowflake connection.

Create a Secrets Manager secret

Similar to the Snowflake setup, to connect to Amazon Redshift using a user ID and password, you need to securely store the secrets information in Secrets Manager. Complete the following steps:

  1. On the Secrets Manager console, choose Store a new secret.
  2. For Secret type, choose Credentials for Amazon Redshift cluster.
  3. Enter the credentials used to log in to access Amazon Redshift as a data source.
  4. Choose the Redshift cluster associated with the secrets.
  5. Enter a name for the secret, such as sm-sql-redshift-secret.
  6. Leave the other settings as default or customize if required.
  7. Create the secret.

By following these steps, you make sure your connection credentials are handled securely, using the robust security features of AWS to manage sensitive data effectively.

Create an AWS Glue connection for Amazon Redshift

To set up a connection with Amazon Redshift using a JSON definition, fill in the necessary fields and save the following JSON configuration to disk:

{
    "ConnectionInput": {
        "Name": "Redshift-US-Housing-Dataset",
        "Description": "sagemaker redshift us housing dataset connection",
        "ConnectionType": "REDSHIFT",
        "ConnectionProperties": {
            "PythonProperties": "{\"aws_secret_arn\": \"arn:aws:secretsmanager:<region>:<account>:secret:sm-sql-redshift-secret\", \"database\": \"us-housing-database\"}"
        }
    }
}

To create an AWS Glue connection object for the Redshift data source, use the following AWS CLI command:

aws --region <REGION> glue create-connection \
--cli-input-json file:///path/to/file/redshift/definition/file.json

This command creates a connection in AWS Glue linked to your Redshift data source. If the command runs successfully, you will be able to see your Redshift data source within the SageMaker Studio JupyterLab notebook, ready for running SQL queries and performing data analysis.

Create an Athena connection

Athena is a fully managed SQL query service from AWS that enables analysis of data stored in Amazon S3 using standard SQL. To set up an Athena connection as a data source in the JupyterLab notebook’s SQL browser, you need to create an Athena sample connection definition JSON. The following JSON structure configures the necessary details to connect to Athena, specifying the data catalog, the S3 staging directory, and the Region:

{
    "ConnectionInput": {
        "Name": "Athena-Credit-Card-Fraud",
        "Description": "SageMaker-Athena Credit Card Fraud",
        "ConnectionType": "ATHENA",
        "ConnectionProperties": {
            "PythonProperties": "{\"catalog_name\": \"AwsDataCatalog\", \"s3_staging_dir\": \"s3://sagemaker-us-east-2-123456789/athena-data-source/credit-card-fraud/\", \"region_name\": \"us-east-2\"}"
        }
    }
}

To create an AWS Glue connection object for the Athena data source, use the following AWS CLI command:

aws --region <REGION> glue create-connection \
--cli-input-json file:///path/to/file/athena/definition/file.json

If the command is successful, you will be able to access the Athena data catalog and tables directly from the SQL browser within your SageMaker Studio JupyterLab notebook.

Query data from multiple sources

If you have multiple data sources integrated into SageMaker Studio through the built-in SQL browser and the notebook SQL feature, you can quickly run queries and effortlessly switch between data source backends in subsequent cells within a notebook. This capability allows for seamless transitions between different databases or data sources during your analysis workflow.

You can run queries against a diverse collection of data source backends and bring the results directly into the Python space for further analysis or visualization. This is facilitated by the %%sm_sql magic command available in SageMaker Studio notebooks. To output the results of your SQL query into a pandas DataFrame, there are two options:

  • From your notebook cell toolbar, choose the output type DataFrame and name your DataFrame variable
  • Append the following parameter to your %%sm_sql command:
    --output '{"format": "DATAFRAME", "dataframe_name": "df"}'

The following diagram illustrates this workflow and showcases how you can effortlessly run queries across various sources in subsequent notebook cells, as well as train a SageMaker model using training jobs or directly within the notebook using local compute. Additionally, the diagram highlights how the built-in SQL integration of SageMaker Studio simplifies the processes of extraction and building directly within the familiar environment of a JupyterLab notebook cell.

Text to SQL: Using natural language to enhance query authoring

SQL is a complex language that requires an understanding of databases, tables, syntaxes, and metadata. Today, generative artificial intelligence (AI) can enable you to write complex SQL queries without requiring in-depth SQL experience. The advancement of LLMs has significantly impacted natural language processing (NLP)-based SQL generation, allowing for the creation of precise SQL queries from natural language descriptions—a technique referred to as Text-to-SQL. However, it is essential to acknowledge the inherent differences between human language and SQL. Human language can sometimes be ambiguous or imprecise, whereas SQL is structured, explicit, and unambiguous. Bridging this gap and accurately converting natural language into SQL queries can present a formidable challenge. When provided with appropriate prompts, LLMs can help bridge this gap by understanding the intent behind the human language and generating accurate SQL queries accordingly.

With the release of the SageMaker Studio in-notebook SQL query feature, SageMaker Studio makes it straightforward to inspect databases and schemas, and author, run, and debug SQL queries without ever leaving the Jupyter notebook IDE. This section explores how the Text-to-SQL capabilities of advanced LLMs can facilitate the generation of SQL queries using natural language within Jupyter notebooks. We employ the cutting-edge Text-to-SQL model defog/sqlcoder-7b-2 in conjunction with Jupyter AI, a generative AI assistant specifically designed for Jupyter notebooks, to create complex SQL queries from natural language. By using this advanced model, we can effortlessly and efficiently create complex SQL queries using natural language, thereby enhancing our SQL experience within notebooks.

Notebook prototyping using the Hugging Face Hub

To begin prototyping, you need the following:

  • GitHub code – The code presented in this section is available in the following GitHub repo and by referencing the example notebook.
  • JupyterLab Space – Access to a SageMaker Studio JupyterLab Space backed by GPU-based instances is essential. For the defog/sqlcoder-7b-2 model, a 7B parameter model, using an ml.g5.2xlarge instance is recommended. Alternatives such as defog/sqlcoder-70b-alpha or defog/sqlcoder-34b-alpha are also viable for natural language to SQL conversion, but larger instance types may be required for prototyping. Make sure you have the quota to launch a GPU-backed instance by navigating to the Service Quotas console, searching for SageMaker, and searching for Studio JupyterLab Apps running on <instance type>.

Launch a new GPU-backed JupyterLab Space from your SageMaker Studio. It’s recommended to create a new JupyterLab Space with at least 75 GB of Amazon Elastic Block Store (Amazon EBS) storage for a 7B parameter model.

  • Hugging Face Hub – If your SageMaker Studio domain has access to download models from the Hugging Face Hub, you can use the AutoModelForCausalLM class from huggingface/transformers to automatically download models and pin them to your local GPUs. The model weights will be stored in your local machine’s cache. See the following code:
    model_id = "defog/sqlcoder-7b-2" # or use "defog/sqlcoder-34b-alpha", "defog/sqlcoder-70b-alpha"
    
    # download model and tokenizer in fp16 and pin model to local notebook GPUs
    model = AutoModelForCausalLM.from_pretrained(
        model_id, 
        device_map="auto",
        torch_dtype=torch.float16
    )
    
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    tokenizer.pad_token = tokenizer.eos_token

After the model has been fully downloaded and loaded into memory, you should observe an increase in GPU utilization on your local machine. This indicates that the model is actively using the GPU resources for computational tasks. You can verify this in your own JupyterLab space by running nvidia-smi (for a one-time display) or nvidia-smi --loop=1 (to repeat every second) from your JupyterLab terminal.

Text-to-SQL models excel at understanding the intent and context of a user’s request, even when the language used is conversational or ambiguous. The process involves translating natural language inputs into the correct database schema elements, such as table names, column names, and conditions. However, an off-the-shelf Text-to-SQL model will not inherently know the structure of your data warehouse, the specific database schemas, or be able to accurately interpret the content of a table based solely on column names. To effectively use these models to generate practical and efficient SQL queries from natural language, it is necessary to adapt the SQL text-generation model to your specific warehouse database schema. This adaptation is facilitated through the use of LLM prompts. The following is a recommended prompt template for the defog/sqlcoder-7b-2 Text-to-SQL model, divided into four parts:

  • Task – This section should specify a high-level task to be accomplished by the model. It should include the type of database backend (such as Amazon RDS, PostgreSQL, or Amazon Redshift) to make the model aware of any nuanced syntactical differences that may affect the generation of the final SQL query.
  • Instructions – This section should define task boundaries and domain awareness for the model, and may include few-shot examples to guide the model in generating finely tuned SQL queries.
  • Database Schema – This section should detail your warehouse database schemas, outlining the relationships between tables and columns to aid the model in understanding the database structure.
  • Answer – This section is reserved for the model to output the SQL query response to the natural language input.

An example of the database schema and prompt used in this section is available in the GitHub Repo.

### Task
Generate a SQL query to answer [QUESTION]{user_question}[/QUESTION]

### Instructions
- If you cannot answer the question with the available database schema, return 'I do not know'

### Database Schema
The query will run on a database with the following schema:
{table_metadata_string_DDL_statements}

### Answer
Given the database schema, here is the SQL query that answers
 [QUESTION]
    {user_question}
 [/QUESTION]

[SQL]

Prompt engineering is not just about forming questions or statements; it’s a nuanced art and science that significantly impacts the quality of interactions with an AI model. The way you craft a prompt can profoundly influence the nature and usefulness of the AI’s response. This skill is pivotal in maximizing the potential of AI interactions, especially in complex tasks requiring specialized understanding and detailed responses.

It’s important to have the option to quickly build and test a model’s response for a given prompt and optimize the prompt based on the response. JupyterLab notebooks provide the ability to receive instant model feedback from a model running on local compute and optimize the prompt and tune a model’s response further or change a model entirely. In this post, we use a SageMaker Studio JupyterLab notebook backed by ml.g5.2xlarge’s NVIDIA A10G 24 GB GPU to run Text-to-SQL model inference on the notebook and interactively build our model prompt until the model’s response is sufficiently tuned to provide responses that are directly executable in JupyterLab’s SQL cells. To run model inference and simultaneously stream model responses, we use a combination of model.generate and TextIteratorStreamer as defined in the following code:

streamer = TextIteratorStreamer(
    tokenizer=tokenizer, 
    timeout=240.0, 
    skip_prompt=True, 
    skip_special_tokens=True
)


def llm_generate_query(user_question):
    """ Generate text-gen SQL responses"""
    
    updated_prompt = prompt.format(question=user_question)
    inputs = tokenizer(updated_prompt, return_tensors="pt").to("cuda")
    
    return model.generate(
        **inputs,
        num_return_sequences=1,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.eos_token_id,
        max_new_tokens=1024,
        temperature=0.1,
        do_sample=False,
        num_beams=1, 
        streamer=streamer,
    )

The model’s output can be decorated with SageMaker SQL magic %%sm_sql ..., which allows the JupyterLab notebook to identify the cell as a SQL cell.

Host Text-to-SQL models as SageMaker endpoints

At the end of the prototyping stage, we have selected our preferred Text-to-SQL LLM, an effective prompt format, and an appropriate instance type for hosting the model (either single-GPU or multi-GPU). SageMaker facilitates the scalable hosting of custom models through the use of SageMaker endpoints. These endpoints can be defined according to specific criteria, allowing for the deployment of LLMs as endpoints. This capability enables you to scale the solution to a wider audience, allowing users to generate SQL queries from natural language inputs using custom hosted LLMs. The following diagram illustrates this architecture.

To host your LLM as a SageMaker endpoint, you generate several artifacts.

The first artifact is model weights. SageMaker Deep Java Library (DJL) Serving containers allow you to set up configurations through a meta serving.properties file, which enables you to direct how models are sourced—either directly from the Hugging Face Hub or by downloading model artifacts from Amazon S3. If you specify model_id=defog/sqlcoder-7b-2, DJL Serving will attempt to directly download this model from the Hugging Face Hub. However, you may incur networking ingress/egress charges each time the endpoint is deployed or elastically scaled. To avoid these charges and potentially speed up the download of model artifacts, it is recommended to skip using model_id in serving.properties and save model weights as S3 artifacts and only specify them with s3url=s3://path/to/model/bin.

Saving a model (with its tokenizer) to disk and uploading it to Amazon S3 can be accomplished with just a few lines of code:

# save model and tokenizer to local disk
model.save_pretrained(local_model_path)
tokenizer.save_pretrained(local_model_path)
...
...
...
# upload file to s3
s3_bucket_name = "<my llm artifact bucket name>"
# s3 prefix to save model weights and tokenizer defs
model_s3_prefix = "sqlcoder-7b-instruct/weights"
# s3 prefix to store the meta-model (serving code and configuration)
meta_model_s3_prefix = "sqlcoder-7b-instruct/meta-model"

sagemaker.s3.S3Uploader.upload(local_model_path,  f"s3://{s3_bucket_name}/{model_s3_prefix}")

You also use a database prompt file. In this setup, the database prompt is composed of Task, Instructions, Database Schema, and Answer sections. For the current architecture, we allocate a separate prompt file for each database schema. However, there is flexibility to expand this setup to include multiple databases per prompt file, allowing the model to run composite joins across databases on the same server. During our prototyping stage, we save the database prompt as a text file named <Database-Glue-Connection-Name>.prompt, where Database-Glue-Connection-Name corresponds to the connection name visible in your JupyterLab environment. For instance, this post refers to a Snowflake connection named Airlines_Dataset, so the database prompt file is named Airlines_Dataset.prompt. This file is then stored on Amazon S3 and subsequently read and cached by our model serving logic.

Moreover, this architecture permits any authorized users of this endpoint to define, store, and generate natural language to SQL queries without the need for multiple redeployments of the model. We use the following example of a database prompt to demonstrate the Text-to-SQL functionality.

Next, you generate custom model service logic. In this section, you outline a custom inference logic named model.py. This script is designed to optimize the performance and integration of our Text-to-SQL services:

  • Define the database prompt file caching logic – To minimize latency, we implement a custom logic for downloading and caching database prompt files. This mechanism makes sure that prompts are readily available, reducing the overhead associated with frequent downloads.
  • Define custom model inference logic – To enhance inference speed, our text-to-SQL model is loaded in the float16 precision format and then converted into a DeepSpeed model. This step allows for more efficient computation. Additionally, within this logic, you specify which parameters users can adjust during inference calls to tailor the functionality according to their needs.
  • Define custom input and output logic – Establishing clear and customized input/output formats is essential for smooth integration with downstream applications. One such application is JupyterAI, which we discuss in the subsequent section.
%%writefile {meta_model_filename}/model.py
...

predictor = None
prompt_for_db_dict_cache = {}

def download_prompt_from_s3(prompt_filename):

    print(f"downloading prompt file: {prompt_filename}")
    s3 = boto3.resource('s3')
    ...


def get_model(properties):
    
    ...
    print(f"Loading model from {cwd}")
    model = AutoModelForCausalLM.from_pretrained(
        cwd, 
        low_cpu_mem_usage=True, 
        torch_dtype=torch.bfloat16
    )
    model = deepspeed.init_inference(
        model, 
        mp_size=properties["tensor_parallel_degree"]
    )
    
    ...


def handle(inputs: Input) -> None:

    ...

    global predictor
    if not predictor:
        predictor = get_model(inputs.get_properties())

    ...
    result = f"""%%sm_sql --metastore-id {prompt_for_db_key.split('.')[0]} --metastore-type GLUE_CONNECTION\n\n{result}\n"""
    result = [{'generated_text': result}]
    
    return Output().add(result)

Additionally, we include a serving.properties file, which acts as a global configuration file for models hosted using DJL serving. For more information, refer to Configurations and settings.
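
The following is a minimal sketch of what such a file could contain, written from a JupyterLab terminal. Only s3url and tensor_parallel_degree are referenced elsewhere in this post; the engine value and the exact option names are assumptions, so consult the DJL Serving documentation for the settings that match your container version:

cat > deepspeed-djl-serving-7b/serving.properties <<'EOF'
engine=DeepSpeed
option.tensor_parallel_degree=1
option.s3url=s3://<my llm artifact bucket name>/sqlcoder-7b-instruct/weights/
EOF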

Lastly, you can also include a requirements.txt file to define additional modules required for inference and package everything into a tarball for deployment.

See the following code:

os.system(f"tar czvf {meta_model_filename}.tar.gz ./{meta_model_filename}/")

>>>./deepspeed-djl-serving-7b/
>>>./deepspeed-djl-serving-7b/serving.properties
>>>./deepspeed-djl-serving-7b/model.py
>>>./deepspeed-djl-serving-7b/requirements.txt
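
The tarball then needs to be available in Amazon S3 before you create the SageMaker model and endpoint. The following is a minimal sketch of the upload, reusing the meta-model prefix defined earlier (the bucket name is a placeholder):

aws s3 cp deepspeed-djl-serving-7b.tar.gz \
s3://<my llm artifact bucket name>/sqlcoder-7b-instruct/meta-model/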

Integrate your endpoint with the SageMaker Studio Jupyter AI assistant

Jupyter AI is an open source tool that brings generative AI to Jupyter notebooks, offering a robust and user-friendly platform for exploring generative AI models. It enhances productivity in JupyterLab and Jupyter notebooks by providing features like the %%ai magic for creating a generative AI playground inside notebooks, a native chat UI in JupyterLab for interacting with AI as a conversational assistant, and support for a wide array of LLMs from providers like Amazon Titan, AI21, Anthropic, Cohere, and Hugging Face or managed services like Amazon Bedrock and SageMaker endpoints. For this post, we use Jupyter AI’s out-of-the-box integration with SageMaker endpoints to bring the Text-to-SQL capability into JupyterLab notebooks. The Jupyter AI tool comes pre-installed in all SageMaker Studio JupyterLab Spaces backed by SageMaker Distribution images; end-users are not required to make any additional configurations to start using the Jupyter AI extension to integrate with a SageMaker hosted endpoint. In this section, we discuss the two ways to use the integrated Jupyter AI tool.

Jupyter AI inside a notebook using magics

Jupyter AI’s %%ai magic command allows you to transform your SageMaker Studio JupyterLab notebooks into a reproducible generative AI environment. To begin using AI magics, make sure you have loaded the jupyter_ai_magics extension to use %%ai magic, and additionally load amazon_sagemaker_sql_magic to use %%sm_sql magic:

# load sm_sql magic extension and ai magic extension
%load_ext jupyter_ai_magics
%load_ext amazon_sagemaker_sql_magic

To run a call to your SageMaker endpoint from your notebook using the %%ai magic command, provide the following parameters and structure the command as follows:

  • --region-name – Specify the Region where your endpoint is deployed. This makes sure that the request is routed to the correct geographic location.
  • --request-schema – Include the schema of the input data. This schema outlines the expected format and types of the input data that your model needs to process the request.
  • --response-path – Define the path within the response object where the output of your model is located. This path is used to extract the relevant data from the response returned by your model.
  • -f (optional) – This output formatter flag indicates the type of output returned by the model. In a Jupyter notebook, if the output is code, set this flag to code so the output is formatted as executable code at the top of the notebook cell, followed by a free text input area for user interaction.

For example, the command in a Jupyter notebook cell might look like the following code:

%%ai sagemaker-endpoint:<endpoint-name> --region-name=us-east-1 
--request-schema={
    "inputs":"<prompt>", 
    "parameters":{
        "temperature":0.1,
        "top_p":0.2,
        "max_new_tokens":1024,
        "return_full_text":false
    }, 
    "db_prompt":"Airlines_Dataset.prompt"
  } 
--response-path=[0].generated_text -f code

My natural language query goes here...

Jupyter AI chat window

Alternatively, you can interact with SageMaker endpoints through a built-in user interface, simplifying the process of generating queries or engaging in dialogue. Before beginning to chat with your SageMaker endpoint, configure the relevant settings in Jupyter AI for the SageMaker endpoint, as shown in the following screenshot.

Conclusion

SageMaker Studio now simplifies and streamlines the data scientist workflow by integrating SQL support into JupyterLab notebooks. This allows data scientists to focus on their tasks without the need to manage multiple tools. Furthermore, the new built-in SQL integration in SageMaker Studio enables data personas to effortlessly generate SQL queries using natural language text as input, thereby accelerating their workflow.

We encourage you to explore these features in SageMaker Studio. For more information, refer to Prepare data with SQL in Studio.

Appendix

Enable the SQL browser and notebook SQL cell in custom environments

If you’re not using a SageMaker Distribution image, or are using SageMaker Distribution image version 1.5 or below, run the following commands to enable the SQL browsing feature inside your JupyterLab environment:

npm install -g vscode-jsonrpc
npm install -g sql-language-server
pip install amazon-sagemaker-sql-execution==0.1.0
pip install amazon-sagemaker-sql-editor
restart-jupyter-server

Relocate the SQL browser widget

JupyterLab widgets can be relocated. Depending on your preference, you can move a widget to either side of the JupyterLab widget pane. If you prefer, you can move the SQL browser widget to the opposite side of the sidebar (right to left) by right-clicking the widget icon and choosing Switch Sidebar Side.


About the authors

Pranav Murthy is an AI/ML Specialist Solutions Architect at AWS. He focuses on helping customers build, train, deploy and migrate machine learning (ML) workloads to SageMaker. He previously worked in the semiconductor industry developing large computer vision (CV) and natural language processing (NLP) models to improve semiconductor processes using state of the art ML techniques. In his free time, he enjoys playing chess and traveling. You can find Pranav on LinkedIn.

Varun Shah is a Software Engineer working on Amazon SageMaker Studio at Amazon Web Services. He is focused on building interactive ML solutions which simplify data processing and data preparation journeys . In his spare time, Varun enjoys outdoor activities including hiking and skiing, and is always up for discovering new, exciting places.

Sumedha Swamy is a Principal Product Manager at Amazon Web Services where he leads SageMaker Studio team in its mission to develop IDE of choice for data science and machine learning. He has dedicated the past 15 years building Machine Learning based consumer and enterprise products.

Bosco Albuquerque is a Sr. Partner Solutions Architect at AWS and has over 20 years of experience working with database and analytics products from enterprise database vendors and cloud providers. He has helped technology companies design and implement data analytics solutions and products.

Read More

Distributed training and efficient scaling with the Amazon SageMaker Model Parallel and Data Parallel Libraries

Distributed training and efficient scaling with the Amazon SageMaker Model Parallel and Data Parallel Libraries

There has been tremendous progress in the field of distributed deep learning for large language models (LLMs), especially after the release of ChatGPT in December 2022. LLMs continue to grow in size with billions or even trillions of parameters, and they often won’t fit into a single accelerator device such as GPU or even a single node such as ml.p5.32xlarge because of memory limitations. Customers training LLMs often must distribute their workload across hundreds or even thousands of GPUs. Enabling training at such scale remains a challenge in distributed training, and training efficiently in such a large system is another equally important problem. Over the past years, the distributed training community has introduced 3D parallelism (data parallelism, pipeline parallelism, and tensor parallelism) and other techniques (such as sequence parallelism and expert parallelism) to address such challenges.

In December 2023, Amazon announced the release of the SageMaker model parallel library 2.0 (SMP), which achieves state-of-the-art efficiency in large model training, together with the SageMaker distributed data parallelism library (SMDDP). This release is a significant update from 1.x: SMP is now integrated with open source PyTorch Fully Sharded Data Parallel (FSDP) APIs, which allows you to use a familiar interface when training large models, and is compatible with Transformer Engine (TE), unlocking tensor parallelism techniques alongside FSDP for the first time. To learn more about the release, refer to Amazon SageMaker model parallel library now accelerates PyTorch FSDP workloads by up to 20%.

In this post, we explore the performance benefits of Amazon SageMaker (including SMP and SMDDP), and how you can use the library to train large models efficiently on SageMaker. We demonstrate the performance of SageMaker with benchmarks on ml.p4d.24xlarge clusters up to 128 instances, and FSDP mixed precision with bfloat16 for the Llama 2 model. We start with a demonstration of near-linear scaling efficiencies for SageMaker, followed by analyzing contributions from each feature for optimal throughput, and end with efficient training with various sequence lengths up to 32,768 through tensor parallelism.

Near-linear scaling with SageMaker

To reduce the overall training time for LLMs, preserving high throughput when scaling to large clusters (thousands of GPUs) is crucial given the inter-node communication overhead. In this post, we demonstrate robust and near-linear scaling efficiencies (obtained by varying the number of GPUs for a fixed total problem size) on p4d instances using both SMP and SMDDP.

In this section, we demonstrate SMP’s near-linear scaling performance. Here we train Llama 2 models of various sizes (7B, 13B, and 70B parameters) using a fixed sequence length of 4,096, the SMDDP backend for collective communication, TE enabled, a global batch size of 4 million, with 16 to 128 p4d nodes. The following table summarizes our optimal configuration and training performance (model TFLOPs per second).

Model size Number of nodes TFLOPs* sdp* tp* offload* Scaling efficiency
7B 16 136.76 32 1 N 100.0%
32 132.65 64 1 N 97.0%
64 125.31 64 1 N 91.6%
128 115.01 64 1 N 84.1%
13B 16 141.43 32 1 Y 100.0%
32 139.46 256 1 N 98.6%
64 132.17 128 1 N 93.5%
128 120.75 128 1 N 85.4%
70B 32 154.33 256 1 Y 100.0%
64 149.60 256 1 N 96.9%
128 136.52 64 2 N 88.5%

*At the given model size, sequence length, and number of nodes, we show the globally optimal throughput and configurations after exploring various sdp, tp, and activation offloading combinations.

The preceding table summarizes the optimal throughput numbers subject to sharded data parallel (sdp) degree (typically using FSDP hybrid sharding instead of full sharding, with more details in the next section), tensor parallel (tp) degree, and activation offloading value changes, demonstrating a near-linear scaling for SMP together with SMDDP. For example, given the Llama 2 model size 7B and sequence length 4,096, it achieves scaling efficiencies of 97.0%, 91.6%, and 84.1% (relative to 16 nodes) at 32, 64, and 128 nodes, respectively; scaling efficiency here is the TFLOPs at a given node count divided by the 16-node baseline (for instance, 132.65 / 136.76 ≈ 97.0% at 32 nodes). The scaling efficiencies are stable across different model sizes and increase slightly as the model size gets larger.

SMP and SMDDP also demonstrate similar scaling efficiencies for other sequence lengths such as 2,048 and 8,192.

SageMaker model parallel library 2.0 performance: Llama 2 70B

Model sizes have continued to grow over the past years, along with frequent state-of-the-art performance updates in the LLM community. In this section, we illustrate performance in SageMaker for the Llama 2 model using a fixed model size 70B, sequence length of 4,096, and a global batch size of 4 million. To compare with the previous table’s globally optimal configuration and throughput (with SMDDP backend, typically FSDP hybrid sharding and TE), the following table extends to other optimal throughputs (potentially with tensor parallelism) with extra specifications on the distributed backend (NCCL and SMDDP), FSDP sharding strategies (full sharding and hybrid sharding), and enabling TE or not (default).

Model size  Number of nodes  TFLOPS (#0: NCCL full sharding, #1: SMDDP full sharding, #2: SMDDP hybrid sharding, #3: SMDDP hybrid sharding with TE)  TFLOPs #3 config (sdp*, tp*, offload*)  TFLOPs improvement over baseline (#0 → #1, #1 → #2, #2 → #3, #0 → #3)
70B 32 150.82 149.90 150.05 154.33 256 1 Y -0.6% 0.1% 2.9% 2.3%
64 144.38 144.38 145.42 149.60 256 1 N 0.0% 0.7% 2.9% 3.6%
128 68.53 103.06 130.66 136.52 64 2 N 50.4% 26.8% 4.5% 99.2%

*At the given model size, sequence length, and number of nodes, we show the globally optimal throughput and configuration after exploring various sdp, tp, and activation offloading combinations.

The latest release of SMP and SMDDP supports multiple features including native PyTorch FSDP, extended and more flexible hybrid sharding, transformer engine integration, tensor parallelism, and optimized all gather collective operation. To better understand how SageMaker achieves efficient distributed training for LLMs, we explore incremental contributions from SMDDP and the following SMP core features:

  • SMDDP enhancement over NCCL with FSDP full sharding
  • Replacing FSDP full sharding with hybrid sharding, which reduces communication cost to improve throughput
  • A further boost to throughput with TE, even when tensor parallelism is disabled
  • At lower resource settings, activation offloading might be able to enable training that would otherwise be infeasible or very slow due to high memory pressure

FSDP full sharding: SMDDP enhancement over NCCL

As shown in the previous table, when models are fully sharded with FSDP, although NCCL (TFLOPs #0) and SMDDP (TFLOPs #1) throughputs are comparable at 32 or 64 nodes, there is a huge improvement of 50.4% from NCCL to SMDDP at 128 nodes.

At smaller model sizes, we observe consistent and significant improvements with SMDDP over NCCL, starting at smaller cluster sizes, because SMDDP is able to mitigate the communication bottleneck effectively.

FSDP hybrid sharding to reduce communication cost

In SMP 1.0, we launched sharded data parallelism, a distributed training technique powered by Amazon in-house MiCS technology. In SMP 2.0, we introduce SMP hybrid sharding, an extensible and more flexible hybrid sharding technique that allows models to be sharded among a subset of GPUs, instead of all training GPUs, which is the case for FSDP full sharding. It’s useful for medium-sized models that don’t need to be sharded across the entire cluster in order to satisfy per-GPU memory constraints. This leads to clusters having more than one model replica and each GPU communicating with fewer peers at runtime.

SMP’s hybrid sharding enables efficient model sharding over a wider range, from the smallest shard degree with no out of memory issues up to the whole cluster size (which equates to full sharding).
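
Because SMP 2.0 is integrated with the open source PyTorch FSDP APIs, the idea behind hybrid sharding can be illustrated with plain FSDP. The following is a minimal sketch with a hypothetical model constructor; SMP itself exposes its shard degree and backend through the SageMaker training job configuration rather than through code like this:

import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, ShardingStrategy

# Hybrid sharding shards parameters within a node and replicates them across nodes,
# so each GPU communicates with fewer peers than under full sharding.
dist.init_process_group("nccl")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

model = build_model()  # hypothetical model constructor

fsdp_model = FSDP(
    model,
    sharding_strategy=ShardingStrategy.HYBRID_SHARD,
    device_id=torch.cuda.current_device(),
)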

The following figure illustrates the throughput dependence on sdp at tp = 1 for simplicity. Although it’s not necessarily the same as the optimal tp value for NCCL or SMDDP full sharding in the previous table, the numbers are quite close. It clearly validates the value of switching from full sharding to hybrid sharding at a large cluster size of 128 nodes, which is applicable to both NCCL and SMDDP. For smaller model sizes, significant improvements with hybrid sharding start at smaller cluster sizes, and the difference keeps increasing with cluster size.

Improvements with TE

TE is designed to accelerate LLM training on NVIDIA GPUs. Although we don’t use FP8 because it’s unsupported on p4d instances, we still see a significant speedup with TE on p4d.

On top of sharded data parallel training with the SMDDP backend, TE introduces a consistent throughput boost across all cluster sizes (the only exception is full sharding at 128 nodes), even when tensor parallelism is disabled (tensor parallel degree is 1).

For smaller model sizes or various sequence lengths, the TE boost is stable and non-trivial, in the range of approximately 3–7.6%.

Activation offloading at low resource settings

At low resource settings (given a small number of nodes), FSDP might experience a high memory pressure (or even out of memory in the worst case) when activation checkpointing is enabled. For such scenarios bottlenecked by memory, turning on activation offloading is potentially an option to improve performance.

For example, as we saw previously, although the Llama 2 13B model with sequence length 4,096 trains optimally with activation checkpointing and without activation offloading when at least 32 nodes are available, it achieves the best throughput with activation offloading when limited to 16 nodes.

Enable training with long sequences: SMP tensor parallelism

Longer sequence lengths are desired for long conversations and context, and are getting more attention in the LLM community. Therefore, we report various long sequence throughputs in the following table. The table shows optimal throughputs for Llama 2 training on SageMaker, with various sequence lengths from 2,048 up to 32,768. At sequence length 32,768, native FSDP training is infeasible with 32 nodes at a global batch size of 4 million.

Model size  Sequence length  Number of nodes  TFLOPS (Native FSDP and NCCL)  TFLOPS (SMP and SMDDP)  SMP improvement
7B 2048 32 129.25 138.17 6.9%
4096 32 124.38 132.65 6.6%
8192 32 115.25 123.11 6.8%
16384 32 100.73 109.11 8.3%
32768 32 N.A. 82.87 .
13B 2048 32 137.75 144.28 4.7%
4096 32 133.30 139.46 4.6%
8192 32 125.04 130.08 4.0%
16384 32 111.58 117.01 4.9%
32768 32 N.A. 92.38 .
*Across the rows above, the maximum SMP improvement is 8.3% and the median is 5.8%.

When the cluster size is large and the global batch size is fixed, some model training might be infeasible with native PyTorch FSDP, which lacks built-in pipeline or tensor parallelism support. In the preceding table, given a global batch size of 4 million tokens, 32 nodes (256 GPUs), and sequence length 32,768, each training step processes roughly 4,000,000 / 32,768 ≈ 122 sequences, so the effective batch size per GPU is about 0.5 sequences. Such a fractional per-GPU batch (for example, tp = 2 with batch size 1) would be infeasible without introducing tensor parallelism.

Conclusion

In this post, we demonstrated efficient LLM training with SMP and SMDDP on p4d instances, attributing contributions to multiple key features, such as SMDDP enhancement over NCCL, flexible FSDP hybrid sharding instead of full sharding, TE integration, and enabling tensor parallelism in favor of long sequence lengths. After being tested over a wide range of settings with various models, model sizes, and sequence lengths, it exhibits robust near-linear scaling efficiencies, up to 128 p4d instances on SageMaker. In summary, SageMaker continues to be a powerful tool for LLM researchers and practitioners.

To learn more, refer to SageMaker model parallelism library v2, or contact the SMP team at sm-model-parallel-feedback@amazon.com.

Acknowledgements

We’d like to thank Robert Van Dusen, Ben Snyder, Gautam Kumar, and Luis Quintela for their constructive feedback and discussions.


About the Authors

Xinle Sheila Liu is an SDE in Amazon SageMaker. In her spare time, she enjoys reading and outdoor sports.

Suhit Kodgule is a Software Development Engineer with the AWS Artificial Intelligence group working on deep learning frameworks. In his spare time, he enjoys hiking, traveling, and cooking.

Victor Zhu is a Software Engineer in Distributed Deep Learning at Amazon Web Services. He can be found enjoying hiking and board games around the SF Bay Area.

Derya Cavdar works as a software engineer at AWS. Her interests include deep learning and distributed training optimization.

Teng Xu is a Software Development Engineer in the Distributed Training group in AWS AI. He enjoys reading.

Read More

Manage your Amazon Lex bot via AWS CloudFormation templates

Manage your Amazon Lex bot via AWS CloudFormation templates

Amazon Lex is a fully managed artificial intelligence (AI) service with advanced natural language models to design, build, test, and deploy conversational interfaces in applications. It employs advanced deep learning technologies to understand user input, enabling developers to create chatbots, virtual assistants, and other applications that can interact with users in natural language.

Managing your Amazon Lex bots using AWS CloudFormation allows you to create templates defining the bot and all the AWS resources it depends on. AWS CloudFormation provisions and configures those resources on your behalf, removing the risk of human error when deploying bots to new environments. The benefits of using CloudFormation include:

  • Consistency – A CloudFormation template provides a more consistent and automated way to deploy and manage the resources associated with an Amazon Lex bot.
  • Version control – With AWS CloudFormation, you can use version control systems like Git to manage your CloudFormation templates. This allows you to maintain different versions of your bot and roll back to previous versions if needed.
  • Reusability – You can reuse CloudFormation templates across multiple environments, such as development, staging, and production. This saves time and effort in defining the same bot across different environments.
  • Expandability – As your Amazon Lex bot grows in complexity, managing it through the AWS Management Console becomes more challenging. AWS CloudFormation allows for a more streamlined and efficient approach to managing the bot’s definition and resources.
  • Automation – Using a CloudFormation template allows you to automate the deployment process. You can use AWS services like AWS CodePipeline and AWS CodeBuild to build, test, and deploy your Amazon Lex bot automatically.

In this post, we guide you through the steps involved in creating a CloudFormation template for an Amazon Lex V2 bot.

Solution overview

We have chosen the Book Trip bot as our starting point for this exercise. We use a CloudFormation template to create a new bot from scratch, including defining intents, slots, and other required components. Additionally, we explore topics such as version control, aliases, integrating AWS Lambda functions, creating conditional branches, and enabling logging.

Prerequisites

You should have the following prerequisites:

  • An AWS account to create and deploy a CloudFormation template
  • The necessary AWS Identity and Access Management (IAM) permissions to deploy AWS CloudFormation and the resources used in the template
  • Basic knowledge of Amazon Lex, Lambda functions, and associated services
  • Basic knowledge of creating and deploying CloudFormation templates

Create an IAM role

To begin, you need to create an IAM role that the bot will use. You can achieve this by initializing a CloudFormation template and adding the IAM role as a resource. You can use the following template to create the role. If you download the example template and deploy it, you should see that an IAM role has been created. We provide examples of templates as we go through this post and merge them as we get further along.

AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Description: CloudFormation template for book hotel bot.
Resources:
  # 1. IAM role that is used by the bot at runtime
  BotRuntimeRole:    
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - lexv2.amazonaws.com
            Action:
              - "sts:AssumeRole"
      Path: "/"
      Policies:
        - PolicyName: LexRuntimeRolePolicy
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - "polly:SynthesizeSpeech"
                  - "comprehend:DetectSentiment"
                Resource: "*"

Configure the Amazon Lex bot

Next, you need to add the bot definition. The following is the YAML template for the Amazon Lex bot definition; you construct the necessary components one by one:

Type: AWS::Lex::Bot
Properties:
  AutoBuildBotLocales: Boolean
  BotFileS3Location: 
    S3Location
  BotLocales: 
    - BotLocale
  BotTags: 
    - Tag
  DataPrivacy: 
    DataPrivacy
  Description: String
  IdleSessionTTLInSeconds: Integer
  Name: String
  RoleArn: String
  TestBotAliasSettings: 
    TestBotAliasSettings
  TestBotAliasTags: 
    - Tag

To create a bot that only includes the bot definition without any intent, you can use the following template. Here, you specify the bot’s name, the ARN of the role that you previously created, data privacy settings, and more:

BookHotelBot:
    DependsOn: BotRuntimeRole # The role created in the previous step
    Type: AWS::Lex::Bot
    Properties:
      Name: "BookHotel"
      Description: "Sample Bot to book a hotel"
      RoleArn: !GetAtt BotRuntimeRole.Arn      
      #For each Amazon Lex bot created with the Amazon Lex Model Building Service, you must specify whether your use of Amazon Lex 
      #is related to a website, program, or other application that is directed or targeted, in whole or in part, to children under 
      #age 13 and subject to the Children's Online Privacy Protection Act (COPPA) by specifying true or false in the 
      #childDirected field.
      DataPrivacy:
        ChildDirected: false
      IdleSessionTTLInSeconds: 300

You can download the updated template. Deploying the updated template allows you to create both the role and the bot definition. Note that you’re updating the stack you created in the previous step.

The final step entails defining the BotLocales, which form the majority of the bot’s functionality. This includes, for example, Intents and Slot types. The following is the YAML template:

  CustomVocabulary: 
    CustomVocabulary
  Description: String
  Intents: 
    - Intent
  LocaleId: String
  NluConfidenceThreshold: Number
  SlotTypes: 
    - SlotType
  VoiceSettings: 
    VoiceSettings

In this case, you build the BookHotel intent, which requires a custom slot type for room types. You set the LocaleId, then the VoiceSettings. Then you add the SlotTypes and their corresponding values.

The next step is to define the Intents, starting with the first intent, BookHotel, which involves adding utterances, slots, and slot priorities. The details of these nodes are demonstrated in the provided template. Finally, you add the second intent, which is the FallbackIntent. See the following code:

BotLocales:
        - LocaleId: "en_US"
          Description: "en US locale"
          NluConfidenceThreshold: 0.40
          VoiceSettings:
            VoiceId: "Matthew"
          SlotTypes:
            - Name: "RoomTypeValues"
              Description: "Type of room"
              SlotTypeValues:
                - SampleValue:
                    Value: queen
                - SampleValue:
                    Value: king
                - SampleValue:
                    Value: deluxe
              ValueSelectionSetting:
                ResolutionStrategy: ORIGINAL_VALUE
          Intents:
            - Name: "BookHotel"
              Description: "Intent to book a hotel room"
              SampleUtterances:
                - Utterance: "Book a hotel"
                - Utterance: "I want a make hotel reservations"
                - Utterance: "Book a {Nights} night stay in {Location}"
              IntentConfirmationSetting:
                PromptSpecification:
                  MessageGroupsList:
                    - Message:
                        PlainTextMessage:
                          Value: "Okay, I have you down for a {Nights} night stay in {Location} starting {CheckInDate}.  Shall I book the reservation?"
                  MaxRetries: 3
                  AllowInterrupt: false
                DeclinationResponse:
                  MessageGroupsList:
                    - Message:
                        PlainTextMessage:
                          Value: "Okay, I have cancelled your reservation in progress."
                  AllowInterrupt: false
              SlotPriorities:
                - Priority: 1
                  SlotName: Location
                - Priority: 2
                  SlotName: CheckInDate
                - Priority: 3
                  SlotName: Nights
                - Priority: 4
                  SlotName: RoomType
              Slots:
                - Name: "Location"
                  Description: "Location of the city in which the hotel is located"
                  SlotTypeName: "AMAZON.City"
                  ValueElicitationSetting:
                    SlotConstraint: "Required"
                    PromptSpecification:
                      MessageGroupsList:
                        - Message:
                            PlainTextMessage:
                              Value: "What city will you be staying in?"
                      MaxRetries: 2
                      AllowInterrupt: false
                - Name: "CheckInDate"
                  Description: "Date of check-in"
                  SlotTypeName: "AMAZON.Date"
                  ValueElicitationSetting:
                    SlotConstraint: "Required"
                    PromptSpecification:
                      MessageGroupsList:
                        - Message:
                            PlainTextMessage:
                              Value: "What day do you want to check in?"
                      MaxRetries: 2
                      AllowInterrupt: false
                - Name: "Nights"
                  Description: "something"
                  SlotTypeName: "AMAZON.Number"
                  ValueElicitationSetting:
                    SlotConstraint: "Required"
                    PromptSpecification:
                      MessageGroupsList:
                        - Message:
                            PlainTextMessage:
                              Value: "How many nights will you be staying?"
                      MaxRetries: 2
                      AllowInterrupt: false
                - Name: "RoomType"
                  Description: "Enumeration of types of rooms that are offered by a hotel."
                  SlotTypeName: "RoomTypeValues"
                  ValueElicitationSetting:
                    SlotConstraint: "Required"
                    PromptSpecification:
                      MessageGroupsList:
                        - Message:
                            PlainTextMessage:
                              Value: "What type of room would you like, queen, king or deluxe?"
                      MaxRetries: 2
                      AllowInterrupt: false
            - Name: "FallbackIntent"
              Description: "Default intent when no other intent matches"
              ParentIntentSignature: "AMAZON.FallbackIntent"

You can download the CloudFormation template for the work done until now. After you update your stack with this template, a functional bot will be deployed. On the Amazon Lex console, you can confirm that there is a draft version of the bot, and a default alias named TestBotAlias has been created.
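
After the locale has been built (for example, by setting AutoBuildBotLocales: true in the template), you can run a quick smoke test against the draft bot with the Amazon Lex V2 runtime API. The following is a minimal sketch; the bot ID is illustrative, and TSTALIASID is the ID of the built-in TestBotAlias:

import boto3

lex_runtime = boto3.client("lexv2-runtime")

response = lex_runtime.recognize_text(
    botId="ABCDEFGHIJ",       # illustrative; use the bot ID shown on the Amazon Lex console
    botAliasId="TSTALIASID",  # ID of the built-in TestBotAlias
    localeId="en_US",
    sessionId="smoke-test-session",
    text="Book a hotel",
)
for message in response.get("messages", []):
    print(message["content"])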

bot alias

Create a new bot version and alias

Amazon Lex supports publishing versions of bots, intents, and slot types so that you can control the implementation that your client applications use. A version is a numbered snapshot of your bot definition that you can publish for use in different parts of your workflow, such as development, beta deployment, and production. Amazon Lex bots also support aliases. An alias is a pointer to a specific version of a bot. With an alias, you can update the bot version that your client applications use without changing the applications themselves. In practical scenarios, bot aliases are used for blue/green deployments and for managing environment-specific configurations, such as development and production environments.

To illustrate, let’s say you point an alias to version 1 of your bot. When it’s time to update the bot, you can publish version 2 and change the alias to point to the new version. Because your applications use the alias instead of a specific version, all clients receive the new functionality without requiring updates.
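
Outside of CloudFormation, repointing an alias is a single API call; in this post, you make the same change by updating the BotVersion property of the alias resource in the template. The following is a minimal sketch with illustrative IDs:

import boto3

lex_models = boto3.client("lexv2-models")

lex_models.update_bot_alias(
    botId="ABCDEFGHIJ",       # illustrative bot ID
    botAliasId="KLMNOPQRST",  # illustrative alias ID
    botAliasName="MyBotAlias",
    botVersion="2",           # point the alias at the newly published version
)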

Keep in mind that when you modify the CloudFormation template and initiate deployment, the changes are implemented within the draft version, primarily meant for testing. After you complete your testing phase, you can establish a new version to finalize the changes you’ve incorporated so far.

Next, you create a new bot version based on your draft, set up a new alias, and link the version to this alias. The following are the two new resources to add to your template:

BookHotelInitialVersion:
    DependsOn: BookHotelBot
    Type: AWS::Lex::BotVersion
    Properties:
      BotId: !Ref BookHotelBot
      BotVersionLocaleSpecification:
        - LocaleId: en_US
          BotVersionLocaleDetails:
            SourceBotVersion: DRAFT
      Description: Hotel Bot initial version

  BookHotelDemoAlias:
    Type: AWS::Lex::BotAlias
    Properties:
      BotId: !Ref BookHotelBot
      BotAliasName: "BookHotelDemoAlias"
      BotVersion: !GetAtt BookHotelInitialVersion.BotVersion

You can download the new version of the template and deploy it by updating your stack. You can see on the Amazon Lex console that a new version is created and associated with a new alias called BookHotelDemoAlias.

demo alias

When you create a new version of an Amazon Lex bot, it typically increments the version number sequentially, starting from 1. To discern a specific version, you can refer to its description.
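
You can also list the versions and their descriptions programmatically with the Amazon Lex V2 model-building API. The following is a minimal sketch; the bot ID is illustrative:

import boto3

lex_models = boto3.client("lexv2-models")

versions = lex_models.list_bot_versions(botId="ABCDEFGHIJ")  # illustrative bot ID
for summary in versions["botVersionSummaries"]:
    print(summary["botVersion"], "-", summary.get("description", ""))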

initial version

Add a Lambda function

To initialize values or validate user input for your bot, you can add a Lambda function as a code hook to your bot. Similarly, you can use a Lambda function for fulfillment, for example to write data to databases or call APIs to save the collected information. For more information, refer to Enabling custom logic with AWS Lambda functions.

Let’s add a new resource for the Lambda function to the CloudFormation template. Although it’s generally not advised to embed code in CloudFormation templates, we do so here solely for the sake of making the demo deployment less complicated. See the following code:

HotelBotFunction:
    DependsOn: BotRuntimeRole # So that the Lambda function is ready before the bot deployment
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: book_hotel_lambda
      Runtime: python3.11
      Timeout: 15
      Handler: index.lambda_handler
      InlineCode: |
        import os
        import json

        def close(intent_request):
            intent_request['sessionState']['intent']['state'] = 'Fulfilled'

            message = {"contentType": "PlainText",
                      "content": "Your Booking is confirmed"}

            session_attributes = {}
            sessionState = intent_request['sessionState']
            if 'sessionAttributes' in sessionState:
                session_attributes = sessionState['sessionAttributes']

            requestAttributes = None
            if 'requestAttributes' in intent_request:
                requestAttributes = intent_request['requestAttributes']

            return {
                'sessionState': {
                    'sessionAttributes': session_attributes,
                    'dialogAction': {
                        'type': 'Close'
                    },
                    'intent': intent_request['sessionState']['intent'],
                    'originatingRequestId': 'xxxxxxx-xxxx-xxxx-xxxx'
                },
                'messages':  [message],
                'sessionId': intent_request['sessionId'],
                'requestAttributes': requestAttributes
            }

        def router(event):
            intent_name = event['sessionState']['intent']['name']
            slots = event['sessionState']['intent']['slots']
            if (intent_name == 'BookHotel'):
                # invoke lambda and return result
                return close(event)

            raise Exception(
                'The intent is not supported by Lambda: ' + intent_name)

        def lambda_handler(event, context):
            response = router(event)
            return response

To use this Lambda function for the fulfillment, enable the code hook settings in your intent:

Intents:
  - Name: "BookHotel"
    Description: "Intent to book a hotel room"
    FulfillmentCodeHook:
      Enabled: true
    SampleUtterances:
      - Utterance: "Book a hotel"
      - Utterance: "I want a make hotel reservations"
      - Utterance: "Book a {Nights} night stay in {Location}"

Because you made changes to your bot, you can create a new version of the bot by adding a new resource named BookHotelVersionWithLambda in the template:

BookHotelVersionWithLambda:
    DependsOn: BookHotelInitialVersion
    Type: AWS::Lex::BotVersion
    Properties:
      BotId: !Ref BookHotelBot
      BotVersionLocaleSpecification:
        - LocaleId: en_US
          BotVersionLocaleDetails:
            SourceBotVersion: DRAFT
      Description: Hotel Bot with a lambda function

The Lambda function is associated with a bot alias. Amazon Lex V2 can use one Lambda function per bot alias per language. Therefore, you must update the alias in the template to reference the Lambda function, which you do in the BotAliasLocaleSettings section. You also need to point the alias to the new version you created. The following code is the modified alias configuration:

  BookHotelDemoAlias:
    Type: AWS::Lex::BotAlias
    Properties:
      BotId: !Ref BookHotelBot
      BotAliasName: "BookHotelDemoAlias"
      BotVersion: !GetAtt BookHotelVersionWithLambda.BotVersion
      # Remove BotAliasLocaleSettings if you aren't concerned with Lambda setup.
      # If you are you can modify the LambdaArn below to get started.
      BotAliasLocaleSettings:
        - LocaleId: en_US
          BotAliasLocaleSetting:
            Enabled: true
            CodeHookSpecification:
              LambdaCodeHook:
                CodeHookInterfaceVersion: "1.0"
                LambdaArn: !GetAtt HotelBotFunction.Arn

Up until now, you have only linked the Lambda function with the alias. However, you need to grant permission to allow the alias to invoke the Lambda function. In the following code, you add the Lambda invoke permission for Amazon Lex and specify the alias ARN as the source ARN:

  LexInvokeLambdaPermission:
    Type: AWS::Lambda::Permission
    Properties:
      Action: "lambda:InvokeFunction"
      FunctionName: !GetAtt HotelBotFunction.Arn
      Principal: "lexv2.amazonaws.com"
      SourceArn: !GetAtt BookHotelDemoAlias.Arn

You can download the latest version of the template. After updating your stack with this version, you will have an Amazon Lex bot integrated with a Lambda function.

second version

updated alias

Conditional branches

Now let’s explore the conditional branch feature of the Amazon Lex bot and consider a scenario where booking more than five nights in Seattle is not allowed for the next week. As per the business requirement, the conversation should end with an appropriate message if the user attempts to book more than five nights in Seattle. The conditional branch for that is represented in the CloudFormation template under the SlotCaptureSetting:

- Name: "Nights"
                  Description: “Number of nights.”
                  SlotTypeName: "AMAZON.Number"
                  ValueElicitationSetting:
                    SlotConstraint: "Required"
                    SlotCaptureSetting:
                      CaptureConditional:
                        DefaultBranch:
                          NextStep:
                            DialogAction:
                              Type: "ElicitSlot"
                              SlotToElicit: "RoomType"
                        ConditionalBranches:
                          - Name: "Branch1"
                            Condition:
                              ExpressionString: '{Nights}>5 AND {Location} = "Seattle"'
                            Response:
                              AllowInterrupt: true
                              MessageGroupsList:
                                - Message:
                                    PlainTextMessage:
                                      Value: "Sorry, we cannot book more than five nights in {Location} right now."
                            NextStep:
                              DialogAction:
                                Type: "EndConversation"
                        IsActive: true

                    PromptSpecification:
                      MessageGroupsList:
                        - Message:
                            PlainTextMessage:
                              Value: "How many nights will you be staying?"
                      MaxRetries: 2
                      AllowInterrupt: false

Because you changed the bot definition, you need to create a new version in the template and link it with the alias. This is a temporary modification because the business plans to allow large bookings in Seattle soon. The following are the two new resources you add to the template:

BookHotelConditionalBranches:
    DependsOn: BookHotelVersionWithLambda
    Type: AWS::Lex::BotVersion
    Properties:
      BotId: !Ref BookHotelBot
      BotVersionLocaleSpecification:
        - LocaleId: en_US
          BotVersionLocaleDetails:
            SourceBotVersion: DRAFT
      Description: Hotel Bot Version with conditional branches

  BookHotelDemoAlias:
    Type: AWS::Lex::BotAlias
    Properties:
      BotId: !Ref BookHotelBot
      BotAliasName: "BookHotelDemoAlias"
      BotVersion: !GetAtt BookHotelConditionalBranches.BotVersion
      # Remove BotAliasLocaleSettings if you aren't concerned with Lambda setup.
      # If you are you can modify the LambdaArn below to get started.
      BotAliasLocaleSettings:
        - LocaleId: en_US
          BotAliasLocaleSetting:
            Enabled: true
            CodeHookSpecification:
              LambdaCodeHook:
                CodeHookInterfaceVersion: "1.0"
                LambdaArn: !GetAtt HotelBotFunction.Arn

You can download the updated template. After you update your stack with this template version, the alias will point to the version that incorporates the conditional branching feature. To undo this modification, you can update the alias to revert to the previous version.

third version

alias for third version

Logs

You can also enable logs for your Amazon Lex bot. To do so, you must update the bot’s role to grant permissions for writing Amazon CloudWatch logs. The following is an example of adding a CloudWatch policy to the role:

BotRuntimeRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - lexv2.amazonaws.com
            Action:
              - "sts:AssumeRole"
      Path: "/"
      Policies:
        - PolicyName: LexRuntimeRolePolicy
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - "polly:SynthesizeSpeech"
                  - "comprehend:DetectSentiment"
                Resource: "*"
        - PolicyName: CloudWatchPolicy
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - "logs:CreateLogStream"
                  - "logs:PutLogEvents"
                Resource: "*"

To ensure consistent and predictable behavior, be as specific as possible when defining resource names and properties in CloudFormation templates. Using the wildcard character (*) in IAM policy resources can pose security risks and lead to unintended consequences, so it’s recommended to use explicit values wherever possible; for example, you could scope the log permissions above to the ARN of the specific log group created in the next step.

Next, you create a CloudWatch log group resource, as shown in the following code, to direct your logs to this group:

  #Log Group
  LexLogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: /lex/hotel-bot
      RetentionInDays: 5

Finally, you update your alias to enable conversation log settings:

BookHotelDemoAlias:
    Type: AWS::Lex::BotAlias
    Properties:
      BotId: !Ref BookHotelBot
      BotAliasName: "BookHotelDemoAlias"
      BotVersion: !GetAtt BookHotelConditionalBranches.BotVersion
      BotAliasLocaleSettings:
        - LocaleId: en_US
          BotAliasLocaleSetting:
            Enabled: true
            CodeHookSpecification:
              LambdaCodeHook:
                CodeHookInterfaceVersion: "1.0"
                LambdaArn: !GetAtt HotelBotFunction.Arn
      ConversationLogSettings:
        TextLogSettings:
          - Destination:
              CloudWatch:
                CloudWatchLogGroupArn: !GetAtt LexLogGroup.Arn
                LogPrefix: bookHotel
            Enabled: true

When you update the stack with this template, you enable the conversation logs for your bot. A new version is not created in this step because there are no changes to your bot resource. You can download the latest version of the template.
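
After a few test conversations, you can confirm that text logs are arriving by reading the log group with the CloudWatch Logs API. The following is a minimal sketch; it assumes the log group name defined in the LexLogGroup resource and that the log stream names start with the configured LogPrefix value:

import boto3

logs = boto3.client("logs")

events = logs.filter_log_events(
    logGroupName="/lex/hotel-bot",
    logStreamNamePrefix="bookHotel",  # corresponds to the LogPrefix on the alias
    limit=20,
)
for event in events["events"]:
    print(event["message"])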

Clean up

To prevent incurring charges in the future, delete the CloudFormation stack you created.

Conclusion

In this post, we discussed the step-by-step process to create a CloudFormation template for an Amazon Lex V2 bot. Initially, we deployed a basic bot, then we explored the potential of aliases and versions and how to use them efficiently with templates. Next, we learned how to integrate a Lambda function with an Amazon Lex V2 bot and implemented conditional branching in the bot’s conversation flow to accommodate business requirements. Finally, we added logging features by creating a CloudWatch log group resource and updating the bot’s role with the necessary permissions.

The template allows for the straightforward deployment and management of the bot, with the ability to revert changes as necessary. Overall, the CloudFormation template is useful for managing and optimizing an Amazon Lex V2 bot.

As the next step, you can explore sample Amazon Lex bots and apply the techniques discussed in this post to convert them into CloudFormation templates. This hands-on practice will solidify your understanding of managing Amazon Lex V2 bots through infrastructure as code.


About the Authors

Thomas Rindfuss is a Sr. Solutions Architect on the Amazon Lex team. He invents, develops, prototypes, and evangelizes new technical features and solutions for language AI services that improve the customer experience and ease adoption.

Rijeesh Akkambeth Chathoth is a Professional Services Consultant at AWS. He helps customers achieve their desired business outcomes in the contact center space by using Amazon Connect, Amazon Lex, and generative AI features.

Read More

New NVIDIA RTX A400 and A1000 GPUs Enhance AI-Powered Design and Productivity Workflows

New NVIDIA RTX A400 and A1000 GPUs Enhance AI-Powered Design and Productivity Workflows

AI integration across design and productivity applications is becoming the new standard, fueling demand for advanced computing performance. This means professionals and creatives will need to tap into increased compute power, regardless of the scale, complexity or scope of their projects.

To meet this growing need, NVIDIA is expanding its RTX professional graphics offerings with two new NVIDIA Ampere architecture-based GPUs for desktops: the NVIDIA RTX A400 and NVIDIA RTX A1000.

They expand access to AI and ray-tracing technology, equipping professionals with the tools they need to transform their daily workflows.

A New Era of Creativity, Performance and Efficiency

The RTX A400 GPU introduces accelerated ray tracing and AI to the RTX 400 series GPUs. With 24 Tensor Cores for AI processing, it surpasses traditional CPU-based solutions, enabling professionals to run cutting-edge AI applications, such as intelligent chatbots and copilots, directly on their desktops.

The GPU delivers real-time ray tracing so creators can build vivid, physically accurate 3D renders that push the boundaries of creativity and realism.

The A400 also includes four display outputs, a first for its series. This makes it ideal for high-density display environments, which are critical for industries like financial services, command and control, retail, and transportation.

The NVIDIA RTX A1000 GPU brings Tensor Cores and RT Cores to the RTX 1000 series GPUs for the first time, unlocking accelerated AI and ray-tracing performance for creatives and professionals.

With 72 Tensor Cores, the A1000 offers a tremendous upgrade over the previous generation, delivering over 3x faster generative AI processing for tools like Stable Diffusion. In addition, its 18 RT Cores speed graphics and rendering tasks by up to 3x, accelerating professional workflows such as 2D and 3D computer-aided design (CAD), product and architectural design, and 4K video editing.

The A1000 also excels in video processing, handling up to 38% more encode streams and offering 2x faster decode performance over the previous generation.

With a sleek, single-slot design and consuming just 50W, the A400 and A1000 GPUs bring impressive features to compact, energy-efficient workstations.

Expanding the Reach of RTX

These new GPUs empower users with cutting-edge AI, graphics and compute capabilities to boost productivity and unlock creative possibilities. Advanced workflows involving ray-traced renders and AI are now within reach, allowing professionals to push the boundaries of their work and achieve stunning levels of realism.

Industrial planners can use ‌these new powerful and energy-efficient computing solutions for edge deployments. Creators can boost editing and rendering speeds to produce richer visual content. Architects and engineers can seamlessly transition ideas from 3D CAD concepts into tangible designs. Teams working in smart spaces can use the GPUs for real-time data processing, AI-enhanced security and digital signage management in space-constrained settings. And healthcare professionals can achieve quicker, more precise medical imaging analyses.

Financial professionals have always used expansive, high-resolution visual workspaces for more effective trading, analysis and data management. With the RTX A400 GPU supporting up to four 4K displays natively, financial services users can now achieve a high display density with fewer GPUs, streamlining their setups and reducing costs.

Next-Generation Features and Accelerated Performance 

The NVIDIA RTX A400 and A1000 GPUs are equipped with features designed to supercharge everyday workflows, including:

  • Second-generation RT Cores: Real-time ray tracing, photorealistic, physically based rendering and visualization for all professional workflows, including architectural drafting, 3D design and content creation, where accurate lighting and shadow simulations can greatly enhance the quality of work.
  • Third-generation Tensor Cores: Accelerates AI-augmented tools and applications such as generative AI, image rendering denoising and deep learning super sampling to improve image generation speed and quality.
  • Ampere architecture-based CUDA cores: Up to 2x the single-precision floating point throughput of the previous generation for significant speedups in graphics and compute workloads.
  • 4GB or 8GB of GPU memory: 4GB of GPU memory with the A400 GPU and 8GB with the A1000 GPU accommodate a range of professional needs, from basic graphic design and photo editing to more demanding 3D modeling with textures or high-resolution editing and data analyses. The GPUs also feature increased memory bandwidth over the previous generation for quicker data processing and smoother handling of larger datasets and scenes.
  • Encode and decode engines: With seventh-generation encode (NVENC) and fifth-generation decode (NVDEC) engines, the GPUs offer efficient video processing to support high-resolution video editing, streaming and playback with ultra-low latency. Inclusion of AV1 decode enables higher efficiency and smoother playback of more video formats.

Availability 

The NVIDIA RTX A1000 GPU is now available through global distribution partners such as PNY and Ryoyo Electric. The RTX A400 GPU is expected to be available from channel partners starting in May, with anticipated availability from manufacturers in the summer.

Read More

A secure approach to generative AI with AWS

A secure approach to generative AI with AWS

Generative artificial intelligence (AI) is transforming the customer experience in industries across the globe. Customers are building generative AI applications using large language models (LLMs) and other foundation models (FMs), which enhance customer experiences, transform operations, improve employee productivity, and create new revenue channels.

FMs and the applications built around them represent extremely valuable investments for our customers. They’re often used with highly sensitive business data, like personal data, compliance data, operational data, and financial information, to optimize the model’s output. The biggest concern we hear from customers as they explore the advantages of generative AI is how to protect their highly sensitive data and investments. Because their data and model weights are incredibly valuable, customers require them to stay protected, secure, and private, whether that’s from their own administrator’s accounts, their customers, vulnerabilities in software running in their own environments, or even their cloud service provider from having access.

At AWS, our top priority is safeguarding the security and confidentiality of our customers’ workloads. We think about security across the three layers of our generative AI stack:

  • Bottom layer – Provides the tools for building and training LLMs and other FMs
  • Middle layer – Provides access to all the models along with tools you need to build and scale generative AI applications
  • Top layer – Includes applications that use LLMs and other FMs to make work stress-free by writing and debugging code, generating content, deriving insights, and taking action

Each layer is important to making generative AI pervasive and transformative.

With the AWS Nitro System, we delivered a first-of-its-kind innovation on behalf of our customers. The Nitro System is an unparalleled computing backbone for AWS, with security and performance at its core. Its specialized hardware and associated firmware are designed to enforce restrictions so that nobody, including anyone in AWS, can access your workloads or data running on your Amazon Elastic Compute Cloud (Amazon EC2) instances. Customers have benefited from this confidentiality and isolation from AWS operators on all Nitro-based EC2 instances since 2017.

By design, there is no mechanism for any Amazon employee to access a Nitro EC2 instance that customers use to run their workloads, or to access data that customers send to a machine learning (ML) accelerator or GPU. This protection applies to all Nitro-based instances, including instances with ML accelerators like AWS Inferentia and AWS Trainium, and instances with GPUs like P4, P5, G5, and G6.

The Nitro System enables Elastic Fabric Adapter (EFA), which uses the AWS-built AWS Scalable Reliable Datagram (SRD) communication protocol for cloud-scale elastic and large-scale distributed training, enabling the only always-encrypted Remote Direct Memory Access (RDMA) capable network. All communication through EFA is encrypted with VPC encryption without incurring any performance penalty.

The design of the Nitro System has been validated by the NCC Group, an independent cybersecurity firm. AWS delivers a high level of protection for customer workloads, and we believe this is the level of security and confidentiality that customers should expect from their cloud provider. This level of protection is so critical that we’ve added it in our AWS Service Terms to provide an additional assurance to all of our customers.

Innovating secure generative AI workloads using AWS industry-leading security capabilities

From day one, AWS AI infrastructure and services have had built-in security and privacy features to give you control over your data. As customers move quickly to implement generative AI in their organizations, you need to know that your data is being handled securely across the AI lifecycle, including data preparation, training, and inferencing. The security of model weights—the parameters that a model learns during training that are critical for its ability to make predictions—is paramount to protecting your data and maintaining model integrity.

This is why it is critical for AWS to continue to innovate on behalf of our customers to raise the bar on security across each layer of the generative AI stack. To do this, we believe that you must have security and confidentiality built in across each layer of the generative AI stack. You need to be able to secure the infrastructure to train LLMs and other FMs, build securely with tools to run LLMs and other FMs, and run applications that use FMs with built-in security and privacy that you can trust.

At AWS, securing AI infrastructure refers to zero access to sensitive AI data, such as AI model weights and data processed with those models, by any unauthorized person, either at the infrastructure operator or at the customer. It’s comprised of three key principles:

  1. Complete isolation of the AI data from the infrastructure operator – The infrastructure operator must have no ability to access customer content and AI data, such as AI model weights and data processed with models.
  2. Ability for customers to isolate AI data from themselves – The infrastructure must provide a mechanism to allow model weights and data to be loaded into hardware, while remaining isolated and inaccessible from customers’ own users and software.
  3. Protected infrastructure communications – The communication between devices in the ML accelerator infrastructure must be protected. All externally accessible links between the devices must be encrypted.

The Nitro System fulfills the first principle of Secure AI Infrastructure by isolating your AI data from AWS operators. The second principle provides you with a way to remove administrative access of your own users and software to your AI data. AWS not only offers you a way to achieve that, but we also made it straightforward and practical by investing in building an integrated solution between AWS Nitro Enclaves and AWS Key Management Service (AWS KMS). With Nitro Enclaves and AWS KMS, you can encrypt your sensitive AI data using keys that you own and control, store that data in a location of your choice, and securely transfer the encrypted data to an isolated compute environment for inferencing. Throughout this entire process, the sensitive AI data is encrypted and isolated from your own users and software on your EC2 instance, and AWS operators cannot access this data. Use cases that have benefited from this flow include running LLM inferencing in an enclave. Until today, Nitro Enclaves operate only in the CPU, limiting the potential for larger generative AI models and more complex processing.

We announced our plans to extend this Nitro end-to-end encrypted flow to include first-class integration with ML accelerators and GPUs, extending this protection beyond the CPU. You will be able to decrypt and load sensitive AI data into an ML accelerator for processing, while keeping it isolated from your own operators and verifying the authenticity of the application used to process the AI data. Through the Nitro System, you can cryptographically validate your applications to AWS KMS and decrypt data only when the necessary checks pass. This enhancement allows AWS to offer end-to-end encryption for your data as it flows through generative AI workloads.
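
One concrete mechanism behind this kind of validation already exists for Nitro Enclaves: AWS KMS key policies support Nitro Enclaves attestation condition keys, so a kms:Decrypt call can be allowed only when the request carries an attestation document whose enclave image measurement matches an expected value. The sketch below illustrates the idea in the same Python style as above; the account ID, role name, key ID, and measurement value are placeholders, and a production key policy would be tailored to your environment.

import json

import boto3

kms = boto3.client("kms")

# Placeholder enclave image measurement (SHA-384 hash of the enclave image).
EXPECTED_IMAGE_SHA384 = "<enclave-image-sha384-measurement>"

key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Key administration for the account root (required so the
            # account is not locked out of managing the key).
            "Sid": "EnableKeyAdministration",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "kms:*",
            "Resource": "*",
        },
        {
            # Allow decryption only when the request includes a Nitro
            # Enclaves attestation document with the expected measurement.
            "Sid": "AllowDecryptOnlyFromAttestedEnclave",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:role/InferenceInstanceRole"
            },
            "Action": "kms:Decrypt",
            "Resource": "*",
            "Condition": {
                "StringEqualsIgnoreCase": {
                    "kms:RecipientAttestation:ImageSha384": EXPECTED_IMAGE_SHA384
                }
            },
        },
    ],
}

# Attach the policy to the key (key ID is a placeholder).
kms.put_key_policy(
    KeyId="1234abcd-12ab-34cd-56ef-1234567890ab",
    PolicyName="default",
    Policy=json.dumps(key_policy),
)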

We plan to offer this end-to-end encrypted flow in the upcoming AWS-designed Trainium2 chips as well as in GPU instances based on NVIDIA’s upcoming Blackwell architecture, both of which offer secure communications between devices, the third principle of Secure AI Infrastructure. AWS and NVIDIA are collaborating closely to bring a joint solution to market, including NVIDIA’s new Blackwell GPU platform, which couples the NVIDIA GB200 NVL72 solution with the Nitro System and EFA technologies to provide an industry-leading solution for securely building and deploying next-generation generative AI applications.

Advancing the future of generative AI security

Today, tens of thousands of customers are using AWS to experiment with and move transformative generative AI applications into production. Generative AI workloads contain highly valuable and sensitive data that needs this level of protection from both your own operators and the cloud service provider. Customers using AWS Nitro-based EC2 instances have received this level of protection and isolation from AWS operators since 2017, when we launched our innovative Nitro System.

At AWS, we’re continuing that innovation as we invest in building performant and accessible capabilities that make it practical for our customers to secure their generative AI workloads across the three layers of the generative AI stack, so that you can focus on what you do best: building and extending the uses of generative AI to more areas. Learn more here.


About the authors

Anthony Liguori is an AWS VP and Distinguished Engineer for EC2

Colm MacCárthaigh is an AWS VP and Distinguished Engineer for EC2

Read More

To Cut a Long Story Short: Video Editors Benefit From DaVinci Resolve’s New AI Features Powered by RTX

To Cut a Long Story Short: Video Editors Benefit From DaVinci Resolve’s New AI Features Powered by RTX

Editor’s note: This post is part of our In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

Video editors have more to look forward to than just April showers.

Blackmagic Design’s DaVinci Resolve released version 19, adding the IntelliTrack AI point tracker and the AI-powered UltraNR noise reduction feature to further streamline video editing workflows.

The NAB 2024 trade show is bringing together thousands of content professionals from all corners of the broadcast, media and entertainment industries, with video editors and livestreamers seeking ways to improve their creative workflows with NVIDIA RTX technology.

The recently launched design app SketchUp 2024 introduced a new graphics engine that uses DirectX 12, which renders scenes 2.5x faster than the previous engine.

April also brings the latest NVIDIA Studio Driver, which optimizes the latest creative app updates, available for download today.

And this week’s featured In the NVIDIA Studio artist Rakesh Kumar created his captivating 3D scene The Rooted Vault using RTX acceleration.

Video Editor’s DaVinci Code

DaVinci Resolve is a powerful video editing package with color correction, visual effects, motion graphics and audio post-production all in one software tool. Its elegant, modern interface is easy to learn for new users, while offering powerful capabilities for professionals.

Two new AI features make video editing even more efficient: the IntelliTrack AI point tracker for object tracking, stabilization and audio panning, and UltraNR, which uses AI for spatial noise reduction — doing so 3x faster on the GeForce RTX 4090 vs. the Mac M2 Ultra.

All DaVinci Resolve AI effects are accelerated on RTX GPUs by NVIDIA TensorRT, boosting AI performance by up to 2x. The update also includes acceleration for Beauty, Edge Detect and Watercolor effects, doubling performance on NVIDIA GPUs.

For more information, check out the DaVinci Resolve website.

SketchUp Steps Up

SketchUp 2024 is a professional-grade 3D design software toolkit for designing buildings and landscapes, commonly used by designers and architects.

The new app, already receiving positive reviews, introduced a robust graphics engine that uses DirectX 12, increasing frames per second (FPS) by 2.5x over the previous engine. Navigating and orbiting complex models feels considerably lighter and faster, with quicker, more predictable performance.

In testing, the scene below runs at 4.5x higher FPS on the GeForce RTX 4090 GPU vs. the Mac M2 Ultra and other competitors.

2.5x faster FPS with the GeForce RTX 4090 GPU. Image courtesy of Trimble SketchUp.

SketchUp 2024 also unlocks import and export functionality for OpenUSD files to efficiently manage the interoperability of complex 3D scenes and animations across numerous 3D apps.

Get the full release details.

Art Rooted in Nature

Rakesh Kumar’s passion for 3D modeling and animation stemmed from his love for gaming and storytelling.

“My goal is to inspire audiences and take them to new realms by showcasing the power of immersive storytelling, captivating visuals and the idea of creating worlds and characters that evoke emotions,” said Kumar.

His scene The Rooted Vault aims to convey the beauty of the natural world, transporting viewers to a serene setting filled with the soothing melodies of nature.

 

Kumar began by gathering reference material.

There’s reference sheets … and then there’s reference sheets.

He then used Autodesk Maya to block out the basic structure and piece together the house as a series of modules. GPU-accelerated viewport graphics ensured fast, interactive 3D modeling and animations.

Next, Kumar used ZBrush to sculpt high-resolution details into the modular assets.

Fine details applied in ZBrush.

“I chose an NVIDIA RTX GPU-powered system for real-time ray tracing to achieve lifelike visuals, reliable performance for smoother workflows, faster render times and industry-standard software compatibility.” — Rakesh Kumar

He used the ZBrush decimation tool alongside Unreal Engine’s Nanite workflow to efficiently create most of the modular building props.

For the walls, traditional poly-modeling workflows enabled vertex-blending shaders for seamless texture transitions.

Textures were created with Adobe Substance 3D Painter. Kumar’s RTX GPU used RTX-accelerated light and ambient occlusion to bake and optimize assets in mere seconds.

Kumar moved the project to Unreal Engine 5, where near-final finishing touches such as lighting, shadows and visual effects were applied.

Textures applied in Adobe Substance 3D Painter.

GPU acceleration played a crucial role in real-time rendering, allowing him to instantly see and adjust the scene.

Adobe Premiere Pro has a vast selection of GPU-accelerated features.

Kumar then moved to Blackmagic Design’s DaVinci Resolve to color grade the scene for the desired mood and aesthetic, before he began final editing in Premiere Pro, adding transitions and audio.

“While the initial concept required significant revisions, the final result demonstrates the iterative nature of artistic creation — all inspired by my mentors, friends and family, who were always there to support me,” Kumar said.

3D artist Rakesh Kumar.

Check out Kumar’s latest work on Instagram.

Follow NVIDIA Studio on Instagram, X and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter. 

Read More