Uncover hidden connections in unstructured financial data with Amazon Bedrock and Amazon Neptune


In asset management, portfolio managers need to closely monitor companies in their investment universe to identify risks and opportunities, and guide investment decisions. Tracking direct events like earnings reports or credit downgrades is straightforward—you can set up alerts to notify managers of news containing company names. However, detecting second and third-order impacts arising from events at suppliers, customers, partners, or other entities in a company’s ecosystem is challenging.

For example, a supply chain disruption at a key vendor would likely negatively impact downstream manufacturers. Or the loss of a top customer for a major client poses a demand risk for the supplier. Very often, such events fail to make headlines featuring the impacted company directly, but are still important to pay attention to. In this post, we demonstrate an automated solution combining knowledge graphs and generative artificial intelligence (AI) to surface such risks by cross-referencing relationship maps with real-time news.

Broadly, this entails two steps: First, building the intricate relationships between companies (customers, suppliers, directors) into a knowledge graph. Second, using this graph database along with generative AI to detect second and third-order impacts from news events. For instance, this solution can highlight that delays at a parts supplier may disrupt production for downstream auto manufacturers in a portfolio though none are directly referenced.

With AWS, you can deploy this solution in a serverless, scalable, and fully event-driven architecture. This post demonstrates a proof of concept built on two key AWS services well suited for graph knowledge representation and natural language processing: Amazon Neptune and Amazon Bedrock. Neptune is a fast, reliable, fully managed graph database service that makes it straightforward to build and run applications that work with highly connected datasets. Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon through a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI.

Overall, this prototype demonstrates the art of the possible with knowledge graphs and generative AI—deriving signals by connecting disparate dots. The takeaway for investment professionals is the ability to stay on top of relevant developments sooner, while filtering out the noise.

Build the knowledge graph

The first step in this solution is building a knowledge graph, and a valuable yet often overlooked data source for knowledge graphs is company annual reports. Because official corporate publications undergo scrutiny before release, the information they contain is likely to be accurate and reliable. However, annual reports are written in an unstructured format meant for human reading rather than machine consumption. To unlock their potential, you need a way to systematically extract and structure the wealth of facts and relationships they contain.

With generative AI services like Amazon Bedrock, you now have the capability to automate this process. You can take an annual report and trigger a processing pipeline to ingest the report, break it down into smaller chunks, and apply natural language understanding to pull out salient entities and relationships.

For example, a sentence stating that “[Company A] expanded its European electric delivery fleet with an order for 1,800 electric vans from [Company B]” would allow Amazon Bedrock to identify the following:

  • [Company A] as a customer
  • [Company B] as a supplier
  • A supplier relationship between [Company A] and [Company B]
  • Relationship details of “supplier of electric delivery vans”

Extracting such structured data from unstructured documents requires providing carefully crafted prompts to large language models (LLMs) so they can analyze text to pull out entities like companies and people, as well as relationships such as customers, suppliers, and more. The prompts contain clear instructions on what to look out for and the structure to return the data in. By repeating this process across the entire annual report, you can extract the relevant entities and relationships to construct a rich knowledge graph.
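The following is a minimal sketch of what one extraction call could look like using the Amazon Bedrock Converse API through the AWS SDK for Python (Boto3). The prompt wording, the requested JSON structure, and the assumption that the model returns parseable JSON are illustrative choices for this post, not the exact prompts used in the prototype.

import json
import boto3

bedrock = boto3.client("bedrock-runtime")

# Illustrative prompt; tune the instructions and output structure for your documents.
EXTRACTION_PROMPT = """Extract every company and person mentioned in the text below, together with
its relationship (customer, supplier, partner, competitor, or director) to the main entity.
Return only JSON: a list of objects with the keys entity_name, entity_type, relationship,
and relationship_details.

Text:
{chunk}
"""

def extract_entities(chunk: str) -> list:
    """Send one text chunk to Anthropic's Claude 3 Sonnet on Amazon Bedrock and parse the reply."""
    response = bedrock.converse(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",
        messages=[{"role": "user", "content": [{"text": EXTRACTION_PROMPT.format(chunk=chunk)}]}],
        inferenceConfig={"maxTokens": 2048, "temperature": 0},
    )
    # Assumes the model honors the JSON-only instruction; production code should validate the output.
    return json.loads(response["output"]["message"]["content"][0]["text"])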

However, before committing the extracted information to the knowledge graph, you need to first disambiguate the entities. For instance, there may already be another ‘[Company A]’ entity in the knowledge graph, but it could represent a different organization with the same name. Amazon Bedrock can compare attributes such as business focus area, industry, and revenue, as well as relationships to other entities, to determine whether the two entities are actually distinct. This prevents inaccurately merging unrelated companies into a single entity.

After disambiguation is complete, you can reliably add new entities and relationships into your Neptune knowledge graph, enriching it with the facts extracted from annual reports. Over time, ingesting more reports and integrating additional reliable data sources will help build a comprehensive knowledge graph that can reveal insights through graph queries and analytics.
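The following sketch shows how a disambiguated entity and relationship could then be upserted with an openCypher query against the Neptune openCypher HTTPS endpoint. The endpoint URL, node labels, relationship type, and property names are illustrative assumptions rather than the prototype's actual schema, and you would add SigV4 request signing if IAM authentication is enabled on the cluster.

import requests

NEPTUNE_ENDPOINT = "https://my-neptune-cluster.cluster-abc123.us-east-1.neptune.amazonaws.com:8182"  # hypothetical

# Upsert the two companies from the earlier example and the supplier relationship between them.
query = """
MERGE (a:company {name: 'Company A'})
MERGE (b:company {name: 'Company B'})
MERGE (b)-[r:SUPPLIER_OF]->(a)
SET r.details = 'supplier of electric delivery vans'
"""

response = requests.post(f"{NEPTUNE_ENDPOINT}/openCypher", data={"query": query})
response.raise_for_status()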

This automation enabled by generative AI makes it feasible to process thousands of annual reports and unlocks an invaluable asset for knowledge graph curation that would otherwise go untapped due to the prohibitively high manual effort needed.

The following screenshot shows an example of the visual exploration that’s possible in a Neptune graph database using the Graph Explorer tool.

Process news articles

The next step of the solution is automatically enriching portfolio managers’ news feeds and highlighting articles relevant to their interests and investments. For the news feed, portfolio managers can subscribe to any third-party news provider through AWS Data Exchange or another news API of their choice.

When a news article enters the system, an ingestion pipeline is invoked to process the content. Using techniques similar to the processing of annual reports, Amazon Bedrock extracts entities, attributes, and relationships from the news article, which are then disambiguated against the knowledge graph to identify the corresponding entities.

The knowledge graph contains connections between companies and people. By linking article entities to existing nodes, you can identify whether any subjects are within two hops of the companies that the portfolio manager has invested in or is interested in. Such a connection indicates the article may be relevant to the portfolio manager, and because the underlying data is represented in a knowledge graph, it can be visualized to help the portfolio manager understand why and how this context is relevant. In addition to identifying connections to the portfolio, you can also use Amazon Bedrock to perform sentiment analysis on the entities referenced.
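As an illustration, the following sketch runs that proximity check with an openCypher query, again posted to the Neptune openCypher HTTPS endpoint. The node labels and the name and interested property names are assumptions for this example; adapt them to your graph schema and add SigV4 signing if IAM authentication is enabled.

import json
import requests

NEPTUNE_ENDPOINT = "https://my-neptune-cluster.cluster-abc123.us-east-1.neptune.amazonaws.com:8182"  # hypothetical

# Find paths of at most two hops between a company mentioned in an article and any
# company the portfolio manager has flagged as interesting in the knowledge graph.
query = """
MATCH p = (a:company {name: $articleCompany})-[*1..2]-(b:company {interested: 'YES'})
RETURN p
LIMIT 25
"""

response = requests.post(
    f"{NEPTUNE_ENDPOINT}/openCypher",
    data={"query": query, "parameters": json.dumps({"articleCompany": "Company B"})},
)
print(response.json())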

The final output is an enriched news feed surfacing articles likely to impact the portfolio manager’s areas of interest and investments.

Solution overview

The overall architecture of the solution looks like the following diagram.

The workflow consists of the following steps:

  1. A user uploads official reports (in PDF format) to an Amazon Simple Storage Service (Amazon S3) bucket. The reports should be officially published reports to minimize the inclusion of inaccurate data into your knowledge graph (as opposed to news and tabloids).
  2. The S3 event notification invokes an AWS Lambda function, which sends the S3 bucket and file name to an Amazon Simple Queue Service (Amazon SQS) queue. The First-In-First-Out (FIFO) queue makes sure that the report ingestion process is performed sequentially to reduce the likelihood of introducing duplicate data into your knowledge graph.
  3. An Amazon EventBridge time-based event runs every minute to start the run of an AWS Step Functions state machine asynchronously.
  4. The Step Functions state machine runs through a series of tasks to process the uploaded document by extracting key information and inserting it into your knowledge graph:
    1. Receive the queue message from Amazon SQS.
    2. Download the PDF report file from Amazon S3, split it into multiple smaller text chunks (approximately 1,000 words) for processing, and store the text chunks in Amazon DynamoDB.
    3. Use Anthropic’s Claude v3 Sonnet on Amazon Bedrock to process the first few text chunks to determine the main entity that the report is referring to, together with relevant attributes (such as industry).
    4. Retrieve the text chunks from DynamoDB and, for each text chunk, invoke a Lambda function that uses Amazon Bedrock to extract entities (such as companies or people) and their relationships (customer, supplier, partner, competitor, or director) to the main entity.
    5. Consolidate all extracted information.
    6. Filter out noise and irrelevant entities (for example, generic terms such as “consumers”) using Amazon Bedrock.
    7. Use Amazon Bedrock to perform disambiguation by reasoning using the extracted information against the list of similar entities from the knowledge graph. If the entity does not exist, insert it. Otherwise, use the entity that already exists in the knowledge graph. Insert all relationships extracted.
    8. Clean up by deleting the SQS queue message and the S3 file.
  5. A user accesses a React-based web application to view the news articles that are supplemented with the entity, sentiment, and connection path information.
  6. Using the web application, the user specifies the number of hops (default N=2) on the connection path to monitor.
  7. Using the web application, the user specifies the list of entities to track.
  8. To generate fictional news, the user chooses Generate Sample News to generate 10 sample financial news articles with random content to be fed into the news ingestion process. Content is generated using Amazon Bedrock and is purely fictional.
  9. To download actual news, the user chooses Download Latest News to download the top news happening today (powered by NewsAPI.org).
  10. The news file (TXT format) is uploaded to an S3 bucket. Steps 8 and 9 upload news to the S3 bucket automatically, but you can also build integrations with your preferred news provider, such as AWS Data Exchange or any third-party news provider, to drop news articles as files into the S3 bucket. News data file content should be formatted as <date>{dd mmm yyyy}</date><title>{title}</title><text>{news content}</text> (see the upload sketch after this list).
  11. The S3 event notification sends the S3 bucket and file name to Amazon SQS (standard), which invokes multiple Lambda functions to process the news data in parallel:
    1. Use Amazon Bedrock to extract entities mentioned in the news together with any related information, relationships, and sentiment of the mentioned entity.
    2. Check against the knowledge graph and use Amazon Bedrock to perform disambiguation by reasoning using the available information from the news and from within the knowledge graph to identify the corresponding entity.
    3. After the entity has been located, search for and return any connection paths connecting to entities marked with INTERESTED=YES in the knowledge graph that are within N=2 hops away.
  12. The web application auto refreshes every 1 second to pull out the latest set of processed news to display on the web application.
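As referenced in step 10, the following is a minimal sketch of uploading one correctly formatted news item to the ingestion bucket with Boto3. The bucket name, object key, and article content are hypothetical.

import boto3

s3 = boto3.client("s3")

# One news item in the <date>...</date><title>...</title><text>...</text> format described in step 10.
news_item = (
    "<date>01 Jan 2024</date>"
    "<title>Parts supplier reports production delays</title>"
    "<text>The supplier said deliveries to several downstream manufacturers will slip by six weeks.</text>"
)

s3.put_object(
    Bucket="my-news-ingestion-bucket",  # hypothetical bucket wired to the S3 event notification in step 11
    Key="news/2024-01-01-supplier-delays.txt",
    Body=news_item.encode("utf-8"),
)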

Deploy the prototype

You can deploy the prototype solution and start experimenting yourself. The prototype is available from GitHub and includes details on the following:

  • Deployment prerequisites
  • Deployment steps
  • Cleanup steps

Summary

This post demonstrated a proof of concept solution to help portfolio managers detect second- and third-order risks from news events, even when the companies they track are not directly referenced. By combining a knowledge graph of intricate company relationships with real-time news analysis using generative AI, the solution can highlight downstream impacts, such as production delays caused by supplier disruptions.

Although it’s only a prototype, this solution shows the promise of knowledge graphs and language models to connect dots and derive signals from noise. These technologies can aid investment professionals by revealing risks faster through relationship mappings and reasoning. Overall, this is a promising application of graph databases and AI that warrants exploration to augment investment analysis and decision-making.

If this example of generative AI in financial services is of interest to your business, or you have a similar idea, reach out to your AWS account manager, and we will be delighted to explore further with you.


About the Author

Xan Huang is a Senior Solutions Architect with AWS and is based in Singapore. He works with major financial institutions to design and build secure, scalable, and highly available solutions in the cloud. Outside of work, Xan spends most of his free time with his family and getting bossed around by his 3-year-old daughter. You can find Xan on LinkedIn.


Open source observability for AWS Inferentia nodes within Amazon EKS clusters


Recent developments in machine learning (ML) have led to increasingly large models, some of which require hundreds of billions of parameters. Although they are more powerful, training and inference on those models require significant computational resources. Despite the availability of advanced distributed training libraries, it’s common for training and inference jobs to need hundreds of accelerators (GPUs or purpose-built ML chips such as AWS Trainium and AWS Inferentia), and therefore tens or hundreds of instances.

In such distributed environments, observability of both instances and ML chips becomes key to model performance fine-tuning and cost optimization. Metrics allow teams to understand workload behavior and optimize resource allocation and utilization, diagnose anomalies, and increase overall infrastructure efficiency. For data scientists, ML chip utilization and saturation are also relevant for capacity planning.

This post walks you through the Open Source Observability pattern for AWS Inferentia, which shows you how to monitor the performance of ML chips, used in an Amazon Elastic Kubernetes Service (Amazon EKS) cluster, with data plane nodes based on Amazon Elastic Compute Cloud (Amazon EC2) instances of type Inf1 and Inf2.

The pattern is part of the AWS CDK Observability Accelerator, a set of opinionated modules to help you set observability for Amazon EKS clusters. The AWS CDK Observability Accelerator is organized around patterns, which are reusable units for deploying multiple resources. The open source observability set of patterns instruments observability with Amazon Managed Grafana dashboards, an AWS Distro for OpenTelemetry collector to collect metrics, and Amazon Managed Service for Prometheus to store them.

Solution overview

The following diagram illustrates the solution architecture.

This solution deploys an Amazon EKS cluster with a node group that includes Inf1 instances.

The AMI type of the node group is AL2_x86_64_GPU, which uses the Amazon EKS optimized accelerated Amazon Linux AMI. In addition to the standard Amazon EKS-optimized AMI configuration, the accelerated AMI includes the NeuronX runtime.

To access the ML chips from Kubernetes, the pattern deploys the AWS Neuron device plugin.

Metrics are exposed to Amazon Managed Service for Prometheus by the neuron-monitor DaemonSet, which deploys a minimal container, with the Neuron tools installed. Specifically, the neuron-monitor DaemonSet runs the neuron-monitor command piped into the neuron-monitor-prometheus.py companion script (both commands are part of the container):

neuron-monitor | neuron-monitor-prometheus.py --port <port>

The command uses the following components:

  • neuron-monitor collects metrics and stats from the Neuron applications running on the system and streams the collected data to stdout in JSON format
  • neuron-monitor-prometheus.py maps and exposes the telemetry data from JSON format into Prometheus-compatible format

Data is visualized in Amazon Managed Grafana by the corresponding dashboard.

The rest of the setup to collect and visualize metrics with Amazon Managed Service for Prometheus and Amazon Managed Grafana is similar to that used in other open source based patterns, which are included in the AWS Observability Accelerator for CDK GitHub repository.

Prerequisites

You need the following to complete the steps in this post:

Set up the environment

Complete the following steps to set up your environment:

  1. Open a terminal window and run the following commands:
export AWS_REGION=<YOUR AWS REGION>
export ACCOUNT_ID=$(aws sts get-caller-identity --query 'Account' --output text)
  2. Retrieve the workspace IDs of any existing Amazon Managed Grafana workspaces:
aws grafana list-workspaces

The following is our sample output:

{
  "workspaces": [
    {
      "authentication": {
        "providers": [
          "AWS_SSO"
        ]
      },
      "created": "2023-06-07T12:23:56.625000-04:00",
      "description": "accelerator-workspace",
      "endpoint": "g-XYZ.grafana-workspace.us-east-2.amazonaws.com",
      "grafanaVersion": "9.4",
      "id": "g-XYZ",
      "modified": "2023-06-07T12:30:09.892000-04:00",
      "name": "accelerator-workspace",
      "notificationDestinations": [
        "SNS"
      ],
      "status": "ACTIVE",
      "tags": {}
    }
  ]
}
  3. Assign the values of id and endpoint to the following environment variables:
export COA_AMG_WORKSPACE_ID="<<YOUR-WORKSPACE-ID, similar to the above g-XYZ, without quotation marks>>"
export COA_AMG_ENDPOINT_URL="<<https://YOUR-WORKSPACE-URL, including protocol (i.e. https://), without quotation marks, similar to the above https://g-XYZ.grafana-workspace.us-east-2.amazonaws.com>>"

COA_AMG_ENDPOINT_URL needs to include https://.

  4. Create a Grafana API key from the Amazon Managed Grafana workspace:
export AMG_API_KEY=$(aws grafana create-workspace-api-key \
--key-name "grafana-operator-key" \
--key-role "ADMIN" \
--seconds-to-live 432000 \
--workspace-id $COA_AMG_WORKSPACE_ID \
--query key \
--output text)
  5. Set up a secret in AWS Systems Manager:
aws ssm put-parameter --name "/cdk-accelerator/grafana-api-key" \
--type "SecureString" \
--value $AMG_API_KEY \
--region $AWS_REGION

The secret will be accessed by the External Secrets add-on and made available as a native Kubernetes secret in the EKS cluster.

Bootstrap the AWS CDK environment

The first step to any AWS CDK deployment is bootstrapping the environment. You use the cdk bootstrap command in the AWS CDK CLI to prepare the environment (a combination of AWS account and AWS Region) with resources required by AWS CDK to perform deployments into that environment. AWS CDK bootstrapping is needed for each account and Region combination, so if you already bootstrapped AWS CDK in a Region, you don’t need to repeat the bootstrapping process.

cdk bootstrap aws://$ACCOUNT_ID/$AWS_REGION

Deploy the solution

Complete the following steps to deploy the solution:

  1. Clone the cdk-aws-observability-accelerator repository and install the dependency packages. This repository contains AWS CDK v2 code written in TypeScript.
git clone https://github.com/aws-observability/cdk-aws-observability-accelerator.git
cd cdk-aws-observability-accelerator

The settings for the Grafana dashboard JSON files are specified in the AWS CDK context. You need to update the context in the cdk.json file, located in the current directory. The location of the Neuron dashboard is specified by the fluxRepository.values.GRAFANA_NEURON_DASH_URL parameter, and neuronNodeGroup sets the instance type, node count, and Amazon Elastic Block Store (Amazon EBS) volume size used for the nodes.

  2. Enter the following snippet into cdk.json, replacing context:
"context": {
    "fluxRepository": {
      "name": "grafana-dashboards",
      "namespace": "grafana-operator",
      "repository": {
        "repoUrl": "https://github.com/aws-observability/aws-observability-accelerator",
        "name": "grafana-dashboards",
        "targetRevision": "main",
        "path": "./artifacts/grafana-operator-manifests/eks/infrastructure"
      },
      "values": {
        "GRAFANA_CLUSTER_DASH_URL" : "https://raw.githubusercontent.com/aws-observability/aws-observability-accelerator/main/artifacts/grafana-dashboards/eks/infrastructure/cluster.json",
        "GRAFANA_KUBELET_DASH_URL" : "https://raw.githubusercontent.com/aws-observability/aws-observability-accelerator/main/artifacts/grafana-dashboards/eks/infrastructure/kubelet.json",
        "GRAFANA_NSWRKLDS_DASH_URL" : "https://raw.githubusercontent.com/aws-observability/aws-observability-accelerator/main/artifacts/grafana-dashboards/eks/infrastructure/namespace-workloads.json",
        "GRAFANA_NODEEXP_DASH_URL" : "https://raw.githubusercontent.com/aws-observability/aws-observability-accelerator/main/artifacts/grafana-dashboards/eks/infrastructure/nodeexporter-nodes.json",
        "GRAFANA_NODES_DASH_URL" : "https://raw.githubusercontent.com/aws-observability/aws-observability-accelerator/main/artifacts/grafana-dashboards/eks/infrastructure/nodes.json",
        "GRAFANA_WORKLOADS_DASH_URL" : "https://raw.githubusercontent.com/aws-observability/aws-observability-accelerator/main/artifacts/grafana-dashboards/eks/infrastructure/workloads.json",
        "GRAFANA_NEURON_DASH_URL" : "https://raw.githubusercontent.com/aws-observability/aws-observability-accelerator/main/artifacts/grafana-dashboards/eks/neuron/neuron-monitor.json"
      },
      "kustomizations": [
        {
          "kustomizationPath": "./artifacts/grafana-operator-manifests/eks/infrastructure"
        },
        {
          "kustomizationPath": "./artifacts/grafana-operator-manifests/eks/neuron"
        }
      ]
    },
     "neuronNodeGroup": {
      "instanceClass": "inf1",
      "instanceSize": "2xlarge",
      "desiredSize": 1, 
      "minSize": 1, 
      "maxSize": 3,
      "ebsSize": 512
    }
  }

You can replace the Inf1 instance type with Inf2 and change the size as needed. To check availability in your selected Region, run the following command (amend Values as you see fit):

aws ec2 describe-instance-type-offerings \
--filters Name=instance-type,Values="inf1*" \
--query "InstanceTypeOfferings[].InstanceType" \
--region $AWS_REGION
  3. Install the project dependencies:
npm install
  4. Run the following commands to deploy the open source observability pattern:
make build
make pattern single-new-eks-inferentia-opensource-observability deploy

Validate the solution

Complete the following steps to validate the solution:

  1. Run the update-kubeconfig command. You should be able to get the command from the output message of the previous command:
aws eks update-kubeconfig --name single-new-eks-inferentia-opensource... --region <your region> --role-arn arn:aws:iam::xxxxxxxxx:role/single-new-eks-....
  2. Verify the resources you created:
kubectl get pods -A

The following screenshot shows our sample output.

  3. Make sure the neuron-device-plugin-daemonset DaemonSet is running:
kubectl get ds neuron-device-plugin-daemonset --namespace kube-system

The following is our expected output:

NAME                             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
neuron-device-plugin-daemonset   1         1         1       1            1           <none>          2h
  4. Confirm that the neuron-monitor DaemonSet is running:
kubectl get ds neuron-monitor --namespace kube-system

The following is our expected output:

NAME             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
neuron-monitor   1         1         1       1            1           <none>          2h
  5. To verify that the Neuron devices and cores are visible, run the neuron-ls and neuron-top commands from, for example, your neuron-monitor pod (you can get the pod’s name from the output of kubectl get pods -A):
kubectl exec -it {your neuron-monitor pod} -n kube-system -- /bin/bash -c "neuron-ls"

The following screenshot shows our expected output.

kubectl exec -it {your neuron-monitor pod} -n kube-system -- /bin/bash -c "neuron-top"

The following screenshot shows our expected output.

Visualize data using the Grafana Neuron dashboard

Log in to your Amazon Managed Grafana workspace and navigate to the Dashboards panel. You should see a dashboard named Neuron / Monitor.

To see some interesting metrics on the Grafana dashboard, we apply the following manifest:

curl https://raw.githubusercontent.com/aws-observability/aws-observability-accelerator/main/artifacts/k8s-deployment-manifest-templates/neuron/pytorch-inference-resnet50.yml | kubectl apply -f -

This is a sample workload that compiles the torchvision ResNet50 model and runs repetitive inference in a loop to generate telemetry data.

To verify the pod was successfully deployed, run the following code:

kubectl get pods

You should see a pod named pytorch-inference-resnet50.

After a few minutes, looking into the Neuron / Monitor dashboard, you should see the gathered metrics similar to the following screenshots.

Grafana Operator and Flux always work together to synchronize your dashboards with Git. If you delete your dashboards by accident, they will be re-provisioned automatically.

Clean up

You can delete the whole AWS CDK stack with the following command:

make pattern single-new-eks-inferentia-opensource-observability destroy

Conclusion

In this post, we showed you how to introduce observability, with open source tooling, into an EKS cluster featuring a data plane running EC2 Inf1 instances. We started by selecting the Amazon EKS-optimized accelerated AMI for the data plane nodes, which includes the Neuron container runtime, providing access to AWS Inferentia and Trainium Neuron devices. Then, to expose the Neuron cores and devices to Kubernetes, we deployed the Neuron device plugin. The actual collection and mapping of telemetry data into Prometheus-compatible format was achieved via neuron-monitor and neuron-monitor-prometheus.py. Metrics were sourced from Amazon Managed Service for Prometheus and displayed on the Neuron dashboard of Amazon Managed Grafana.

We recommend that you explore additional observability patterns in the AWS Observability Accelerator for CDK GitHub repo. To learn more about Neuron, refer to the AWS Neuron Documentation.


About the Author

Riccardo Freschi is a Sr. Solutions Architect at AWS, focusing on application modernization. He works closely with partners and customers to help them transform their IT landscapes in their journey to the AWS Cloud by refactoring existing applications and building new ones.


Explore data with ease: Use SQL and Text-to-SQL in Amazon SageMaker Studio JupyterLab notebooks


Amazon SageMaker Studio provides a fully managed solution for data scientists to interactively build, train, and deploy machine learning (ML) models. In the process of working on their ML tasks, data scientists typically start their workflow by discovering relevant data sources and connecting to them. They then use SQL to explore, analyze, visualize, and integrate data from various sources before using it in their ML training and inference. Previously, data scientists often found themselves juggling multiple tools to support SQL in their workflow, which hindered productivity.

We’re excited to announce that JupyterLab notebooks in SageMaker Studio now come with built-in support for SQL. Data scientists can now:

  • Connect to popular data services including Amazon Athena, Amazon Redshift, Amazon DataZone, and Snowflake directly within the notebooks
  • Browse and search for databases, schemas, tables, and views, and preview data within the notebook interface
  • Mix SQL and Python code in the same notebook for efficient exploration and transformation of data for use in ML projects
  • Use developer productivity features such as SQL command completion, code formatting assistance, and syntax highlighting to help accelerate code development and improve overall developer productivity

In addition, administrators can securely manage connections to these data services, allowing data scientists to access authorized data without the need to manage credentials manually.

In this post, we guide you through setting up this feature in SageMaker Studio, and walk you through various capabilities of this feature. Then we show how you can enhance the in-notebook SQL experience using Text-to-SQL capabilities provided by advanced large language models (LLMs) to write complex SQL queries using natural language text as input. Finally, to enable a broader audience of users to generate SQL queries from natural language input in their notebooks, we show you how to deploy these Text-to-SQL models using Amazon SageMaker endpoints.

Solution overview

With SageMaker Studio JupyterLab notebook’s SQL integration, you can now connect to popular data sources like Snowflake, Athena, Amazon Redshift, and Amazon DataZone. This new feature enables you to perform various functions.

For example, you can visually explore data sources like databases, tables, and schemas directly from your JupyterLab ecosystem. If your notebook environments are running on SageMaker Distribution 1.6 or higher, look for a new widget on the left side of your JupyterLab interface. This addition enhances data accessibility and management within your development environment.

If you’re using an older SageMaker Distribution (v1.5 or lower) or a custom environment, refer to the appendix for more information.

After you have set up connections (illustrated in the next section), you can list data connections, browse databases and tables, and inspect schemas.

The SageMaker Studio JupyterLab built-in SQL extension also enables you to run SQL queries directly from a notebook. Jupyter notebooks can differentiate between SQL and Python code using the %%sm_sql magic command, which must be placed at the top of any cell that contains SQL code. This command signals to JupyterLab that the following instructions are SQL commands rather than Python code. The output of a query can be displayed directly within the notebook, facilitating seamless integration of SQL and Python workflows in your data analysis.

The output of a query can be displayed visually as HTML tables, as shown in the following screenshot.

They can also be written to a pandas DataFrame.

Prerequisites

Make sure you have satisfied the following prerequisites in order to use the SageMaker Studio notebook SQL experience:

  • SageMaker Studio V2 – Make sure you’re running the most up-to-date version of your SageMaker Studio domain and user profiles. If you’re currently on SageMaker Studio Classic, refer to Migrating from Amazon SageMaker Studio Classic.
  • IAM role – SageMaker requires an AWS Identity and Access Management (IAM) role to be assigned to a SageMaker Studio domain or user profile to manage permissions effectively. An execution role update may be required to bring in data browsing and the SQL run feature. The following example policy grants users permissions to list and access AWS Glue, Athena, Amazon Simple Storage Service (Amazon S3), AWS Secrets Manager, and Amazon Redshift resources:
    {
       "Version":"2012-10-17",
       "Statement":[
          {
             "Sid":"SQLRelatedS3Permissions",
             "Effect":"Allow",
             "Action":[
                "s3:ListBucket",
                "s3:GetObject",
                "s3:GetBucketLocation",
                "s3:ListMultipartUploadParts",
                "s3:AbortMultipartUpload",
                "s3:PutObject"
             ],
             "Resource":[
                "arn:aws:s3:::sagemaker*/*",
                "arn:aws:s3:::sagemaker*"
             ]
          },
          {
             "Sid":"GlueDataAccess",
             "Effect":"Allow",
             "Action":[
                "glue:GetDatabases",
                "glue:GetSchema",
                "glue:GetTables",
                "glue:GetDatabase",
                "glue:GetTable",
                "glue:ListSchemas",
                "glue:GetPartitions",
                "glue:GetConnections",
                "glue:GetConnection",
                "glue:CreateConnection"
             ],
             "Resource":[
                "arn:aws:glue:<region>:<account>:table/sagemaker*/*",
                "arn:aws:glue:<region>:<account>:database/sagemaker*",
                "arn:aws:glue:<region>:<account>:schema/sagemaker*",
                "arn:aws:glue:<region>:<account>:connection/sagemaker*",
                "arn:aws:glue:<region>:<account>:registry/sagemaker*",
                "arn:aws:glue:<region>:<account>:catalog"
             ]
          },
          {
             "Sid":"AthenaQueryExecution",
             "Effect":"Allow",
             "Action":[
                "athena:ListDataCatalogs",
                "athena:ListDatabases",
                "athena:ListTableMetadata",
                "athena:StartQueryExecution",
                "athena:GetQueryExecution",
                "athena:RunQuery",
                "athena:StartSession",
                "athena:GetQueryResults",
                "athena:ListWorkGroups",
                "athena:GetDataCatalog",
                "athena:GetWorkGroup"
             ],
             "Resource":[
                "arn:aws:athena:<region>:<account>:workgroup/sagemaker*",
                "arn:aws:athena:<region>:<account>:datacatalog/sagemaker*"
             ]
          },
          {
             "Sid":"GetSecretsAndCredentials",
             "Effect":"Allow",
             "Action":[
                "secretsmanager:GetSecretValue",
                "redshift:GetClusterCredentials"
             ],
             "Resource":[
                "arn:aws:secretsmanager:<region>:<account>:secret:sagemaker*",
                "arn:aws:redshift:<region>:<account>:dbuser:sagemaker*/sagemaker*",
                "arn:aws:redshift:<region>:<account>:dbgroup:sagemaker*/sagemaker*",
                "arn:aws:redshift:<region>:<account>:dbname:sagemaker*/sagemaker*"
             ]
          }
       ]
    }

  • JupyterLab Space – You need access to the updated SageMaker Studio and JupyterLab Space with SageMaker Distribution v1.6 or later image versions. If you’re using custom images for JupyterLab Spaces or older versions of SageMaker Distribution (v1.5 or lower), refer to the appendix for instructions to install necessary packages and modules to enable this feature in your environments. To learn more about SageMaker Studio JupyterLab Spaces, refer to Boost productivity on Amazon SageMaker Studio: Introducing JupyterLab Spaces and generative AI tools.
  • Data source access credentials – This SageMaker Studio notebook feature requires user name and password access to data sources such as Snowflake and Amazon Redshift. Create user name and password-based access to these data sources if you do not already have one. OAuth-based access to Snowflake is not a supported feature as of this writing.
  • Load SQL magic – Before you run SQL queries from a Jupyter notebook cell, it’s essential to load the SQL magics extension. Use the command %load_ext amazon_sagemaker_sql_magic to enable this feature. Additionally, you can run the %sm_sql? command to view a comprehensive list of supported options for querying from a SQL cell. These options include setting a default query limit of 1,000, running a full extraction, and injecting query parameters, among others. This setup allows for flexible and efficient SQL data manipulation directly within your notebook environment.

Create database connections

The built-in SQL browsing and execution capabilities of SageMaker Studio are enhanced by AWS Glue connections. An AWS Glue connection is an AWS Glue Data Catalog object that stores essential data such as login credentials, URI strings, and virtual private cloud (VPC) information for specific data stores. These connections are used by AWS Glue crawlers, jobs, and development endpoints to access various types of data stores. You can use these connections for both source and target data, and even reuse the same connection across multiple crawlers or extract, transform, and load (ETL) jobs.

To explore SQL data sources in the left pane of SageMaker Studio, you first need to create AWS Glue connection objects. These connections facilitate access to different data sources and allow you to explore their schemas and data elements.

In the following sections, we walk through the process of creating SQL-specific AWS Glue connectors. This will enable you to access, view, and explore datasets across a variety of data stores. For more detailed information about AWS Glue connections, refer to Connecting to data.

Create an AWS Glue connection

The only way to bring data sources into SageMaker Studio is with AWS Glue connections. You need to create AWS Glue connections with specific connection types. As of this writing, the only supported mechanism of creating these connections is using the AWS Command Line Interface (AWS CLI).

Connection definition JSON file

When connecting to different data sources in AWS Glue, you must first create a JSON file that defines the connection properties—referred to as the connection definition file. This file is crucial for establishing an AWS Glue connection and should detail all the necessary configurations for accessing the data source. For security best practices, it’s recommended to use Secrets Manager to securely store sensitive information such as passwords. Meanwhile, other connection properties can be managed directly through AWS Glue connections. This approach makes sure that sensitive credentials are protected while still making the connection configuration accessible and manageable.

The following is an example of a connection definition JSON:

{
    "ConnectionInput": {
        "Name": <GLUE_CONNECTION_NAME>,
        "Description": <GLUE_CONNECTION_DESCRIPTION>,
        "ConnectionType": "REDSHIFT | SNOWFLAKE | ATHENA",
        "ConnectionProperties": {
            "PythonProperties": "{"aws_secret_arn": <SECRET_ARN>, "database": <...>}"
        }
    }
}

When setting up AWS Glue connections for your data sources, there are a few important guidelines to follow to provide both functionality and security:

  • Stringification of properties – Within the PythonProperties key, make sure all properties are stringified key-value pairs. It’s crucial to properly escape double quotes by using the backslash (\) character where necessary. This helps maintain the correct format and avoid syntax errors in your JSON.
  • Handling sensitive information – Although it’s possible to include all connection properties within PythonProperties, it is advisable not to include sensitive details like passwords directly in these properties. Instead, use Secrets Manager for handling sensitive information. This approach secures your sensitive data by storing it in a controlled and encrypted environment, away from the main configuration files.

Create an AWS Glue connection using the AWS CLI

After you include all the necessary fields in your connection definition JSON file, you’re ready to establish an AWS Glue connection for your data source using the AWS CLI and the following command:

aws --region <REGION> glue create-connection \
--cli-input-json file:///path/to/file/connection/definition/file.json

This command initiates a new AWS Glue connection based on the specifications detailed in your JSON file. The following is a quick breakdown of the command components:

  • --region <REGION> – This specifies the AWS Region where your AWS Glue connection will be created. It is crucial to select the Region where your data sources and other services are located to minimize latency and comply with data residency requirements.
  • --cli-input-json file:///path/to/file/connection/definition/file.json – This parameter directs the AWS CLI to read the input configuration from a local file that contains your connection definition in JSON format.

You should be able to create AWS Glue connections with the preceding AWS CLI command from your Studio JupyterLab terminal. On the File menu, choose New and Terminal.

If the create-connection command runs successfully, you should see your data source listed in the SQL browser pane. If you don’t see your data source listed, choose Refresh to update the cache.

Create a Snowflake connection

In this section, we focus on integrating a Snowflake data source with SageMaker Studio. Creating Snowflake accounts, databases, and warehouses falls outside the scope of this post. To get started with Snowflake, refer to the Snowflake user guide. In this post, we concentrate on creating a Snowflake definition JSON file and establishing a Snowflake data source connection using AWS Glue.

Create a Secrets Manager secret

You can connect to your Snowflake account by either using a user ID and password or using private keys. To connect with a user ID and password, you need to securely store your credentials in Secrets Manager. As mentioned previously, although it’s possible to embed this information under PythonProperties, it is not recommended to store sensitive information in plain text format. Always make sure that sensitive data is handled securely to avoid potential security risks.

To store information in Secrets Manager, complete the following steps:

  1. On the Secrets Manager console, choose Store a new secret.
  2. For Secret type, choose Other type of secret.
  3. For the key-value pair, choose Plaintext and enter the following:
    {
        "user":"TestUser",
        "password":"MyTestPassword",
        "account":"AWSSAGEMAKERTEST"
    }

  4. Enter a name for your secret, such as sm-sql-snowflake-secret.
  5. Leave the other settings as default or customize if required.
  6. Create the secret.

Create an AWS Glue connection for Snowflake

As discussed earlier, AWS Glue connections are essential for accessing any connection from SageMaker Studio. You can find a list of all supported connection properties for Snowflake. The following is a sample connection definition JSON for Snowflake. Replace the placeholder values with the appropriate values before saving it to disk:

{
    "ConnectionInput": {
        "Name": "Snowflake-Airlines-Dataset",
        "Description": "SageMaker-Snowflake Airlines Dataset",
        "ConnectionType": "SNOWFLAKE",
        "ConnectionProperties": {
            "PythonProperties": "{"aws_secret_arn": "arn:aws:secretsmanager:<region>:<account>:secret:sm-sql-snowflake-secret", "database": "SAGEMAKERDEMODATABASE1"}"
        }
    }
}

To create an AWS Glue connection object for the Snowflake data source, use the following command:

aws --region <REGION> glue create-connection \
--cli-input-json file:///path/to/file/snowflake/definition/file.json

This command creates a new Snowflake data source connection in your SQL browser pane that’s browsable, and you can run SQL queries against it from your JupyterLab notebook cell.

Create an Amazon Redshift connection

Amazon Redshift is a fully managed, petabyte-scale data warehouse service that simplifies and reduces the cost of analyzing all your data using standard SQL. The procedure for creating an Amazon Redshift connection closely mirrors that for a Snowflake connection.

Create a Secrets Manager secret

Similar to the Snowflake setup, to connect to Amazon Redshift using a user ID and password, you need to securely store the secrets information in Secrets Manager. Complete the following steps:

  1. On the Secrets Manager console, choose Store a new secret.
  2. For Secret type, choose Credentials for Amazon Redshift cluster.
  3. Enter the credentials used to log in to access Amazon Redshift as a data source.
  4. Choose the Redshift cluster associated with the secrets.
  5. Enter a name for the secret, such as sm-sql-redshift-secret.
  6. Leave the other settings as default or customize if required.
  7. Create the secret.

By following these steps, you make sure your connection credentials are handled securely, using the robust security features of AWS to manage sensitive data effectively.

Create an AWS Glue connection for Amazon Redshift

To set up a connection with Amazon Redshift using a JSON definition, fill in the necessary fields and save the following JSON configuration to disk:

{
    "ConnectionInput": {
        "Name": "Redshift-US-Housing-Dataset",
        "Description": "sagemaker redshift us housing dataset connection",
        "ConnectionType": "REDSHIFT",
        "ConnectionProperties": {
            "PythonProperties": "{"aws_secret_arn": "arn:aws:secretsmanager:<region>:<account>:sm-sql-redshift-secret", "database": "us-housing-database"}"
        }
    }
}

To create an AWS Glue connection object for the Redshift data source, use the following AWS CLI command:

aws --region <REGION> glue create-connection \
--cli-input-json file:///path/to/file/redshift/definition/file.json

This command creates a connection in AWS Glue linked to your Redshift data source. If the command runs successfully, you will be able to see your Redshift data source within the SageMaker Studio JupyterLab notebook, ready for running SQL queries and performing data analysis.

Create an Athena connection

Athena is a fully managed SQL query service from AWS that enables analysis of data stored in Amazon S3 using standard SQL. To set up an Athena connection as a data source in the JupyterLab notebook’s SQL browser, you need to create an Athena sample connection definition JSON. The following JSON structure configures the necessary details to connect to Athena, specifying the data catalog, the S3 staging directory, and the Region:

{
    "ConnectionInput": {
        "Name": "Athena-Credit-Card-Fraud",
        "Description": "SageMaker-Athena Credit Card Fraud",
        "ConnectionType": "ATHENA",
        "ConnectionProperties": {
            "PythonProperties": "{"catalog_name": "AwsDataCatalog","s3_staging_dir": "s3://sagemaker-us-east-2-123456789/athena-data-source/credit-card-fraud/", "region_name": "us-east-2"}"
        }
    }
}

To create an AWS Glue connection object for the Athena data source, use the following AWS CLI command:

aws --region <REGION> glue create-connection \
--cli-input-json file:///path/to/file/athena/definition/file.json

If the command is successful, you will be able to access the Athena data catalog and tables directly from the SQL browser within your SageMaker Studio JupyterLab notebook.

Query data from multiple sources

If you have multiple data sources integrated into SageMaker Studio through the built-in SQL browser and the notebook SQL feature, you can quickly run queries and effortlessly switch between data source backends in subsequent cells within a notebook. This capability allows for seamless transitions between different databases or data sources during your analysis workflow.

You can run queries against a diverse collection of data source backends and bring the results directly into the Python space for further analysis or visualization. This is facilitated by the %%sm_sql magic command available in SageMaker Studio notebooks. To output the results of your SQL query into a pandas DataFrame, there are two options:

  • From your notebook cell toolbar, choose the output type DataFrame and name your DataFrame variable
  • Append the following parameter to your %%sm_sql command (see the example after this list):
    --output '{"format": "DATAFRAME", "dataframe_name": "df"}'

The following diagram illustrates this workflow and showcases how you can effortlessly run queries across various sources in subsequent notebook cells, as well as train a SageMaker model using training jobs or directly within the notebook using local compute. Additionally, the diagram highlights how the built-in SQL integration of SageMaker Studio simplifies the processes of extraction and building directly within the familiar environment of a JupyterLab notebook cell.

Text-to-SQL: Using natural language to enhance query authoring

SQL is a complex language that requires an understanding of databases, tables, syntaxes, and metadata. Today, generative artificial intelligence (AI) can enable you to write complex SQL queries without requiring in-depth SQL experience. The advancement of LLMs has significantly impacted natural language processing (NLP)-based SQL generation, allowing for the creation of precise SQL queries from natural language descriptions—a technique referred to as Text-to-SQL. However, it is essential to acknowledge the inherent differences between human language and SQL. Human language can sometimes be ambiguous or imprecise, whereas SQL is structured, explicit, and unambiguous. Bridging this gap and accurately converting natural language into SQL queries can present a formidable challenge. When provided with appropriate prompts, LLMs can help bridge this gap by understanding the intent behind the human language and generating accurate SQL queries accordingly.

With the release of the SageMaker Studio in-notebook SQL query feature, SageMaker Studio makes it straightforward to inspect databases and schemas, and author, run, and debug SQL queries without ever leaving the Jupyter notebook IDE. This section explores how the Text-to-SQL capabilities of advanced LLMs can facilitate the generation of SQL queries using natural language within Jupyter notebooks. We employ the cutting-edge Text-to-SQL model defog/sqlcoder-7b-2 in conjunction with Jupyter AI, a generative AI assistant specifically designed for Jupyter notebooks, to create complex SQL queries from natural language. By using this advanced model, we can effortlessly and efficiently create complex SQL queries using natural language, thereby enhancing our SQL experience within notebooks.

Notebook prototyping using the Hugging Face Hub

To begin prototyping, you need the following:

  • GitHub code – The code presented in this section is available in the following GitHub repo and by referencing the example notebook.
  • JupyterLab Space – Access to a SageMaker Studio JupyterLab Space backed by GPU-based instances is essential. For the defog/sqlcoder-7b-2 model, a 7B parameter model, using an ml.g5.2xlarge instance is recommended. Alternatives such as defog/sqlcoder-70b-alpha or defog/sqlcoder-34b-alpha are also viable for natural language to SQL conversion, but larger instance types may be required for prototyping. Make sure you have the quota to launch a GPU-backed instance by navigating to the Service Quotas console, searching for SageMaker, and searching for Studio JupyterLab Apps running on <instance type>.

Launch a new GPU-backed JupyterLab Space from your SageMaker Studio. It’s recommended to create a new JupyterLab Space with at least 75 GB of Amazon Elastic Block Store (Amazon EBS) storage for a 7B parameter model.

  • Hugging Face Hub – If your SageMaker Studio domain has access to download models from the Hugging Face Hub, you can use the AutoModelForCausalLM class from huggingface/transformers to automatically download models and pin them to your local GPUs. The model weights will be stored in your local machine’s cache. See the following code:
    model_id = "defog/sqlcoder-7b-2" # or use "defog/sqlcoder-34b-alpha", "defog/sqlcoder-70b-alpha"
    
    # download model and tokenizer in fp16 and pin model to local notebook GPUs
    model = AutoModelForCausalLM.from_pretrained(
        model_id, 
        device_map="auto",
        torch_dtype=torch.float16
    )
    
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    tokenizer.pad_token = tokenizer.eos_token

After the model has been fully downloaded and loaded into memory, you should observe an increase in GPU utilization on your local machine. This indicates that the model is actively using the GPU resources for computational tasks. You can verify this in your own JupyterLab space by running nvidia-smi (for a one-time display) or nvidia-smi --loop=1 (to repeat every second) from your JupyterLab terminal.

Text-to-SQL models excel at understanding the intent and context of a user’s request, even when the language used is conversational or ambiguous. The process involves translating natural language inputs into the correct database schema elements, such as table names, column names, and conditions. However, an off-the-shelf Text-to-SQL model will not inherently know the structure of your data warehouse, the specific database schemas, or be able to accurately interpret the content of a table based solely on column names. To effectively use these models to generate practical and efficient SQL queries from natural language, it is necessary to adapt the SQL text-generation model to your specific warehouse database schema. This adaptation is facilitated through the use of LLM prompts. The following is a recommended prompt template for the defog/sqlcoder-7b-2 Text-to-SQL model, divided into four parts:

  • Task – This section should specify a high-level task to be accomplished by the model. It should include the type of database backend (such as Amazon RDS, PostgreSQL, or Amazon Redshift) to make the model aware of any nuanced syntactical differences that may affect the generation of the final SQL query.
  • Instructions – This section should define task boundaries and domain awareness for the model, and may include few-shot examples to guide the model in generating finely tuned SQL queries.
  • Database Schema – This section should detail your warehouse database schemas, outlining the relationships between tables and columns to aid the model in understanding the database structure.
  • Answer – This section is reserved for the model to output the SQL query response to the natural language input.

An example of the database schema and prompt used in this section is available in the GitHub Repo.

### Task
Generate a SQL query to answer [QUESTION]{user_question}[/QUESTION]

### Instructions
- If you cannot answer the question with the available database schema, return 'I do not know'

### Database Schema
The query will run on a database with the following schema:
{table_metadata_string_DDL_statements}

### Answer
Given the database schema, here is the SQL query that answers
 [QUESTION]
    {user_question}
 [/QUESTION]

[SQL]

Prompt engineering is not just about forming questions or statements; it’s a nuanced art and science that significantly impacts the quality of interactions with an AI model. The way you craft a prompt can profoundly influence the nature and usefulness of the AI’s response. This skill is pivotal in maximizing the potential of AI interactions, especially in complex tasks requiring specialized understanding and detailed responses.

It’s important to have the option to quickly build and test a model’s response for a given prompt and optimize the prompt based on the response. JupyterLab notebooks provide the ability to receive instant feedback from a model running on local compute, optimize the prompt, and tune a model’s response further or change the model entirely. In this post, we use a SageMaker Studio JupyterLab notebook backed by an ml.g5.2xlarge instance (NVIDIA A10G GPU with 24 GB of memory) to run Text-to-SQL model inference in the notebook and interactively build our model prompt until the model’s response is sufficiently tuned to provide responses that are directly executable in JupyterLab’s SQL cells. To run model inference and simultaneously stream model responses, we use a combination of model.generate and TextIteratorStreamer, as defined in the following code:

from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer

# The tokenizer and model are assumed to be loaded earlier in the notebook, for example:
# tokenizer = AutoTokenizer.from_pretrained("defog/sqlcoder-7b-2")
# model = AutoModelForCausalLM.from_pretrained("defog/sqlcoder-7b-2").to("cuda")

streamer = TextIteratorStreamer(
    tokenizer=tokenizer, 
    timeout=240.0, 
    skip_prompt=True, 
    skip_special_tokens=True
)


def llm_generate_query(user_question):
    """Generate a Text-to-SQL response for a natural language question."""
    
    # Fill the {user_question} placeholder in the prompt template defined earlier
    updated_prompt = prompt.format(user_question=user_question)
    inputs = tokenizer(updated_prompt, return_tensors="pt").to("cuda")
    
    return model.generate(
        **inputs,
        num_return_sequences=1,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.eos_token_id,
        max_new_tokens=1024,
        temperature=0.1,
        do_sample=False,
        num_beams=1, 
        streamer=streamer,
    )
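
To stream tokens as they are produced, model.generate runs in a background thread while the notebook iterates over the streamer. The following is a minimal sketch using the objects defined above; the helper function and the sample question are illustrative.

from threading import Thread

def stream_sql_response(user_question):
    """Print the model's SQL response token by token as it is generated."""
    # Run generation in the background so the streamer can be consumed here.
    # In practice, create a fresh TextIteratorStreamer per generation call.
    thread = Thread(target=llm_generate_query, args=(user_question,))
    thread.start()

    generated_sql = ""
    for token in streamer:  # yields decoded text chunks as they arrive
        generated_sql += token
        print(token, end="", flush=True)
    thread.join()
    return generated_sql

sql_text = stream_sql_response("Which carrier had the highest average departure delay?")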

The model’s output can be decorated with SageMaker SQL magic %%sm_sql ..., which allows the JupyterLab notebook to identify the cell as a SQL cell.

Host Text-to-SQL models as SageMaker endpoints

At the end of the prototyping stage, we have selected our preferred Text-to-SQL LLM, an effective prompt format, and an appropriate instance type for hosting the model (either single-GPU or multi-GPU). SageMaker facilitates the scalable hosting of custom models through the use of SageMaker endpoints. These endpoints can be defined according to specific criteria, allowing for the deployment of LLMs as endpoints. This capability enables you to scale the solution to a wider audience, allowing users to generate SQL queries from natural language inputs using custom hosted LLMs. The following diagram illustrates this architecture.

To host your LLM as a SageMaker endpoint, you generate several artifacts.

The first artifact is the model weights. SageMaker Deep Java Library (DJL) Serving containers are configured through a serving.properties file, which directs how models are sourced—either directly from the Hugging Face Hub or by downloading model artifacts from Amazon S3. If you specify model_id=defog/sqlcoder-7b-2, DJL Serving attempts to download this model directly from the Hugging Face Hub. However, you may incur networking ingress/egress charges each time the endpoint is deployed or elastically scaled. To avoid these charges and potentially speed up the download of model artifacts, it is recommended to skip model_id in serving.properties, save the model weights as S3 artifacts, and point to them with s3url=s3://path/to/model/bin.
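
As an illustration, a minimal serving.properties along these lines might look like the following; the engine, tensor parallel degree, and S3 path are assumptions to adapt to your own setup:

engine=DeepSpeed
option.tensor_parallel_degree=1
option.s3url=s3://<my llm artifact bucket name>/sqlcoder-7b-instruct/weights/
option.dtype=fp16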

Saving a model (with its tokenizer) to disk and uploading it to Amazon S3 can be accomplished with just a few lines of code:

# save model and tokenizer to local disk
model.save_pretrained(local_model_path)
tokenizer.save_pretrained(local_model_path)
...
...
...
# upload file to s3
s3_bucket_name = "<my llm artifact bucket name>"
# s3 prefix to save model weights and tokenizer defs
model_s3_prefix = "sqlcoder-7b-instruct/weights"
# s3 prefix to store serving artifacts (model.py, serving.properties, requirements.txt)
meta_model_s3_prefix = "sqlcoder-7b-instruct/meta-model"

sagemaker.s3.S3Uploader.upload(local_model_path,  f"s3://{s3_bucket_name}/{model_s3_prefix}")

You also use a database prompt file. In this setup, the database prompt is composed of Task, Instructions, Database Schema, and Answer sections. For the current architecture, we allocate a separate prompt file for each database schema. However, there is flexibility to expand this setup to include multiple databases per prompt file, allowing the model to run composite joins across databases on the same server. During our prototyping stage, we save the database prompt as a text file named <Database-Glue-Connection-Name>.prompt, where Database-Glue-Connection-Name corresponds to the connection name visible in your JupyterLab environment. For instance, this post refers to a Snowflake connection named Airlines_Dataset, so the database prompt file is named Airlines_Dataset.prompt. This file is then stored on Amazon S3 and subsequently read and cached by our model serving logic.

Moreover, this architecture permits any authorized users of this endpoint to define, store, and generate natural language to SQL queries without the need for multiple redeployments of the model. We use the following example of a database prompt to demonstrate the Text-to-SQL functionality.

Next, you generate custom model service logic. In this section, you outline a custom inference logic named model.py. This script is designed to optimize the performance and integration of our Text-to-SQL services:

  • Define the database prompt file caching logic – To minimize latency, we implement a custom logic for downloading and caching database prompt files. This mechanism makes sure that prompts are readily available, reducing the overhead associated with frequent downloads.
  • Define custom model inference logic – To enhance inference speed, our text-to-SQL model is loaded in the float16 precision format and then converted into a DeepSpeed model. This step allows for more efficient computation. Additionally, within this logic, you specify which parameters users can adjust during inference calls to tailor the functionality according to their needs.
  • Define custom input and output logic – Establishing clear and customized input/output formats is essential for smooth integration with downstream applications. One such application is JupyterAI, which we discuss in the subsequent section.
%%writefile {meta_model_filename}/model.py
import boto3
import torch
import deepspeed
from transformers import AutoModelForCausalLM
from djl_python import Input, Output
...

predictor = None
prompt_for_db_dict_cache = {}

def download_prompt_from_s3(prompt_filename):

    print(f"downloading prompt file: {prompt_filename}")
    s3 = boto3.resource('s3')
    ...


def get_model(properties):
    
    ...
    print(f"Loading model from {cwd}")
    model = AutoModelForCausalLM.from_pretrained(
        cwd, 
        low_cpu_mem_usage=True, 
        torch_dtype=torch.bfloat16
    )
    model = deepspeed.init_inference(
        model, 
        mp_size=properties["tensor_parallel_degree"]
    )
    
    ...


def handle(inputs: Input) -> None:

    ...

    global predictor
    if not predictor:
        predictor = get_model(inputs.get_properties())

    ...
    result = f"""%%sm_sql --metastore-id {prompt_for_db_key.split('.')[0]} --metastore-type GLUE_CONNECTION\n\n{result}\n"""
    result = [{'generated_text': result}]
    
    return Output().add(result)
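
To illustrate the input and output contract that this handler implements, the following is a minimal sketch of invoking the hosted endpoint directly with boto3. The endpoint name is a placeholder, and the payload mirrors the request schema used with Jupyter AI later in this post.

import json
import boto3

runtime = boto3.client("sagemaker-runtime")

payload = {
    "inputs": "Which carrier had the highest average departure delay?",  # natural language question
    "parameters": {
        "temperature": 0.1,
        "top_p": 0.2,
        "max_new_tokens": 1024,
        "return_full_text": False,
    },
    "db_prompt": "Airlines_Dataset.prompt",  # database prompt file cached by model.py
}

response = runtime.invoke_endpoint(
    EndpointName="<your-text-to-sql-endpoint>",  # placeholder endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)

# The handler returns a list of {"generated_text": ...} records; the generated
# text is already decorated with the %%sm_sql magic for use in a SQL cell.
result = json.loads(response["Body"].read())
print(result[0]["generated_text"])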

Additionally, we include a serving.properties file, which acts as a global configuration file for models hosted using DJL serving. For more information, refer to Configurations and settings.

Lastly, you can also include a requirements.txt file to define additional modules required for inference and package everything into a tarball for deployment.

See the following code:

os.system(f"tar czvf {meta_model_filename}.tar.gz ./{meta_model_filename}/")

>>>./deepspeed-djl-serving-7b/
>>>./deepspeed-djl-serving-7b/serving.properties
>>>./deepspeed-djl-serving-7b/model.py
>>>./deepspeed-djl-serving-7b/requirements.txt
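
With the artifacts packaged, the following is a minimal sketch of deploying the tarball behind a DJL Serving (DeepSpeed) container using the SageMaker Python SDK. The container version, execution role, and endpoint name are illustrative assumptions; adjust them for your account and Region.

import sagemaker
from sagemaker import image_uris
from sagemaker.model import Model

session = sagemaker.Session()
role = sagemaker.get_execution_role()

# Upload the meta-model tarball (model.py, serving.properties, requirements.txt) to S3
code_artifact = sagemaker.s3.S3Uploader.upload(
    f"{meta_model_filename}.tar.gz",
    f"s3://{s3_bucket_name}/{meta_model_s3_prefix}",
)

# DJL DeepSpeed inference container; pin a version that is available in your Region
image_uri = image_uris.retrieve(
    framework="djl-deepspeed", region=session.boto_region_name, version="0.27.0"
)

model = Model(image_uri=image_uri, model_data=code_artifact, role=role, sagemaker_session=session)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",  # single-GPU instance used during prototyping
    endpoint_name="text-to-sql-sqlcoder-7b",  # illustrative endpoint name
)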

Integrate your endpoint with the SageMaker Studio Jupyter AI assistant

Jupyter AI is an open source tool that brings generative AI to Jupyter notebooks, offering a robust and user-friendly platform for exploring generative AI models. It enhances productivity in JupyterLab and Jupyter notebooks by providing features like the %%ai magic for creating a generative AI playground inside notebooks, a native chat UI in JupyterLab for interacting with AI as a conversational assistant, and support for a wide array of LLMs from providers like Amazon Titan, AI21, Anthropic, Cohere, and Hugging Face or managed services like Amazon Bedrock and SageMaker endpoints. For this post, we use Jupyter AI’s out-of-the-box integration with SageMaker endpoints to bring the Text-to-SQL capability into JupyterLab notebooks. The Jupyter AI tool comes pre-installed in all SageMaker Studio JupyterLab Spaces backed by SageMaker Distribution images; end-users are not required to make any additional configurations to start using the Jupyter AI extension to integrate with a SageMaker hosted endpoint. In this section, we discuss the two ways to use the integrated Jupyter AI tool.

Jupyter AI inside a notebook using magics

Jupyter AI’s %%ai magic command allows you to transform your SageMaker Studio JupyterLab notebooks into a reproducible generative AI environment. To begin using AI magics, make sure you have loaded the jupyter_ai_magics extension to use %%ai magic, and additionally load amazon_sagemaker_sql_magic to use %%sm_sql magic:

# load sm_sql magic extension and ai magic extension
%load_ext jupyter_ai_magics
%load_ext amazon_sagemaker_sql_magic

To run a call to your SageMaker endpoint from your notebook using the %%ai magic command, provide the following parameters and structure the command as follows:

  • --region-name – Specify the Region where your endpoint is deployed. This makes sure that the request is routed to the correct geographic location.
  • --request-schema – Include the schema of the input data. This schema outlines the expected format and types of the input data that your model needs to process the request.
  • --response-path – Define the path within the response object where the output of your model is located. This path is used to extract the relevant data from the response returned by your model.
  • -f (optional) – This is an output formatter flag that indicates the type of output returned by the model. In the context of a Jupyter notebook, if the output is code, this flag should be set accordingly to format the output as executable code at the top of a Jupyter notebook cell, followed by a free text input area for user interaction.

For example, the command in a Jupyter notebook cell might look like the following code:

%%ai sagemaker-endpoint:<endpoint-name> --region-name=us-east-1 
--request-schema={
    "inputs":"<prompt>", 
    "parameters":{
        "temperature":0.1,
        "top_p":0.2,
        "max_new_tokens":1024,
        "return_full_text":false
    }, 
    "db_prompt":"Airlines_Dataset.prompt"
  } 
--response-path=[0].generated_text -f code

My natural language query goes here...

Jupyter AI chat window

Alternatively, you can interact with SageMaker endpoints through a built-in user interface, simplifying the process of generating queries or engaging in dialogue. Before beginning to chat with your SageMaker endpoint, configure the relevant settings in Jupyter AI for the SageMaker endpoint, as shown in the following screenshot.

Conclusion

SageMaker Studio now simplifies and streamlines the data scientist workflow by integrating SQL support into JupyterLab notebooks. This allows data scientists to focus on their tasks without the need to manage multiple tools. Furthermore, the new built-in SQL integration in SageMaker Studio enables data personas to effortlessly generate SQL queries using natural language text as input, thereby accelerating their workflow.

We encourage you to explore these features in SageMaker Studio. For more information, refer to Prepare data with SQL in Studio.

Appendix

Enable the SQL browser and notebook SQL cell in custom environments

If you’re not using a SageMaker Distribution image or using Distribution images 1.5 or below, run the following commands to enable the SQL browsing feature inside your JupyterLab environment:

npm install -g vscode-jsonrpc
npm install -g sql-language-server
pip install amazon-sagemaker-sql-execution==0.1.0
pip install amazon-sagemaker-sql-editor
restart-jupyter-server

Relocate the SQL browser widget

JupyterLab widgets can be relocated. Depending on your preference, you can move a widget to either side of the JupyterLab sidebar. For example, you can move the SQL browser widget to the opposite (right) side by right-clicking the widget icon and choosing Switch Sidebar Side.


About the authors

Pranav Murthy is an AI/ML Specialist Solutions Architect at AWS. He focuses on helping customers build, train, deploy and migrate machine learning (ML) workloads to SageMaker. He previously worked in the semiconductor industry developing large computer vision (CV) and natural language processing (NLP) models to improve semiconductor processes using state of the art ML techniques. In his free time, he enjoys playing chess and traveling. You can find Pranav on LinkedIn.

Varun Shah is a Software Engineer working on Amazon SageMaker Studio at Amazon Web Services. He is focused on building interactive ML solutions that simplify data processing and data preparation journeys. In his spare time, Varun enjoys outdoor activities including hiking and skiing, and is always up for discovering new, exciting places.

Sumedha Swamy is a Principal Product Manager at Amazon Web Services, where he leads the SageMaker Studio team in its mission to develop the IDE of choice for data science and machine learning. He has spent the past 15 years building machine learning-based consumer and enterprise products.

Bosco Albuquerque is a Sr. Partner Solutions Architect at AWS and has over 20 years of experience working with database and analytics products from enterprise database vendors and cloud providers. He has helped technology companies design and implement data analytics solutions and products.


Distributed training and efficient scaling with the Amazon SageMaker Model Parallel and Data Parallel Libraries


There has been tremendous progress in the field of distributed deep learning for large language models (LLMs), especially after the release of ChatGPT in December 2022. LLMs continue to grow in size with billions or even trillions of parameters, and they often won’t fit into a single accelerator device such as a GPU or even a single node such as ml.p5.48xlarge because of memory limitations. Customers training LLMs often must distribute their workload across hundreds or even thousands of GPUs. Enabling training at such scale remains a challenge in distributed training, and training efficiently in such a large system is another equally important problem. Over the past years, the distributed training community has introduced 3D parallelism (data parallelism, pipeline parallelism, and tensor parallelism) and other techniques (such as sequence parallelism and expert parallelism) to address such challenges.

In December 2023, Amazon announced the release of the SageMaker model parallel library 2.0 (SMP), which achieves state-of-the-art efficiency in large model training, together with the SageMaker distributed data parallelism library (SMDDP). This release is a significant update from 1.x: SMP is now integrated with open source PyTorch Fully Sharded Data Parallel (FSDP) APIs, which allows you to use a familiar interface when training large models, and is compatible with Transformer Engine (TE), unlocking tensor parallelism techniques alongside FSDP for the first time. To learn more about the release, refer to Amazon SageMaker model parallel library now accelerates PyTorch FSDP workloads by up to 20%.

In this post, we explore the performance benefits of Amazon SageMaker (including SMP and SMDDP), and how you can use the library to train large models efficiently on SageMaker. We demonstrate the performance of SageMaker with benchmarks on ml.p4d.24xlarge clusters up to 128 instances, and FSDP mixed precision with bfloat16 for the Llama 2 model. We start with a demonstration of near-linear scaling efficiencies for SageMaker, followed by analyzing contributions from each feature for optimal throughput, and end with efficient training with various sequence lengths up to 32,768 through tensor parallelism.

Near-linear scaling with SageMaker

To reduce the overall training time for LLMs, preserving high throughput when scaling to large clusters (thousands of GPUs) is crucial given the inter-node communication overhead. In this post, we demonstrate robust and near-linear scaling (by varying the number of GPUs for a fixed total problem size) efficiencies on p4d instances using both SMP and SMDDP.

In this section, we demonstrate SMP’s near-linear scaling performance. Here we train Llama 2 models of various sizes (7B, 13B, and 70B parameters) using a fixed sequence length of 4,096, the SMDDP backend for collective communication, TE enabled, a global batch size of 4 million, with 16 to 128 p4d nodes. The following table summarizes our optimal configuration and training performance (model TFLOPs per second).

| Model size | Number of nodes | TFLOPs* | sdp* | tp* | offload* | Scaling efficiency |
|---|---|---|---|---|---|---|
| 7B | 16 | 136.76 | 32 | 1 | N | 100.0% |
| 7B | 32 | 132.65 | 64 | 1 | N | 97.0% |
| 7B | 64 | 125.31 | 64 | 1 | N | 91.6% |
| 7B | 128 | 115.01 | 64 | 1 | N | 84.1% |
| 13B | 16 | 141.43 | 32 | 1 | Y | 100.0% |
| 13B | 32 | 139.46 | 256 | 1 | N | 98.6% |
| 13B | 64 | 132.17 | 128 | 1 | N | 93.5% |
| 13B | 128 | 120.75 | 128 | 1 | N | 85.4% |
| 70B | 32 | 154.33 | 256 | 1 | Y | 100.0% |
| 70B | 64 | 149.60 | 256 | 1 | N | 96.9% |
| 70B | 128 | 136.52 | 64 | 2 | N | 88.5% |

*At the given model size, sequence length, and number of nodes, we show the globally optimal throughput and configurations after exploring various sdp, tp, and activation offloading combinations.

The preceding table summarizes the optimal throughput numbers obtained by varying the sharded data parallel (sdp) degree (typically using FSDP hybrid sharding instead of full sharding, with more details in the next section), the tensor parallel (tp) degree, and the activation offloading setting, demonstrating near-linear scaling for SMP together with SMDDP. For example, given the Llama 2 model size 7B and sequence length 4,096, overall it achieves scaling efficiencies of 97.0%, 91.6%, and 84.1% (relative to 16 nodes) at 32, 64, and 128 nodes, respectively. The scaling efficiencies are stable across different model sizes and increase slightly as the model size gets larger.

SMP and SMDDP also demonstrate similar scaling efficiencies for other sequence lengths such as 2,048 and 8,192.

SageMaker model parallel library 2.0 performance: Llama 2 70B

Model sizes have continued to grow over the past years, along with frequent state-of-the-art performance updates in the LLM community. In this section, we illustrate performance in SageMaker for the Llama 2 model using a fixed model size of 70B, a sequence length of 4,096, and a global batch size of 4 million. To compare with the globally optimal configuration and throughput from the previous table (SMDDP backend, typically FSDP hybrid sharding, and TE), the following table extends the results to other optimal throughputs (potentially with tensor parallelism), with extra columns for the distributed backend (NCCL or SMDDP), the FSDP sharding strategy (full or hybrid sharding), and whether TE is enabled (the default).

| Model size | Number of nodes | NCCL full sharding: #0 (TFLOPs) | SMDDP full sharding: #1 (TFLOPs) | SMDDP hybrid sharding: #2 (TFLOPs) | SMDDP hybrid sharding with TE: #3 (TFLOPs) | sdp* (#3 config) | tp* (#3 config) | offload* (#3 config) | Improvement #0 → #1 | #1 → #2 | #2 → #3 | #0 → #3 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 70B | 32 | 150.82 | 149.90 | 150.05 | 154.33 | 256 | 1 | Y | -0.6% | 0.1% | 2.9% | 2.3% |
| 70B | 64 | 144.38 | 144.38 | 145.42 | 149.60 | 256 | 1 | N | 0.0% | 0.7% | 2.9% | 3.6% |
| 70B | 128 | 68.53 | 103.06 | 130.66 | 136.52 | 64 | 2 | N | 50.4% | 26.8% | 4.5% | 99.2% |

*At the given model size, sequence length, and number of nodes, we show the globally optimal throughput and configuration after exploring various sdp, tp, and activation offloading combinations.

The latest release of SMP and SMDDP supports multiple features including native PyTorch FSDP, extended and more flexible hybrid sharding, transformer engine integration, tensor parallelism, and optimized all gather collective operation. To better understand how SageMaker achieves efficient distributed training for LLMs, we explore incremental contributions from SMDDP and the following SMP core features:

  • SMDDP enhancement over NCCL with FSDP full sharding
  • Replacing FSDP full sharding with hybrid sharding, which reduces communication cost to improve throughput
  • A further boost to throughput with TE, even when tensor parallelism is disabled
  • At lower resource settings, activation offloading might be able to enable training that would otherwise be infeasible or very slow due to high memory pressure

FSDP full sharding: SMDDP enhancement over NCCL

As shown in the previous table, when models are fully sharded with FSDP, although NCCL (TFLOPs #0) and SMDDP (TFLOPs #1) throughputs are comparable at 32 or 64 nodes, there is a huge improvement of 50.4% from NCCL to SMDDP at 128 nodes.

At smaller model sizes, we observe consistent and significant improvements with SMDDP over NCCL, starting at smaller cluster sizes, because SMDDP is able to mitigate the communication bottleneck effectively.
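
For context, enabling the SMDDP collectives in a training script amounts to registering smddp as the PyTorch process group backend in place of NCCL. The following is a minimal sketch, assuming the SMDDP library is available in the SageMaker training container:

import torch.distributed as dist

# Importing the SMDDP PyTorch plugin registers the "smddp" backend
import smdistributed.dataparallel.torch.torch_smddp  # noqa: F401

# Use SMDDP collectives instead of NCCL for FSDP communication
dist.init_process_group(backend="smddp")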

FSDP hybrid sharding to reduce communication cost

In SMP 1.0, we launched sharded data parallelism, a distributed training technique powered by Amazon in-house MiCS technology. In SMP 2.0, we introduce SMP hybrid sharding, an extensible and more flexible hybrid sharding technique that allows models to be sharded among a subset of GPUs, instead of all training GPUs, which is the case for FSDP full sharding. It’s useful for medium-sized models that don’t need to be sharded across the entire cluster in order to satisfy per-GPU memory constraints. This leads to clusters having more than one model replica and each GPU communicating with fewer peers at runtime.

SMP’s hybrid sharding enables efficient model sharding over a wider range, from the smallest shard degree with no out of memory issues up to the whole cluster size (which equates to full sharding).

The following figure illustrates the throughput dependence on sdp at tp = 1 for simplicity. Although it’s not necessarily the same as the optimal tp value for NCCL or SMDDP full sharding in the previous table, the numbers are quite close. It clearly validates the value of switching from full sharding to hybrid sharding at a large cluster size of 128 nodes, which is applicable to both NCCL and SMDDP. For smaller model sizes, significant improvements with hybrid sharding start at smaller cluster sizes, and the difference keeps increasing with cluster size.
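
The following is a hedged sketch of how hybrid sharding might be requested when launching a training job with the SageMaker PyTorch estimator. The parameter names follow the SMP v2 configuration (for example, hybrid_shard_degree); the entry point, role, and values shown are assumptions that mirror the 70B, 32-node row of the earlier table.

from sagemaker.pytorch import PyTorch

estimator = PyTorch(
    entry_point="train_llama.py",  # hypothetical training script
    role="<your-sagemaker-execution-role>",
    instance_type="ml.p4d.24xlarge",
    instance_count=32,
    framework_version="2.2.0",
    py_version="py310",
    distribution={
        "torch_distributed": {"enabled": True},
        "smdistributed": {
            "modelparallel": {
                "enabled": True,
                "parameters": {
                    "hybrid_shard_degree": 256,        # sdp: shard each model replica over 256 GPUs
                    "tensor_parallel_degree": 1,       # tp
                    "sm_activation_offloading": True,  # offload
                },
            }
        },
    },
)
estimator.fit()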

Improvements with TE

TE is designed to accelerate LLM training on NVIDIA GPUs. Despite not using FP8 because it’s unsupported on p4d instances, we still see significant speedup with TE on p4d.

On top of MiCS trained with the SMDDP backend, TE introduces a consistent boost for throughput across all cluster sizes (the only exception is full sharding at 128 nodes), even when tensor parallelism is disabled (tensor parallel degree is 1).

For smaller model sizes or various sequence lengths, the TE boost is stable and non-trivial, in the range of approximately 3–7.6%.

Activation offloading at low resource settings

At low resource settings (given a small number of nodes), FSDP might experience a high memory pressure (or even out of memory in the worst case) when activation checkpointing is enabled. For such scenarios bottlenecked by memory, turning on activation offloading is potentially an option to improve performance.

For example, as we saw previously, although Llama 2 at model size 13B and sequence length 4,096 trains optimally with activation checkpointing and without activation offloading when at least 32 nodes are available, it achieves the best throughput with activation offloading when limited to 16 nodes.

Enable training with long sequences: SMP tensor parallelism

Longer sequence lengths are desired for long conversations and context, and are getting more attention in the LLM community. Therefore, we report various long sequence throughputs in the following table. The table shows optimal throughputs for Llama 2 training on SageMaker, with various sequence lengths from 2,048 up to 32,768. At sequence length 32,768, native FSDP training is infeasible with 32 nodes at a global batch size of 4 million.

| Model size | Sequence length | Number of nodes | Native FSDP and NCCL (TFLOPs) | SMP and SMDDP (TFLOPs) | SMP improvement |
|---|---|---|---|---|---|
| 7B | 2048 | 32 | 129.25 | 138.17 | 6.9% |
| 7B | 4096 | 32 | 124.38 | 132.65 | 6.6% |
| 7B | 8192 | 32 | 115.25 | 123.11 | 6.8% |
| 7B | 16384 | 32 | 100.73 | 109.11 | 8.3% |
| 7B | 32768 | 32 | N.A. | 82.87 | . |
| 13B | 2048 | 32 | 137.75 | 144.28 | 4.7% |
| 13B | 4096 | 32 | 133.30 | 139.46 | 4.6% |
| 13B | 8192 | 32 | 125.04 | 130.08 | 4.0% |
| 13B | 16384 | 32 | 111.58 | 117.01 | 4.9% |
| 13B | 32768 | 32 | N.A. | 92.38 | . |

Across these runs, the maximum SMP improvement is 8.3% and the median is 5.8%.

When the cluster size is large and given a fixed global batch size, some model training might be infeasible with native PyTorch FSDP, lacking a built-in pipeline or tensor parallelism support. In the preceding table, given a global batch size of 4 million, 32 nodes, and sequence length 32,768, the effective batch size per GPU is 0.5 (for example, tp = 2 with batch size 1), which would otherwise be infeasible without introducing tensor parallelism.
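
To complement the table, the following is a hedged sketch of the training-script side when SMP tensor parallelism is enabled (tensor_parallel_degree greater than 1 in the estimator configuration). The torch.sagemaker calls follow the SMP v2 interface, and the model name and minimal FSDP wrapping are assumptions for illustration.

import torch
import torch.sagemaker as tsm
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from transformers import AutoModelForCausalLM

# Initialize SMP v2; it reads hybrid_shard_degree and tensor_parallel_degree
# from the estimator's distribution configuration.
tsm.init()

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")  # illustrative model

# Apply SMP tensor parallelism (with Transformer Engine kernels where supported)
# before wrapping with FSDP, so each tensor-parallel rank holds only its shard.
model = tsm.transform(model)

model = FSDP(model, device_id=torch.cuda.current_device())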

Conclusion

In this post, we demonstrated efficient LLM training with SMP and SMDDP on p4d instances, attributing contributions to multiple key features, such as SMDDP enhancement over NCCL, flexible FSDP hybrid sharding instead of full sharding, TE integration, and enabling tensor parallelism in favor of long sequence lengths. After being tested over a wide range of settings with various models, model sizes, and sequence lengths, it exhibits robust near-linear scaling efficiencies, up to 128 p4d instances on SageMaker. In summary, SageMaker continues to be a powerful tool for LLM researchers and practitioners.

To learn more, refer to SageMaker model parallelism library v2, or contact the SMP team at sm-model-parallel-feedback@amazon.com.

Acknowledgements

We’d like to thank Robert Van Dusen, Ben Snyder, Gautam Kumar, and Luis Quintela for their constructive feedback and discussions.


About the Authors

Xinle Sheila Liu is an SDE in Amazon SageMaker. In her spare time, she enjoys reading and outdoor sports.

Suhit Kodgule is a Software Development Engineer with the AWS Artificial Intelligence group working on deep learning frameworks. In his spare time, he enjoys hiking, traveling, and cooking.

Victor Zhu is a Software Engineer in Distributed Deep Learning at Amazon Web Services. He can be found enjoying hiking and board games around the SF Bay Area.

Derya Cavdar works as a software engineer at AWS. Her interests include deep learning and distributed training optimization.

Teng Xu is a Software Development Engineer in the Distributed Training group in AWS AI. He enjoys reading.


Manage your Amazon Lex bot via AWS CloudFormation templates


Amazon Lex is a fully managed artificial intelligence (AI) service with advanced natural language models to design, build, test, and deploy conversational interfaces in applications. It employs advanced deep learning technologies to understand user input, enabling developers to create chatbots, virtual assistants, and other applications that can interact with users in natural language.

Managing your Amazon Lex bots using AWS CloudFormation allows you to create templates defining the bot and all the AWS resources it depends on. AWS CloudFormation provisions and configures those resources on your behalf, removing the risk of human error when deploying bots to new environments. The benefits of using CloudFormation include:

  • Consistency – A CloudFormation template provides a more consistent and automated way to deploy and manage the resources associated with an Amazon Lex bot.
  • Version control – With AWS CloudFormation, you can use version control systems like Git to manage your CloudFormation templates. This allows you to maintain different versions of your bot and roll back to previous versions if needed.
  • Reusability – You can reuse CloudFormation templates across multiple environments, such as development, staging, and production. This saves time and effort in defining the same bot across different environments.
  • Expandability – As your Amazon Lex bot grows in complexity, managing it through the AWS Management Console becomes more challenging. AWS CloudFormation allows for a more streamlined and efficient approach to managing the bot’s definition and resources.
  • Automation – Using a CloudFormation template allows you to automate the deployment process. You can use AWS services like AWS CodePipeline and AWS CodeBuild to build, test, and deploy your Amazon Lex bot automatically.

In this post, we guide you through the steps involved in creating a CloudFormation template for an Amazon Lex V2 bot.

Solution overview

We have chosen the Book Trip bot as our starting point for this exercise. We use a CloudFormation template to create a new bot from scratch, including defining intents, slots, and other required components. Additionally, we explore topics such as version control, aliases, integrating AWS Lambda functions, creating conditional branches, and enabling logging.

Prerequisites

You should have the following prerequisites:

  • An AWS account to create and deploy a CloudFormation template
  • The necessary AWS Identity and Access Management (IAM) permissions to deploy AWS CloudFormation and the resources used in the template
  • Basic knowledge of Amazon Lex, Lambda functions, and associated services
  • Basic knowledge of creating and deploying CloudFormation templates

Create an IAM role

To begin, you need to create an IAM role that the bot will use. You can achieve this by initializing a CloudFormation template and adding the IAM role as a resource. You can use the following template to create the role. If you download the example template and deploy it, you should see that an IAM role has been created. We provide examples of templates as we go through this post and merge them as we get further along.

AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Description: CloudFormation template for book hotel bot.
Resources:
  # 1. IAM role that is used by the bot at runtime
  BotRuntimeRole:    
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - lexv2.amazonaws.com
            Action:
              - "sts:AssumeRole"
      Path: "/"
      Policies:
        - PolicyName: LexRuntimeRolePolicy
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - "polly:SynthesizeSpeech"
                  - "comprehend:DetectSentiment"
                Resource: "*"
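
If you prefer to script the deployment instead of using the AWS CloudFormation console, the following is a minimal boto3 sketch. The stack and file names are illustrative, and a change set is used because the template declares the AWS::Serverless transform.

import boto3

cloudformation = boto3.client("cloudformation")

with open("book-hotel-bot.yaml") as f:  # illustrative template file name
    template_body = f.read()

# Templates that use transforms are deployed through a change set
# (the AWS CLI command `aws cloudformation deploy` does the same under the hood).
change_set = cloudformation.create_change_set(
    StackName="book-hotel-bot",       # illustrative stack name
    ChangeSetName="initial-deployment",
    ChangeSetType="CREATE",           # use "UPDATE" for the later revisions in this post
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_IAM", "CAPABILITY_AUTO_EXPAND"],  # the template creates an IAM role
)

cloudformation.get_waiter("change_set_create_complete").wait(ChangeSetName=change_set["Id"])
cloudformation.execute_change_set(ChangeSetName=change_set["Id"])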

Configure the Amazon Lex bot

Next, you need to add the bot definition. The following is the YAML template for the Amazon Lex bot definition; you construct the necessary components one by one:

Type: AWS::Lex::Bot
Properties:
  AutoBuildBotLocales: Boolean
  BotFileS3Location: 
    S3Location
  BotLocales: 
    - BotLocale
  BotTags: 
    - Tag
  DataPrivacy: 
    DataPrivacy
  Description: String
  IdleSessionTTLInSeconds: Integer
  Name: String
  RoleArn: String
  TestBotAliasSettings: 
    TestBotAliasSettings
  TestBotAliasTags: 
    - Tag

To create a bot that only includes the bot definition without any intent, you can use the following template. Here, you specify the bot’s name, the ARN of the role that you previously created, data privacy settings, and more:

BookHotelBot:
    DependsOn: BotRuntimeRole # The role created in the previous step
    Type: AWS::Lex::Bot
    Properties:
      Name: "BookHotel"
      Description: "Sample Bot to book a hotel"
      RoleArn: !GetAtt BotRuntimeRole.Arn      
      #For each Amazon Lex bot created with the Amazon Lex Model Building Service, you must specify whether your use of Amazon Lex 
      #is related to a website, program, or other application that is directed or targeted, in whole or in part, to children under 
      #age 13 and subject to the Children's Online Privacy Protection Act (COPPA) by specifying true or false in the 
      #childDirected field.
      DataPrivacy:
        ChildDirected: false
      IdleSessionTTLInSeconds: 300

You can download the updated template. Deploying the updated template allows you to create both the role and the bot definition. Note that you’re updating the stack you created in the previous step.

The final step entails defining the BotLocales, which form the majority of the bot’s functionality. This includes, for example, Intents and Slot types. The following is the YAML template:

  CustomVocabulary: 
    CustomVocabulary
  Description: String
  Intents: 
    - Intent
  LocaleId: String
  NluConfidenceThreshold: Number
  SlotTypes: 
    - SlotType
  VoiceSettings: 
    VoiceSettings

In this case, you build the BookHotel intent, which requires a custom slot type for room types. You set the LocaleId, then the VoiceSettings. Then you add the SlotTypes and their corresponding values.

The next step is to define the Intents, starting with the first intent, BookHotel, which involves adding utterances, slots, and slot priorities. The details of these nodes are demonstrated in the provided template. Finally, you add the second intent, which is the FallbackIntent. See the following code:

BotLocales:
        - LocaleId: "en_US"
          Description: "en US locale"
          NluConfidenceThreshold: 0.40
          VoiceSettings:
            VoiceId: "Matthew"
          SlotTypes:
            - Name: "RoomTypeValues"
              Description: "Type of room"
              SlotTypeValues:
                - SampleValue:
                    Value: queen
                - SampleValue:
                    Value: king
                - SampleValue:
                    Value: deluxe
              ValueSelectionSetting:
                ResolutionStrategy: ORIGINAL_VALUE
          Intents:
            - Name: "BookHotel"
              Description: "Intent to book a hotel room"
              SampleUtterances:
                - Utterance: "Book a hotel"
                - Utterance: "I want to make hotel reservations"
                - Utterance: "Book a {Nights} night stay in {Location}"
              IntentConfirmationSetting:
                PromptSpecification:
                  MessageGroupsList:
                    - Message:
                        PlainTextMessage:
                          Value: "Okay, I have you down for a {Nights} night stay in {Location} starting {CheckInDate}.  Shall I book the reservation?"
                  MaxRetries: 3
                  AllowInterrupt: false
                DeclinationResponse:
                  MessageGroupsList:
                    - Message:
                        PlainTextMessage:
                          Value: "Okay, I have cancelled your reservation in progress."
                  AllowInterrupt: false
              SlotPriorities:
                - Priority: 1
                  SlotName: Location
                - Priority: 2
                  SlotName: CheckInDate
                - Priority: 3
                  SlotName: Nights
                - Priority: 4
                  SlotName: RoomType
              Slots:
                - Name: "Location"
                  Description: "Location of the city in which the hotel is located"
                  SlotTypeName: "AMAZON.City"
                  ValueElicitationSetting:
                    SlotConstraint: "Required"
                    PromptSpecification:
                      MessageGroupsList:
                        - Message:
                            PlainTextMessage:
                              Value: "What city will you be staying in?"
                      MaxRetries: 2
                      AllowInterrupt: false
                - Name: "CheckInDate"
                  Description: "Date of check-in"
                  SlotTypeName: "AMAZON.Date"
                  ValueElicitationSetting:
                    SlotConstraint: "Required"
                    PromptSpecification:
                      MessageGroupsList:
                        - Message:
                            PlainTextMessage:
                              Value: "What day do you want to check in?"
                      MaxRetries: 2
                      AllowInterrupt: false
                - Name: "Nights"
                  Description: "Number of nights"
                  SlotTypeName: "AMAZON.Number"
                  ValueElicitationSetting:
                    SlotConstraint: "Required"
                    PromptSpecification:
                      MessageGroupsList:
                        - Message:
                            PlainTextMessage:
                              Value: "How many nights will you be staying?"
                      MaxRetries: 2
                      AllowInterrupt: false
                - Name: "RoomType"
                  Description: "Enumeration of types of rooms that are offered by a hotel."
                  SlotTypeName: "RoomTypeValues"
                  ValueElicitationSetting:
                    SlotConstraint: "Required"
                    PromptSpecification:
                      MessageGroupsList:
                        - Message:
                            PlainTextMessage:
                              Value: "What type of room would you like, queen, king or deluxe?"
                      MaxRetries: 2
                      AllowInterrupt: false
            - Name: "FallbackIntent"
              Description: "Default intent when no other intent matches"
              ParentIntentSignature: "AMAZON.FallbackIntent"

You can download the CloudFormation template for the work done until now. After you update your stack with this template, a functional bot will be deployed. On the Amazon Lex console, you can confirm that there is a draft version of the bot, and a default alias named TestBotAlias has been created.


Create a new bot version and alias

Amazon Lex supports publishing versions of bots, intents, and slot types so that you can control your client applications’ implementation. A version is a numbered snapshot of your bot definition that you can publish for use in different parts of your workflow, such as development, beta deployment, and production. Amazon Lex bots also support aliases. An alias is a pointer to a specific version of a bot. With an alias, you can update your client applications’ version. In practical scenarios, bot aliases are used for blue/green deployments and managing environment-specific configurations like development and production environments.

To illustrate, let’s say you point an alias to version 1 of your bot. When it’s time to update the bot, you can publish version 2 and change the alias to point to the new version. Because your applications use the alias instead of a specific version, all clients receive the new functionality without requiring updates.

Keep in mind that when you modify the CloudFormation template and initiate deployment, the changes are implemented within the draft version, primarily meant for testing. After you complete your testing phase, you can establish a new version to finalize the changes you’ve incorporated so far.

Next, you create a new bot version based on your draft, set up a new alias, and link the version to this alias. The following are the two new resources to add to your template:

BookHotelInitialVersion:
    DependsOn: BookHotelBot
    Type: AWS::Lex::BotVersion
    Properties:
      BotId: !Ref BookHotelBot
      BotVersionLocaleSpecification:
        - LocaleId: en_US
          BotVersionLocaleDetails:
            SourceBotVersion: DRAFT
      Description: Hotel Bot initial version

  BookHotelDemoAlias:
    Type: AWS::Lex::BotAlias
    Properties:
      BotId: !Ref BookHotelBot
      BotAliasName: "BookHotelDemoAlias"
      BotVersion: !GetAtt BookHotelInitialVersion.BotVersion

You can download the new version of the template and deploy it by updating your stack. You can see on the Amazon Lex console that a new version is created and associated with a new alias called BookHotelDemoAlias.


When you create a new version of an Amazon Lex bot, it typically increments the version number sequentially, starting from 1. To discern a specific version, you can refer to its description.


Add a Lambda function

To initialize values or validate user input for your bot, you can add a Lambda function as a code hook to your bot. Similarly, you can use a Lambda function for fulfillment, for example writing data to databases or calling APIs to save the collected information. For more information, refer to Enabling custom logic with AWS Lambda functions.

Let’s add a new resource for the Lambda function to the CloudFormation template. Although it’s generally not advised to embed code in CloudFormation templates, we do so here solely for the sake of making the demo deployment less complicated. See the following code:

HotelBotFunction:
    DependsOn: BotRuntimeRole # So that the Lambda function is ready before the bot deployment
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: book_hotel_lambda
      Runtime: python3.11
      Timeout: 15
      Handler: index.lambda_handler
      InlineCode: |
        import os
        import json

        def close(intent_request):
            intent_request['sessionState']['intent']['state'] = 'Fulfilled'

            message = {"contentType": "PlainText",
                      "content": "Your Booking is confirmed"}

            session_attributes = {}
            sessionState = intent_request['sessionState']
            if 'sessionAttributes' in sessionState:
                session_attributes = sessionState['sessionAttributes']

            requestAttributes = None
            if 'requestAttributes' in intent_request:
                requestAttributes = intent_request['requestAttributes']

            return {
                'sessionState': {
                    'sessionAttributes': session_attributes,
                    'dialogAction': {
                        'type': 'Close'
                    },
                    'intent': intent_request['sessionState']['intent'],
                    'originatingRequestId': 'xxxxxxx-xxxx-xxxx-xxxx'
                },
                'messages':  [message],
                'sessionId': intent_request['sessionId'],
                'requestAttributes': requestAttributes
            }

        def router(event):
            intent_name = event['sessionState']['intent']['name']
            slots = event['sessionState']['intent']['slots']
            if (intent_name == 'BookHotel'):
                # invoke lambda and return result
                return close(event)

            raise Exception(
                'The intent is not supported by Lambda: ' + intent_name)

        def lambda_handler(event, context):
            response = router(event)
            return response

To use this Lambda function for the fulfillment, enable the code hook settings in your intent:

Intents:
  - Name: "BookHotel"
    Description: "Intent to book a hotel room"
    FulfillmentCodeHook:
      Enabled: true
    SampleUtterances:
      - Utterance: "Book a hotel"
      - Utterance: "I want to make hotel reservations"
      - Utterance: "Book a {Nights} night stay in {Location}"

Because you made changes to your bot, you can create a new version of the bot by adding a new resource named BookHotelVersionWithLambda in the template:

BookHotelVersionWithLambda:
    DependsOn: BookHotelInitialVersion
    Type: AWS::Lex::BotVersion
    Properties:
      BotId: !Ref BookHotelBot
      BotVersionLocaleSpecification:
        - LocaleId: en_US
          BotVersionLocaleDetails:
            SourceBotVersion: DRAFT
      Description: Hotel Bot with a lambda function

The Lambda function is associated with a bot alias. Amazon Lex V2 can use one Lambda function per bot alias per language. Therefore, you must update your alias in the template to add the Lambda function resource. You can do so in the BotAliasLocalSettings section. You also need to point the alias to the new version you created. The following code is the modified alias configuration:

  BookHotelDemoAlias:
    Type: AWS::Lex::BotAlias
    Properties:
      BotId: !Ref BookHotelBot
      BotAliasName: "BookHotelDemoAlias"
      BotVersion: !GetAtt BookHotelVersionWithLambda.BotVersion
      # Remove BotAliasLocaleSettings if you aren't concerned with Lambda setup.
      # If you are you can modify the LambdaArn below to get started.
      BotAliasLocaleSettings:
        - LocaleId: en_US
          BotAliasLocaleSetting:
            Enabled: true
            CodeHookSpecification:
              LambdaCodeHook:
                CodeHookInterfaceVersion: "1.0"
                LambdaArn: !GetAtt HotelBotFunction.Arn

Up until now, you have only linked the Lambda function with the alias. However, you need to grant permission to allow the alias to invoke the Lambda function. In the following code, you add the Lambda invoke permission for Amazon Lex and specify the alias ARN as the source ARN:

  LexInvokeLambdaPermission:
    Type: AWS::Lambda::Permission
    Properties:
      Action: "lambda:InvokeFunction"
      FunctionName: !GetAtt HotelBotFunction.Arn
      Principal: "lexv2.amazonaws.com"
      SourceArn: !GetAtt BookHotelDemoAlias.Arn

You can download the latest version of the template. After updating your stack with this version, you will have an Amazon Lex bot integrated with a Lambda function.


Conditional branches

Now let’s explore the conditional branch feature of the Amazon Lex bot and consider a scenario where booking more than five nights in Seattle is not allowed for the next week. As per the business requirement, the conversation should end with an appropriate message if the user attempts to book more than five nights in Seattle. The conditional branch for that is represented in the CloudFormation template under the SlotCaptureSetting:

- Name: "Nights"
                  Description: "Number of nights."
                  SlotTypeName: "AMAZON.Number"
                  ValueElicitationSetting:
                    SlotConstraint: "Required"
                    SlotCaptureSetting:
                      CaptureConditional:
                        DefaultBranch:
                          NextStep:
                            DialogAction:
                              Type: "ElicitSlot"
                              SlotToElicit: "RoomType"
                        ConditionalBranches:
                          - Name: "Branch1"
                            Condition:
                              ExpressionString: '{Nights}>5 AND {Location} = "Seattle"'
                            Response:
                              AllowInterrupt: true
                              MessageGroupsList:
                                - Message:
                                    PlainTextMessage:
                                      Value: "Sorry, we cannot book more than five nights in {Location} right now."
                            NextStep:
                              DialogAction:
                                Type: "EndConversation"
                        IsActive: true

                    PromptSpecification:
                      MessageGroupsList:
                        - Message:
                            PlainTextMessage:
                              Value: "How many nights will you be staying?"
                      MaxRetries: 2
                      AllowInterrupt: false

Because you changed the bot definition, you need to create a new version in the template and link it with the alias. This is a temporary modification because the business plans to allow large bookings in Seattle soon. The following are the two new resources you add to the template:

BookHotelConditionalBranches:
    DependsOn: BookHotelVersionWithLambda
    Type: AWS::Lex::BotVersion
    Properties:
      BotId: !Ref BookHotelBot
      BotVersionLocaleSpecification:
        - LocaleId: en_US
          BotVersionLocaleDetails:
            SourceBotVersion: DRAFT
      Description: Hotel Bot Version with conditional branches

  BookHotelDemoAlias:
    Type: AWS::Lex::BotAlias
    Properties:
      BotId: !Ref BookHotelBot
      BotAliasName: "BookHotelDemoAlias"
      BotVersion: !GetAtt BookHotelConditionalBranches.BotVersion
      # Remove BotAliasLocaleSettings if you aren't concerned with Lambda setup.
      # If you are you can modify the LambdaArn below to get started.
      BotAliasLocaleSettings:
        - LocaleId: en_US
          BotAliasLocaleSetting:
            Enabled: true
            CodeHookSpecification:
              LambdaCodeHook:
                CodeHookInterfaceVersion: "1.0"
                LambdaArn: !GetAtt HotelBotFunction.Arn

You can download the updated template. After you update your stack with this template version, the alias will be directed to the version incorporating the conditional branching feature. To undo this modification, you can update the alias to revert back to the previous version.


Logs

You can also enable logs for your Amazon Lex bot. To do so, you must update the bot’s role to grant permissions for writing Amazon CloudWatch logs. The following is an example of adding a CloudWatch policy to the role:

BotRuntimeRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - lexv2.amazonaws.com
            Action:
              - "sts:AssumeRole"
      Path: "/"
      Policies:
        - PolicyName: LexRuntimeRolePolicy
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - "polly:SynthesizeSpeech"
                  - "comprehend:DetectSentiment"
                Resource: "*"
        - PolicyName: CloudWatchPolicy
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - "logs:CreateLogStream"
                  - "logs:PutLogEvents"
                Resource: "*"

To ensure consistent and predictable behavior, you should be as specific as possible when defining resource names and properties in CloudFormation templates. This is because the use of the wildcard character (*) in CloudFormation templates can pose potential security risks and lead to unintended consequences. Therefore, it’s recommended to avoid using wildcards and instead use explicit values wherever possible.

Next, you create a CloudWatch log group resource, as shown in the following code, to direct your logs to this group:

  #Log Group
  LexLogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: /lex/hotel-bot
      RetentionInDays: 5

Finally, you update your alias to enable conversation log settings:

BookHotelDemoAlias:
    Type: AWS::Lex::BotAlias
    Properties:
      BotId: !Ref BookHotelBot
      BotAliasName: "BookHotelDemoAlias"
      BotVersion: !GetAtt BookHotelConditionalBranches.BotVersion
      BotAliasLocaleSettings:
        - LocaleId: en_US
          BotAliasLocaleSetting:
            Enabled: true
            CodeHookSpecification:
              LambdaCodeHook:
                CodeHookInterfaceVersion: "1.0"
                LambdaArn: !GetAtt HotelBotFunction.Arn
      ConversationLogSettings:
        TextLogSettings:
          - Destination:
              CloudWatch:
                CloudWatchLogGroupArn: !GetAtt LexLogGroup.Arn
                LogPrefix: bookHotel
            Enabled: true

When you update the stack with this template, you enable the conversation logs for your bot. A new version is not created in this step because there are no changes to your bot resource. You can download the latest version of the template.

Clean up

To prevent incurring charges in the future, delete the CloudFormation stack you created.
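
For example, with boto3 (using the stack name assumed in the deployment sketch earlier in this post):

import boto3

cloudformation = boto3.client("cloudformation")

# Deletes the bot, alias, versions, Lambda function, log group, and IAM role created by the stack
cloudformation.delete_stack(StackName="book-hotel-bot")
cloudformation.get_waiter("stack_delete_complete").wait(StackName="book-hotel-bot")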

Conclusion

In this post, we discussed the step-by-step process to create a CloudFormation template for an Amazon Lex V2 bot. Initially, we deployed a basic bot, then we explored the potential of aliases and versions and how to use them efficiently with templates. Next, we learned how to integrate a Lambda function with an Amazon Lex V2 bot and implemented conditional branching in the bot’s conversation flow to accommodate business requirements. Finally, we added logging features by creating a CloudWatch log group resource and updating the bot’s role with the necessary permissions.

The template allows for the straightforward deployment and management of the bot, with the ability to revert changes as necessary. Overall, the CloudFormation template is useful for managing and optimizing an Amazon Lex V2 bot.

As the next step, you can explore sample Amazon Lex bots and apply the techniques discussed in this post to convert them into CloudFormation templates. This hands-on practice will solidify your understanding of managing Amazon Lex V2 bots through infrastructure as code.


About the Authors

Thomas Rindfuss is a Sr. Solutions Architect on the Amazon Lex team. He invents, develops, prototypes, and evangelizes new technical features and solutions for Language AI services that improve the customer experience and ease adoption.

Rijeesh Akkambeth Chathoth is a Professional Services Consultant at AWS. He helps customers achieve their desired business outcomes in the contact center space by leveraging Amazon Connect, Amazon Lex, and generative AI features.


A secure approach to generative AI with AWS


Generative artificial intelligence (AI) is transforming the customer experience in industries across the globe. Customers are building generative AI applications using large language models (LLMs) and other foundation models (FMs), which enhance customer experiences, transform operations, improve employee productivity, and create new revenue channels.

FMs and the applications built around them represent extremely valuable investments for our customers. They’re often used with highly sensitive business data, like personal data, compliance data, operational data, and financial information, to optimize the model’s output. The biggest concern we hear from customers as they explore the advantages of generative AI is how to protect their highly sensitive data and investments. Because their data and model weights are incredibly valuable, customers require them to stay protected, secure, and private, whether that’s from their own administrator’s accounts, their customers, vulnerabilities in software running in their own environments, or even their cloud service provider from having access.

At AWS, our top priority is safeguarding the security and confidentiality of our customers’ workloads. We think about security across the three layers of our generative AI stack:

  • Bottom layer – Provides the tools for building and training LLMs and other FMs
  • Middle layer – Provides access to all the models along with tools you need to build and scale generative AI applications
  • Top layer – Includes applications that use LLMs and other FMs to make work stress-free by writing and debugging code, generating content, deriving insights, and taking action

Each layer is important to making generative AI pervasive and transformative.

With the AWS Nitro System, we delivered a first-of-its-kind innovation on behalf of our customers. The Nitro System is an unparalleled computing backbone for AWS, with security and performance at its core. Its specialized hardware and associated firmware are designed to enforce restrictions so that nobody, including anyone in AWS, can access your workloads or data running on your Amazon Elastic Compute Cloud (Amazon EC2) instances. Customers have benefited from this confidentiality and isolation from AWS operators on all Nitro-based EC2 instances since 2017.

By design, there is no mechanism for any Amazon employee to access a Nitro EC2 instance that customers use to run their workloads, or to access data that customers send to a machine learning (ML) accelerator or GPU. This protection applies to all Nitro-based instances, including instances with ML accelerators like AWS Inferentia and AWS Trainium, and instances with GPUs like P4, P5, G5, and G6.

The Nitro System enables Elastic Fabric Adapter (EFA), which uses the AWS-built AWS Scalable Reliable Datagram (SRD) communication protocol for cloud-scale elastic and large-scale distributed training, enabling the only always-encrypted Remote Direct Memory Access (RDMA) capable network. All communication through EFA is encrypted with VPC encryption without incurring any performance penalty.

The design of the Nitro System has been validated by the NCC Group, an independent cybersecurity firm. AWS delivers a high level of protection for customer workloads, and we believe this is the level of security and confidentiality that customers should expect from their cloud provider. This level of protection is so critical that we’ve added it in our AWS Service Terms to provide an additional assurance to all of our customers.

Innovating secure generative AI workloads using AWS industry-leading security capabilities

From day one, AWS AI infrastructure and services have had built-in security and privacy features to give you control over your data. As customers move quickly to implement generative AI in their organizations, you need to know that your data is being handled securely across the AI lifecycle, including data preparation, training, and inferencing. The security of model weights—the parameters that a model learns during training that are critical for its ability to make predictions—is paramount to protecting your data and maintaining model integrity.

This is why it is critical for AWS to continue to innovate on behalf of our customers to raise the bar on security across each layer of the generative AI stack. To do this, we believe that you must have security and confidentiality built in across each layer of the generative AI stack. You need to be able to secure the infrastructure to train LLMs and other FMs, build securely with tools to run LLMs and other FMs, and run applications that use FMs with built-in security and privacy that you can trust.

At AWS, securing AI infrastructure means zero access to sensitive AI data, such as AI model weights and data processed with those models, by any unauthorized person, whether at the infrastructure operator or at the customer. It consists of three key principles:

  1. Complete isolation of the AI data from the infrastructure operator – The infrastructure operator must have no ability to access customer content and AI data, such as AI model weights and data processed with models.
  2. Ability for customers to isolate AI data from themselves – The infrastructure must provide a mechanism to allow model weights and data to be loaded into hardware, while remaining isolated and inaccessible from customers’ own users and software.
  3. Protected infrastructure communications – The communication between devices in the ML accelerator infrastructure must be protected. All externally accessible links between the devices must be encrypted.

The Nitro System fulfills the first principle of Secure AI Infrastructure by isolating your AI data from AWS operators. The second principle provides you with a way to remove administrative access of your own users and software to your AI data. AWS not only offers you a way to achieve that, but we have also made it straightforward and practical by investing in building an integrated solution between AWS Nitro Enclaves and AWS Key Management Service (AWS KMS). With Nitro Enclaves and AWS KMS, you can encrypt your sensitive AI data using keys that you own and control, store that data in a location of your choice, and securely transfer the encrypted data to an isolated compute environment for inferencing. Throughout this entire process, the sensitive AI data is encrypted and isolated from your own users and software on your EC2 instance, and AWS operators cannot access this data. Use cases that have benefited from this flow include running LLM inferencing in an enclave. Until now, Nitro Enclaves have operated only in the CPU, limiting the potential for larger generative AI models and more complex processing.
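
The following Python sketch illustrates the envelope-encryption pattern described above in simplified form; it is not the Nitro Enclaves attestation flow itself. It generates a data key under a customer-managed AWS KMS key, encrypts model weights locally, and stores only ciphertext plus the encrypted data key. The key ARN, file name, and bucket name are hypothetical placeholders.

import os

import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Hypothetical placeholders: replace with your own key ARN and bucket
KMS_KEY_ID = "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"
BUCKET = "my-protected-model-artifacts"

kms = boto3.client("kms")
s3 = boto3.client("s3")

# 1. Generate a data key under a KMS key that you own and control
data_key = kms.generate_data_key(KeyId=KMS_KEY_ID, KeySpec="AES_256")

# 2. Encrypt the sensitive artifact (for example, model weights) locally with
#    the plaintext data key; only the encrypted form is persisted
with open("model_weights.bin", "rb") as f:
    plaintext_weights = f.read()
nonce = os.urandom(12)
ciphertext = AESGCM(data_key["Plaintext"]).encrypt(nonce, plaintext_weights, None)

# 3. Store the encrypted weights and the encrypted data key in a location of
#    your choice; decryption requires a call back to KMS, which the key policy
#    can gate (for example, on enclave attestation)
s3.put_object(Bucket=BUCKET, Key="weights/model_weights.bin.enc", Body=nonce + ciphertext)
s3.put_object(Bucket=BUCKET, Key="weights/data_key.enc", Body=data_key["CiphertextBlob"])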

We announced our plans to extend this Nitro end-to-end encrypted flow to include first-class integration with ML accelerators and GPUs, fulfilling the third principle. You will be able to decrypt and load sensitive AI data into an ML accelerator for processing while providing isolation from your own operators and verified authenticity of the application used for processing the AI data. Through the Nitro System, you can cryptographically validate your applications to AWS KMS and decrypt data only when the necessary checks pass. This enhancement allows AWS to offer end-to-end encryption for your data as it flows through generative AI workloads.

We plan to offer this end-to-end encrypted flow in the upcoming AWS-designed Trainium2 as well as GPU instances based on NVIDIA’s upcoming Blackwell architecture, which both offer secure communications between devices, the third principle of Secure AI Infrastructure. AWS and NVIDIA are collaborating closely to bring a joint solution to market, including NVIDIA’s new Blackwell GPU platform, which couples NVIDIA’s GB200 NVL72 solution with the Nitro System and EFA technologies to provide an industry-leading solution for securely building and deploying next-generation generative AI applications.

Advancing the future of generative AI security

Today, tens of thousands of customers are using AWS to experiment and move transformative generative AI applications into production. Generative AI workloads contain highly valuable and sensitive data that needs this level of protection from your own operators and the cloud service provider. Customers using AWS Nitro-based EC2 instances have received this level of protection and isolation from AWS operators since 2017, when we launched our innovative Nitro System.

At AWS, we’re continuing that innovation as we invest in building performant and accessible capabilities to make it practical for our customers to secure their generative AI workloads across the three layers of the generative AI stack, so that you can focus on what you do best: building and extending the uses of generative AI to more areas. Learn more here.


About the authors

Anthony Liguori is an AWS VP and Distinguished Engineer for EC2.

Colm MacCárthaigh is an AWS VP and Distinguished Engineer for EC2.

Read More

Cost-effective document classification using the Amazon Titan Multimodal Embeddings Model

Cost-effective document classification using the Amazon Titan Multimodal Embeddings Model

Organizations across industries want to categorize and extract insights from high volumes of documents of different formats. Manually processing these documents to classify and extract information remains expensive, error-prone, and difficult to scale. Advances in generative artificial intelligence (AI) have given rise to intelligent document processing (IDP) solutions that can automate document classification and create a cost-effective classification layer capable of handling diverse, unstructured enterprise documents.

Categorizing documents is an important first step in IDP systems. It helps you determine the next set of actions to take depending on the type of document. For example, during the claims adjudication process, the accounts payable team receives the invoice, whereas the claims department manages the contract or policy documents. Traditional rule engines or ML-based classification can classify the documents, but often reach a limit in the document formats they can handle and in supporting the dynamic addition of new document classes. For more information, see Amazon Comprehend document classifier adds layout support for higher accuracy.

In this post, we discuss document classification using the Amazon Titan Multimodal Embeddings model to classify any document types without the need for training.

Amazon Titan Multimodal Embeddings

Amazon recently introduced Titan Multimodal Embeddings in Amazon Bedrock. This model can create embeddings for images and text, enabling the creation of document embeddings to be used in new document classification workflows.

It generates optimized vector representations of documents scanned as images. By encoding both visual and textual components into unified numerical vectors that encapsulate semantic meaning, it enables rapid indexing, powerful contextual search, and accurate classification of documents.

As new document templates and types emerge in business workflows, you can simply invoke the Amazon Bedrock API to dynamically vectorize them and add them to your IDP system to rapidly enhance document classification capabilities.

Solution overview

Let’s examine the following document classification solution with the Amazon Titan Multimodal Embeddings model. For optimal performance, you should customize the solution to your specific use case and existing IDP pipeline setup.

This solution classifies documents using vector embedding semantic search by matching an input document to an already indexed gallery of documents. We use the following key components:

  • Embeddings – Embeddings are numerical representations of real-world objects that machine learning (ML) and AI systems use to understand complex knowledge domains like humans do.
  • Vector databases – Vector databases are used to store embeddings. Vector databases efficiently index and organize the embeddings, enabling fast retrieval of similar vectors based on distance metrics like Euclidean distance or cosine similarity.
  • Semantic search – Semantic search works by considering the context and meaning of the input query and its relevance to the content being searched. Vector embeddings are an effective way to capture and retain the contextual meaning of text and images. In our solution, when an application wants to perform a semantic search, the search document is first converted into an embedding. The vector database with relevant content is then queried to find the most similar embeddings. A minimal distance calculation is sketched after this list.
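
To make the distance metrics concrete, the following is a minimal, self-contained sketch comparing two toy embedding vectors. The values are made up for illustration; real Amazon Titan Multimodal Embeddings vectors are 1024-dimensional.

import numpy as np

# Toy embeddings standing in for two documents
doc_a = np.array([0.12, 0.85, 0.33], dtype=np.float32)
doc_b = np.array([0.10, 0.80, 0.40], dtype=np.float32)

# Euclidean (L2) distance: smaller means more similar
l2_distance = np.linalg.norm(doc_a - doc_b)

# Cosine similarity: closer to 1 means more similar
cosine_sim = np.dot(doc_a, doc_b) / (np.linalg.norm(doc_a) * np.linalg.norm(doc_b))

print(f"L2 distance: {l2_distance:.4f}, cosine similarity: {cosine_sim:.4f}")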

In the labeling process, a sample set of business documents, such as invoices, bank statements, or prescriptions, is converted into embeddings using the Amazon Titan Multimodal Embeddings model and stored in a vector database against predefined labels. The Amazon Titan Multimodal Embeddings model was trained using the Euclidean L2 distance metric, so for best results the vector database you use should support this metric.

The following architecture diagram illustrates how you can use the Amazon Titan Multimodal Embeddings model with documents in an Amazon Simple Storage Service (Amazon S3) bucket for image gallery creation.

The workflow consists of the following steps:

  1. A user or application uploads a sample document image with classification metadata to a document image gallery. An S3 prefix or S3 object metadata can be used to classify gallery images.
  2. An Amazon S3 object notification event invokes the embedding AWS Lambda function.
  3. The Lambda function reads the document image and translates the image into embeddings by calling Amazon Bedrock and using the Amazon Titan Multimodal Embeddings model (a minimal handler sketch follows this list).
  4. Image embeddings, along with document classification, are stored in the vector database.
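
The following is a minimal sketch of what such an embedding Lambda function might look like. The prefix-based labeling convention and the store_in_vector_db helper are assumptions standing in for your own gallery layout and vector database integration.

import base64
import json

import boto3

s3 = boto3.client("s3")
bedrock = boto3.client("bedrock-runtime")

def lambda_handler(event, context):
    # Triggered by the S3 object-created notification for the gallery bucket
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    # Read the sample document image and base64-encode it for the Bedrock API
    image_bytes = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    image_b64 = base64.b64encode(image_bytes).decode("utf-8")

    # Generate the multimodal embedding with the Amazon Titan model
    response = bedrock.invoke_model(
        body=json.dumps({"inputText": None, "inputImage": image_b64}),
        modelId="amazon.titan-embed-image-v1",
        accept="application/json",
        contentType="application/json",
    )
    embedding = json.loads(response["body"].read())["embedding"]

    # Example labeling convention: the prefix under the gallery root names the
    # class, for example "sampleGallery/Invoices/invoice-001.png"
    label = key.split("/")[1]

    # store_in_vector_db is a placeholder for persisting the embedding and its
    # label in the vector database of your choice
    store_in_vector_db(embedding, label)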


When a new document needs classification, the same embedding model is used to convert the query document into an embedding. Then, a semantic similarity search is performed on the vector database using the query embedding. The label retrieved for the top embedding match becomes the classification label for the query document.

The following architecture diagram illustrates how to use the Amazon Titan Multimodal Embeddings model with documents in an S3 bucket for image classification.

The workflow consists of the following steps:

  1. Documents that require classification are uploaded to an input S3 bucket.
  2. The classification Lambda function receives the Amazon S3 object notification.
  3. The Lambda function translates the image to an embedding by calling the Amazon Bedrock API.
  4. The vector database is searched for a matching document using semantic search. Classification of the matching document is used to classify the input document.
  5. The input document is moved to the target S3 directory or prefix using the classification retrieved from the vector database search (see the handler sketch after this list).
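
A classification Lambda function could look like the following sketch. Here, image_to_embedding wraps the same Bedrock call shown in the previous sketch, search_vector_db stands in for a top-1 similarity search against your vector database, and the target bucket name is a hypothetical placeholder.

import boto3

s3 = boto3.client("s3")

DOC_CLASSES = ["Closing Disclosure", "Invoices", "Social Security Card", "W4", "Bank Statement"]
TARGET_BUCKET = "classified-documents"  # hypothetical output bucket

def lambda_handler(event, context):
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    # Embed the incoming document and find its closest gallery match
    embedding = image_to_embedding(bucket, key)            # wraps bedrock.invoke_model
    class_id, distance = search_vector_db(embedding, k=1)  # placeholder similarity search
    label = DOC_CLASSES[class_id]

    # Move the document into a prefix named after its classification
    s3.copy_object(
        Bucket=TARGET_BUCKET,
        Key=f"{label}/{key.split('/')[-1]}",
        CopySource={"Bucket": bucket, "Key": key},
    )
    s3.delete_object(Bucket=bucket, Key=key)

    return {"document": key, "classification": label, "distance": float(distance)}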


To help you test the solution with your own documents, we have created an example Python Jupyter notebook, which is available on GitHub.

Prerequisites

To run the notebook, you need an AWS account with appropriate AWS Identity and Access Management (IAM) permissions to call Amazon Bedrock. Additionally, on the Model access page of the Amazon Bedrock console, make sure that access is granted for the Amazon Titan Multimodal Embeddings model.

Implementation

In the following steps, replace each user input placeholder with your own information:

  1. Create the vector database. In this solution, we use an in-memory FAISS database, but you could use an alternative vector database. Amazon Titan’s default dimension size is 1024.
import faiss
# Flat L2 index sized for the 1024-dimensional Titan Multimodal Embeddings vectors
index = faiss.IndexFlatL2(1024)
indexIDMap = faiss.IndexIDMap(index)
  2. After the vector database is created, enumerate over the sample documents, creating an embedding for each and storing it in the vector database.
  3. Test with your documents. Replace the folders in the following code with your own folders that contain known document types:
DOC_CLASSES: list[str] = ["Closing Disclosure", "Invoices", "Social Security Card", "W4", "Bank Statement"]

getDocumentsandIndex("sampleGallery/ClosingDisclosure", DOC_CLASSES.index("Closing Disclosure"))
getDocumentsandIndex("sampleGallery/Invoices", DOC_CLASSES.index("Invoices"))
getDocumentsandIndex("sampleGallery/SSCards", DOC_CLASSES.index("Social Security Card"))
getDocumentsandIndex("sampleGallery/W4", DOC_CLASSES.index("W4"))
getDocumentsandIndex("sampleGallery/BankStatements", DOC_CLASSES.index("Bank Statement"))
  4. Using the Boto3 library, call Amazon Bedrock. The variable inputImageB64 is a base64-encoded representation of your document image. The response from Amazon Bedrock contains the embedding.
import json

import boto3

# Replace 'Region' with your AWS Region, for example 'us-east-1'
bedrock = boto3.client(
    service_name='bedrock-runtime',
    region_name='Region'
)

# inputImageB64 is the base64-encoded document image
request_body = {}
request_body["inputText"] = None  # not using any text
request_body["inputImage"] = inputImageB64
body = json.dumps(request_body)

response = bedrock.invoke_model(
    body=body,
    modelId="amazon.titan-embed-image-v1",
    accept="application/json",
    contentType="application/json")
response_body = json.loads(response.get("body").read())
embedding = response_body["embedding"]  # the 1024-dimensional embedding vector
  5. Add the embeddings to the vector database, with a class ID that represents a known document type:
# embeddings must be a float32 array of shape (n, 1024); classID a matching array of int64 IDs
indexIDMap.add_with_ids(embeddings, classID)
  6. With the vector database populated with images (representing our gallery), you can uncover similarities with new documents. For example, the following is the syntax used for the search; k=1 tells FAISS to return only the top match.
distances, class_ids = indexIDMap.search(embeddings, k=1)

In addition, the Euclidean L2 distance between the query image and the matched image is returned. If the images are an exact match, this value is 0; the larger the value, the less similar the images are.
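
To close the loop, the following sketch maps the top match back to a human-readable label. The variable names follow the snippets above, and the distance threshold is an assumption to tune against your own gallery.

# distances and class_ids come from the indexIDMap.search call above
label = DOC_CLASSES[int(class_ids[0][0])]
l2_distance = float(distances[0][0])

# Optionally reject weak matches instead of forcing a label; the threshold here is illustrative only
MAX_DISTANCE = 1.0
if l2_distance > MAX_DISTANCE:
    label = "Unknown"

print(f"Predicted class: {label} (L2 distance: {l2_distance:.4f})")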

Additional considerations

In this section, we discuss additional considerations for using the solution effectively. This includes data privacy, security, integration with existing systems, and cost estimates.

Data privacy and security

The AWS shared responsibility model applies to data protection in Amazon Bedrock. As described in this model, AWS is responsible for protecting the global infrastructure that runs all of the AWS Cloud. Customers are responsible for maintaining control over their content that is hosted on this infrastructure. As a customer, you are responsible for the security configuration and management tasks for the AWS services that you use.

Data protection in Amazon Bedrock

Amazon Bedrock doesn’t use customer prompts and continuations to train AWS models or share them with third parties. Amazon Bedrock doesn’t store or log customer data in its service logs. Model providers don’t have access to Amazon Bedrock logs or to customer prompts and continuations. As a result, the images used for generating embeddings through the Amazon Titan Multimodal Embeddings model are not stored, used to train AWS models, or distributed externally. Additionally, other usage data, such as timestamps and logged account IDs, is excluded from model training.

Integration with existing systems

The Amazon Titan Multimodal Embeddings model was trained using the Euclidean L2 distance metric, so the vector database you use should be compatible with this metric.

Cost estimate

At the time of writing this post, as per Amazon Bedrock Pricing for the Amazon Titan Multimodal Embeddings model, the following are the estimated costs using on-demand pricing for this solution:

  • One-time indexing cost – $0.06 for a single indexing run, assuming a gallery of 1,000 images
  • Classification cost – $6 for 100,000 input images per month

Clean up

To avoid incurring future charges, delete the resources you created, such as the Amazon SageMaker notebook instance, when not in use.

Conclusion

In this post, we explored how you can use the Amazon Titan Multimodal Embeddings model to build an inexpensive solution for document classification in the IDP workflow. We demonstrated how to create an image gallery of known documents and perform similarity searches with new documents to classify them. We also discussed the benefits of using multimodal image embeddings for document classification, including their ability to handle diverse document types, scalability, and low latency.

As new document templates and types emerge in business workflows, developers can invoke the Amazon Bedrock API to vectorize them dynamically and append to their IDP systems to rapidly enhance document classification capabilities. This creates an inexpensive, infinitely scalable classification layer that can handle even the most diverse, unstructured enterprise documents.

Overall, this post provides a roadmap for building an inexpensive solution for document classification in the IDP workflow using Amazon Titan Multimodal Embeddings.

As next steps, check out What is Amazon Bedrock to start using the service. And follow Amazon Bedrock on the AWS Machine Learning Blog to keep up to date with new capabilities and use cases for Amazon Bedrock.


About the Authors

Sumit Bhati is a Senior Customer Solutions Manager at AWS who specializes in expediting the cloud journey for enterprise customers. Sumit is dedicated to assisting customers through every phase of their cloud adoption, from accelerating migrations to modernizing workloads and facilitating the integration of innovative practices.

David Girling is a Senior AI/ML Solutions Architect with over 20 years of experience in designing, leading, and developing enterprise systems. David is part of a specialist team that focuses on helping customers learn, innovate, and utilize AWS AI/ML services with their data for their use cases.

Ravi Avula is a Senior Solutions Architect at AWS focusing on Enterprise Architecture. Ravi has 20 years of experience in software engineering and has held several leadership roles in software engineering and software architecture in the payments industry.

George Belsian is a Senior Cloud Application Architect at AWS. He is passionate about helping customers accelerate their modernization and cloud adoption journey. In his current role, George works alongside customer teams to strategize, architect, and develop innovative, scalable solutions.

Read More

AWS at NVIDIA GTC 2024: Accelerate innovation with generative AI on AWS

AWS at NVIDIA GTC 2024: Accelerate innovation with generative AI on AWS

AWS was delighted to present to and connect with over 18,000 in-person and 267,000 virtual attendees at NVIDIA GTC, a global artificial intelligence (AI) conference that took place in March 2024 in San Jose, California, returning to a hybrid, in-person experience for the first time since 2019.

AWS has had a long-standing collaboration with NVIDIA for over 13 years. AWS was the first Cloud Service Provider (CSP) to offer NVIDIA GPUs in the public cloud, and remains among the first to deploy NVIDIA’s latest technologies.

Looking back at AWS re:Invent 2023, Jensen Huang, founder and CEO of NVIDIA, chatted with AWS CEO Adam Selipsky on stage, discussing how NVIDIA and AWS are working together to enable millions of developers to access powerful technologies needed to rapidly innovate with generative AI. NVIDIA is known for its cutting-edge accelerators and full-stack solutions that contribute to advancements in AI. The company is combining this expertise with the highly scalable, reliable, and secure AWS Cloud infrastructure to help customers run advanced graphics, machine learning, and generative AI workloads at an accelerated pace.

The collaboration between AWS and NVIDIA further expanded at GTC 2024, with the CEOs from both companies sharing their perspectives on the collaboration and state of AI in a press release:

“The deep collaboration between our two organizations goes back more than 13 years, when together we launched the world’s first GPU cloud instance on AWS, and today we offer the widest range of NVIDIA GPU solutions for customers,” says Adam Selipsky, CEO of AWS. “NVIDIA’s next-generation Grace Blackwell processor marks a significant step forward in generative AI and GPU computing. When combined with AWS’s powerful Elastic Fabric Adapter networking, Amazon EC2 UltraClusters’ hyper-scale clustering, and our unique AWS Nitro System’s advanced virtualization and security capabilities, we make it possible for customers to build and run multi-trillion parameter large language models faster, at massive scale, and more securely than anywhere else. Together, we continue to innovate to make AWS the best place to run NVIDIA GPUs in the cloud.”

“AI is driving breakthroughs at an unprecedented pace, leading to new applications, business models, and innovation across industries,” says Jensen Huang, founder and CEO of NVIDIA. “Our collaboration with AWS is accelerating new generative AI capabilities and providing customers with unprecedented computing power to push the boundaries of what’s possible.”

Joint announcements and keynote

On the first day of NVIDIA GTC, AWS and NVIDIA made a joint announcement focused on their strategic collaboration to advance generative AI. Huang highlighted the AWS and NVIDIA collaboration and the joint announcements on a slide during his keynote, which had over 21 million views within the first 72 hours.

Media coverage

By March 22, AWS’s announcement with NVIDIA had generated 104 articles mentioning AWS and Amazon. The vast majority of the coverage mentioned AWS’s plans to offer Blackwell-based instances. Adam Selipsky appeared on CNBC’s Mad Money to discuss the long-standing collaboration between AWS and NVIDIA and the many other ways AWS is innovating in generative AI, stating that AWS has been the first to bring many of NVIDIA’s GPUs to the cloud to drive efficiency and scalability for customers.

Project Ceiba has also been a focus in media coverage. Forbes referred to Project Ceiba as the “most exciting” project by AWS and NVIDIA, stating that it “should accelerate the pace of innovation in AI, making it possible to tackle more complex problems, develop more sophisticated models, and achieve previously unattainable breakthroughs.” The Next Platform ran an in-depth piece on Ceiba, stating that “the size and the aggregate compute of Ceiba cluster are both being radically expanded, which will give AWS a very large supercomputer in one of its data centers” and NVIDIA will use it to do AI research, among other things.

Live from GTC

“Live from GTC” was an on-site studio at GTC for invited speakers to have a fireside chat with tech influencers like VentureBeat. Chetan Kapoor, Director of Product Management for Amazon EC2 at AWS, was interviewed by VentureBeat at the Live from GTC studio, where he discussed AWS’s presence and highlighted key announcements at GTC.

The AWS booth and sessions

The AWS booth showcased generative AI services, such as LLMs from Anthropic and Cohere on Amazon Bedrock, PartyRock, Amazon Q, Amazon SageMaker JumpStart, and more.

AWS presence with partners and customers

During GTC, AWS invited 23 partners and customers to present solution demos at its booth, each with either a dedicated demo kiosk or a 30-minute in-booth session. These partners and customers included Ansys, Anthropic, Articul8, Bria.ai, Cohere, Deci, Deepbrain.AI, Denali Advanced Integration, Ganit, Hugging Face, Lilt, Linker Vision, Mavenir, MCE, Media.Monks, Modular, NVIDIA, Perplexity, Quantiphi, Run.ai, Salesforce, Second Spectrum, and Slalom.

Among them, high-potential early-stage generative AI startups from across the globe were showcased with dedicated kiosks at the AWS booth. The AWS Startups team works closely with these companies by investing in and supporting their growth, offering resources through programs like AWS Activate.

AWS Generative AI Competency

NVIDIA was one of the 45 launch partners for the new AWS Generative AI Competency program. The Generative AI Center of Excellence for AWS Partners team members were on site at the AWS booth, presenting this program for both existing and potential AWS partners. The program offers valuable resources along with best practices for all AWS partners to build, market, and sell generative AI solutions jointly with AWS.

Additional resources

Watch a video recap of the AWS presence at NVIDIA GTC 2024. For additional resources about the AWS and NVIDIA collaboration, refer to the AWS at NVIDIA GTC 2024 resource hub.


About the Author

Julie Tang is the Senior Global Partner Marketing Manager for Generative AI at Amazon Web Services (AWS), where she collaborates closely with NVIDIA to plan and execute partner marketing initiatives focused on generative AI. Throughout her tenure at AWS, she has held various partner marketing roles, including Global IoT Solutions, AWS Partner Solution Factory, and Sr. Campaign Manager in Americas Field Marketing. Prior to AWS, Julie served as the Marketing Director at Segway. She holds a Master’s degree in Communications Management with a focus on marketing and entertainment management from the University of Southern California, and dual Bachelor’s degrees in Law and Broadcast Journalism from Fudan University.

Read More