Amazon SageMaker inference launches faster auto scaling for generative AI models

Today, we are excited to announce a new capability in Amazon SageMaker inference that can help you reduce the time it takes for your generative artificial intelligence (AI) models to scale automatically. You can now use sub-minute metrics and significantly reduce overall scaling latency for generative AI models. With this enhancement, you can improve the responsiveness of your generative AI applications as demand fluctuates.

The rise of foundation models (FMs) and large language models (LLMs) has brought new challenges to generative AI inference deployment. These advanced models often take seconds to process, while sometimes handling only a limited number of concurrent requests. This creates a critical need for rapid detection and auto scaling to maintain business continuity. Organizations implementing generative AI seek comprehensive solutions that address multiple concerns: reducing infrastructure costs, minimizing latency, and maximizing throughput to meet the demands of these sophisticated models. However, they prefer to focus on solving business problems rather than doing the undifferentiated heavy lifting to build complex inference platforms from the ground up.

SageMaker provides industry-leading capabilities to address these inference challenges. It offers endpoints for generative AI inference that reduce FM deployment costs by 50% on average and latency by 20% on average by optimizing the use of accelerators. The SageMaker inference optimization toolkit, a fully managed model optimization feature in SageMaker, can deliver up to two times higher throughput while reducing costs by approximately 50% for generative AI performance on SageMaker. Besides optimization, SageMaker inference also provides streaming support for LLMs, enabling you to stream tokens in real time rather than waiting for the entire response. This allows for lower perceived latency and more responsive generative AI experiences, which are crucial for use cases like conversational AI assistants. Lastly, SageMaker inference provides the ability to deploy a single model or multiple models using SageMaker inference components on the same endpoint using advanced routing strategies to effectively load balance to the underlying instances backing an endpoint.

Faster auto scaling metrics

To optimize real-time inference workloads, SageMaker employs Application Auto Scaling. This feature dynamically adjusts the number of instances in use and the quantity of model copies deployed, responding to real-time changes in demand. When in-flight requests surpass a predefined threshold, auto scaling increases the available instances and deploys additional model copies to meet the heightened demand. Similarly, as the number of in-flight requests decreases, the system automatically removes unnecessary instances and model copies, effectively reducing costs. This adaptive scaling makes sure resources are optimally utilized, balancing performance needs with cost considerations in real time.

With today’s launch, SageMaker real-time endpoints now emit two new sub-minute Amazon CloudWatch metrics: ConcurrentRequestsPerModel and ConcurrentRequestsPerCopy. ConcurrentRequestsPerModel is the metric used for SageMaker real-time endpoints; ConcurrentRequestsPerCopy is used when SageMaker real-time inference components are used.

These metrics provide a more direct and accurate representation of the load on the system by tracking the actual concurrency or the number of simultaneous requests being handled by the containers (in-flight requests), including the requests queued inside the containers. The concurrency-based target tracking and step scaling policies focus on monitoring these new metrics. When the concurrency levels increase, the auto scaling mechanism can respond by scaling out the deployment, adding more container copies or instances to handle the increased workload. By taking advantage of these high-resolution metrics, you can now achieve significantly faster auto scaling, reducing detection time and improving the overall scale-out time of generative AI models. You can use these new metrics for endpoints created with accelerator instances like AWS Trainium, AWS Inferentia, and NVIDIA GPUs.
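
To see the new metric for an endpoint, you can query it directly from CloudWatch. The following is a minimal boto3 sketch; it assumes the metric is published in the AWS/SageMaker namespace with EndpointName and VariantName dimensions (like the other endpoint invocation metrics), and the endpoint name is a hypothetical placeholder. Verify the exact namespace and dimensions for your endpoint in the CloudWatch console.

    import boto3
    from datetime import datetime, timedelta, timezone

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    # Pull the last 15 minutes of the high-resolution concurrency metric.
    response = cloudwatch.get_metric_statistics(
        Namespace="AWS/SageMaker",  # assumed namespace
        MetricName="ConcurrentRequestsPerModel",
        Dimensions=[
            {"Name": "EndpointName", "Value": "my-llm-endpoint"},  # hypothetical endpoint name
            {"Name": "VariantName", "Value": "AllTraffic"},
        ],
        StartTime=datetime.now(timezone.utc) - timedelta(minutes=15),
        EndTime=datetime.now(timezone.utc),
        Period=10,  # the metric is emitted at 10-second granularity
        Statistics=["Maximum"],
    )

    for datapoint in sorted(response["Datapoints"], key=lambda d: d["Timestamp"]):
        print(datapoint["Timestamp"], datapoint["Maximum"])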

In addition, you can enable streaming responses back to the client on models deployed on SageMaker. Many current solutions track a session or concurrency metric only until the first token is sent to the client and then mark the target instance as available. SageMaker instead tracks a request until the last token is streamed to the client. This way, clients can be directed to instances or GPUs that are less busy, avoiding hotspots. Additionally, tracking concurrency helps make sure in-flight and queued requests are treated alike when alerting on the need for auto scaling. With this capability, you can make sure your model deployment scales proactively, accommodating fluctuations in request volumes and maintaining optimal performance by minimizing queuing delays.

In this post, we detail how the new ConcurrentRequestsPerModel and ConcurrentRequestsPerCopy CloudWatch metrics work, explain why you should use them, and walk you through the process of implementing them for your workloads. These new metrics allow you to scale your LLM deployments more effectively, providing optimal performance and cost-efficiency as the demand for your models fluctuates.

Components of auto scaling

The following figure illustrates a typical scenario of how a SageMaker real-time inference endpoint scales out to handle an increase in concurrent requests. This demonstrates the automated and responsive nature of scaling in SageMaker. In this example, we walk through the key steps that occur when the inference traffic to a SageMaker real-time endpoint starts to increase and concurrency to the model deployed on every instance goes up. We show how the system monitors the traffic, invokes an auto scaling action, provisions new instances, and ultimately load balances the requests across the scaled-out resources. Understanding this scaling process is crucial for making sure your generative AI models can handle fluctuations in demand and provide a seamless experience for your customers. By the end of this walkthrough, you’ll have a clear picture of how SageMaker real-time inference endpoints can automatically scale to meet your application’s needs.

Let’s dive into the details of this scaling scenario using the provided figure.

The key steps are as follows:

  1. Increased inference traffic (t0) – At some point, the traffic to the SageMaker real-time inference endpoint starts to increase, indicating a potential need for additional resources. The increase in traffic leads to a higher number of concurrent requests required for each model copy or instance.
  2. CloudWatch alarm monitoring (t0 → t1) – An auto scaling policy uses CloudWatch to monitor metrics, sampling them over a few data points within a predefined time frame. This makes sure the increased traffic is a sustained change in demand, not a temporary spike.
  3. Auto scaling trigger (t1) – If the metric crosses the predefined threshold, the CloudWatch alarm goes into an InAlarm state, invoking an auto scaling action to scale up the resources.
  4. New instance provisioning and container startup (t1 → t2) – During the scale-up action, new instances are provisioned if required. The model server and container are started on the new instances. When the instance provisioning is complete, the model container initialization process begins. After the server successfully starts and passes the health checks, the instances are registered with the endpoint, enabling them to serve incoming traffic requests.
  5. Load balancing (t2) – After the container health checks pass and the container reports as healthy, the new instances are ready to serve inference requests. All requests are now automatically load balanced between the two instances using the pre-built routing strategies in SageMaker.

This approach allows the SageMaker real-time inference endpoint to react quickly and handle the increased traffic with minimal impact to the clients.
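
If you want to observe this scale-out as it happens, you can poll the endpoint description during a scaling event. The following is a minimal sketch using the DescribeEndpoint API; the endpoint name is a hypothetical placeholder and the polling interval is arbitrary.

    import time
    import boto3

    sagemaker = boto3.client("sagemaker", region_name="us-east-1")

    # Poll the endpoint while a scaling event is in progress; the status moves to
    # Updating during the scale-out and back to InService when it completes.
    for _ in range(30):
        desc = sagemaker.describe_endpoint(EndpointName="my-llm-endpoint")  # hypothetical name
        variant = desc["ProductionVariants"][0]
        print(
            f"status={desc['EndpointStatus']} "
            f"current={variant.get('CurrentInstanceCount')} "
            f"desired={variant.get('DesiredInstanceCount')}"
        )
        time.sleep(30)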

Application Auto Scaling supports target tracking and step scaling policies. Each has its own logic to handle scale-in and scale-out:

  • Target tracking scales out by adding capacity to reduce the difference between the metric value (ConcurrentRequestsPerModel/Copy) and the target value you set. When the metric is below the target value, Application Auto Scaling scales in by removing capacity.
  • Step scaling scales capacity using a set of adjustments, known as step adjustments. The size of the adjustment varies based on the magnitude by which the metric value (ConcurrentRequestsPerModel/Copy) breaches the alarm threshold.

By using these new metrics, auto scaling can now be invoked and scale out significantly faster compared to the older SageMakerVariantInvocationsPerInstance predefined metric type. This decrease in the time to measure and invoke a scale-out allows you to react to increased demand significantly faster than before (under 1 minute). This works especially well for generative AI models, which are typically concurrency-bound and can take many seconds to complete each inference request.

Using the new high-resolution metrics allows you to greatly decrease the time it takes to scale up an endpoint using Application Auto Scaling. These high-resolution metrics are emitted at 10-second intervals, allowing scale-out procedures to be invoked faster. For models with fewer than 10 billion parameters, this can be a significant percentage of the time it takes for an end-to-end scaling event. For larger model deployments, this can shorten the time before a new copy of your FM or LLM is ready to serve traffic by up to 5 minutes.

Get started with faster auto scaling

Getting started with the new metrics is straightforward. You can use the following steps to create a new scaling policy to benefit from faster auto scaling. In this example, we deploy a Meta Llama 3 model with 8 billion parameters on a G5 instance type, which uses NVIDIA A10G GPUs. The model fits entirely on a single GPU, and we use auto scaling to scale up the number of inference components and G5 instances based on our traffic. The full notebooks can be found on GitHub for SageMaker Single Model Endpoints and SageMaker with inference components.

  1. After you create your SageMaker endpoint, you define a new auto scaling target for Application Auto Scaling. In the following code block, you set as_min_capacity and as_max_capacity to the minimum and maximum number of instances you want to set for your endpoint, respectively. If you’re using inference components (shown later), you can use instance auto scaling and skip this step.
    autoscaling_client = boto3.client("application-autoscaling", region_name=region)
    
    # Register scalable target
    scalable_target = autoscaling_client.register_scalable_target(
        ServiceNamespace="sagemaker",
        ResourceId=resource_id,
        ScalableDimension="sagemaker:variant:DesiredInstanceCount",
        MinCapacity=as_min_capacity,
        MaxCapacity=as_max_capacity,  # Replace with your desired maximum instances
    )

  2. After you create your new scalable target, you can define your policy. You can choose between using a target tracking policy or step scaling policy. In the following target tracking policy, we have set TargetValue to 5. This means we’re asking auto scaling to scale up if the number of concurrent requests per model is equal to or greater than five.
    # Create Target Tracking Scaling Policy
    target_tracking_policy_response = autoscaling_client.put_scaling_policy(
        PolicyName="SageMakerEndpointScalingPolicy",
        ServiceNamespace="sagemaker",
        ResourceId=resource_id,
        ScalableDimension="sagemaker:variant:DesiredInstanceCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 5.0,  # Scaling triggers when endpoint receives 5 ConcurrentRequestsPerModel
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "SageMakerVariantConcurrentRequestsPerModelHighResolution"
            },
            "ScaleInCooldown": 180,  # Cooldown period after scale-in activity
            "ScaleOutCooldown": 180,  # Cooldown period after scale-out activity
        },
    )

If you would like to configure a step scaling policy, refer to the following notebook.
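
For reference, the following is a hedged sketch of what such a step scaling setup could look like; it is not the notebook's exact code. It reuses the region and resource_id variables from the preceding examples, introduces a hypothetical endpoint_name variable, and assumes the ConcurrentRequestsPerModel metric is published in the AWS/SageMaker namespace with EndpointName and VariantName dimensions. With step scaling, you create the CloudWatch alarm yourself and point its alarm action at the scaling policy.

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name=region)

    # Step scaling policy: add capacity in increments that grow with the alarm breach.
    step_policy_response = autoscaling_client.put_scaling_policy(
        PolicyName="SageMakerEndpointStepScalingPolicy",
        ServiceNamespace="sagemaker",
        ResourceId=resource_id,
        ScalableDimension="sagemaker:variant:DesiredInstanceCount",
        PolicyType="StepScaling",
        StepScalingPolicyConfiguration={
            "AdjustmentType": "ChangeInCapacity",
            "MetricAggregationType": "Maximum",
            "Cooldown": 180,
            "StepAdjustments": [
                # Breach of 0-10 above the alarm threshold -> add 1 instance
                {"MetricIntervalLowerBound": 0.0, "MetricIntervalUpperBound": 10.0, "ScalingAdjustment": 1},
                # Breach of more than 10 above the threshold -> add 2 instances
                {"MetricIntervalLowerBound": 10.0, "ScalingAdjustment": 2},
            ],
        },
    )

    # High-resolution CloudWatch alarm on the new concurrency metric triggers the policy.
    cloudwatch.put_metric_alarm(
        AlarmName="ConcurrentRequestsPerModel-ScaleOut",
        Namespace="AWS/SageMaker",  # assumed namespace
        MetricName="ConcurrentRequestsPerModel",
        Dimensions=[
            {"Name": "EndpointName", "Value": endpoint_name},  # hypothetical variable
            {"Name": "VariantName", "Value": "AllTraffic"},
        ],
        Statistic="Maximum",
        Period=10,
        EvaluationPeriods=3,
        Threshold=5.0,
        ComparisonOperator="GreaterThanOrEqualToThreshold",
        AlarmActions=[step_policy_response["PolicyARN"]],
    )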

That’s it! Traffic invoking your endpoint is now monitored, with concurrency tracked and evaluated against the policy you specified. Your endpoint will scale up and down based on the minimum and maximum values you provided. In the preceding example, we set the cooldown period for scaling in and out to 180 seconds, but you can change this based on what works best for your workload.
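
To generate some test traffic and watch the policy react, you can invoke the endpoint with the SageMaker runtime client. The following is a minimal sketch; the request payload is an assumption that depends on the model container you deployed, and endpoint_name is a hypothetical placeholder.

    import json
    import boto3

    smr_client = boto3.client("sagemaker-runtime", region_name=region)

    # Send a test request; adjust the payload to whatever your container expects.
    response = smr_client.invoke_endpoint(
        EndpointName=endpoint_name,  # hypothetical variable
        ContentType="application/json",
        Body=json.dumps({
            "inputs": "What is Amazon SageMaker?",
            "parameters": {"max_new_tokens": 128},
        }),
    )
    print(response["Body"].read().decode("utf-8"))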

SageMaker inference components

If you’re using inference components to deploy multiple generative AI models on a SageMaker endpoint, you can complete the following steps:

  1. After you create your SageMaker endpoint and inference components, you define a new auto scaling target for Application Auto Scaling:
    autoscaling_client = boto3.client("application-autoscaling", region_name=region)
    
    # Register scalable target
    scalable_target = autoscaling_client.register_scalable_target(
        ServiceNamespace="sagemaker",
        ResourceId=resource_id,
        ScalableDimension="sagemaker:inference-component:DesiredCopyCount",
        MinCapacity=as_min_capacity,
        MaxCapacity=as_max_capacity,  # Replace with your desired maximum instances
    )

  2. After you create your new scalable target, you can define your policy. In the following code, we set TargetValue to 5. By doing so, we’re asking auto scaling to scale up if the number of concurrent requests per model is equal to or greater than five.
    # Create Target Tracking Scaling Policy
    target_tracking_policy_response = autoscaling_client.put_scaling_policy(
        PolicyName="SageMakerInferenceComponentScalingPolicy",
        ServiceNamespace="sagemaker",
        ResourceId=resource_id,
        ScalableDimension="sagemaker:inference-component:DesiredCopyCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 5.0,  # Scaling triggers when endpoint receives 5 ConcurrentRequestsPerCopy
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "SageMakerInferenceComponentConcurrentRequestsPerCopyHighResolution"
            },
            "ScaleInCooldown": 180,  # Cooldown period after scale-in activity
            "ScaleOutCooldown": 180,  # Cooldown period after scale-out activity
        },
    )

You can use the new concurrency-based target tracking auto scaling policies in tandem with existing invocation-based target tracking policies. When a container experiences a crash or failure, the resulting requests are typically short-lived and may be responded to with error messages. In such scenarios, the concurrency-based auto scaling policy can detect the sudden drop in concurrent requests, potentially causing an unintentional scale-in of the container fleet. However, the invocation-based policy can act as a safeguard, avoiding the scale-in if there is still sufficient traffic being directed to the remaining containers. With this hybrid approach, container-based applications can achieve a more efficient and adaptive scaling behavior. The balance between concurrency-based and invocation-based policies allows the system to respond appropriately to various operational conditions, such as container failures, sudden spikes in traffic, or gradual changes in workload patterns. This enables the container infrastructure to scale up and down more effectively, optimizing resource utilization and providing reliable application performance.
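
As a sketch of this hybrid setup, the following registers an invocation-based target tracking policy on the endpoint variant scalable target from the earlier example, alongside the concurrency-based policy. The target value of 100 invocations per instance per minute is an illustrative assumption; tune it for your workload.

    # Invocation-based target tracking policy registered on the same scalable target
    invocation_policy_response = autoscaling_client.put_scaling_policy(
        PolicyName="SageMakerEndpointInvocationsScalingPolicy",
        ServiceNamespace="sagemaker",
        ResourceId=resource_id,
        ScalableDimension="sagemaker:variant:DesiredInstanceCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 100.0,  # illustrative target; invocations per instance per minute
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
            },
            "ScaleInCooldown": 180,
            "ScaleOutCooldown": 180,
        },
    )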

Sample runs and results

With the new metrics, we have observed improvements in the time required to invoke scale-out events. To test the effectiveness of this solution, we completed some sample runs with Meta Llama models (Llama 2 7B and Llama 3 8B). Prior to this feature, detecting the need for auto scaling could take over 6 minutes, but with this new feature, we were able to reduce that time to less than 45 seconds. For generative AI models such as Meta Llama 2 7B and Llama 3 8B, we have been able to reduce the overall end-to-end scale-out time by approximately 40%.

The following figures illustrate the results of sample runs for Meta Llama 3 8B.

The following figures illustrate the results of sample runs for Meta Llama 2 7B.

As a best practice, it’s important to optimize your container, model artifacts, and bootstrapping processes to be as efficient as possible. Doing so can help minimize deployment times and improve the responsiveness of AI services.

Conclusion

In this post, we detailed how the ConcurrentRequestsPerModel and ConcurrentRequestsPerCopy metrics work, explained why you should use them, and walked you through the process of implementing them for your workloads. We encourage you to try out these new metrics and evaluate whether they improve your FM and LLM workloads on SageMaker endpoints. You can find the notebooks on GitHub.

Special thanks to our partners from Application Auto Scaling for making this launch happen: Ankur Sethi, Vasanth Kumararajan, Jaysinh Parmar, Mona Zhao, Miranda Liu, Fatih Tekin, and Martin Wang.


About the Authors

James Park is a Solutions Architect at Amazon Web Services. He works with Amazon.com to design, build, and deploy technology solutions on AWS, and has a particular interest in AI and machine learning. In his spare time he enjoys seeking out new cultures, new experiences, and staying up to date with the latest technology trends. You can find him on LinkedIn.

Praveen Chamarthi is a Senior AI/ML Specialist with Amazon Web Services. He is passionate about AI/ML and all things AWS. He helps customers across the Americas scale, innovate, and operate ML workloads efficiently on AWS. In his spare time, Praveen loves to read and enjoys sci-fi movies.

Dr. Changsha Ma is an AI/ML Specialist at AWS. She is a technologist with a PhD in Computer Science, a master’s degree in Education Psychology, and years of experience in data science and independent consulting in AI/ML. She is passionate about researching methodological approaches for machine and human intelligence. Outside of work, she loves hiking, cooking, hunting food, and spending time with friends and families.

Saurabh Trikande is a Senior Product Manager for Amazon SageMaker Inference. He is passionate about working with customers and is motivated by the goal of democratizing machine learning. He focuses on core challenges related to deploying complex ML applications, multi-tenant ML models, cost optimizations, and making deployment of deep learning models more accessible. In his spare time, Saurabh enjoys hiking, learning about innovative technologies, following TechCrunch and spending time with his family.

Kunal Shah is a software development engineer at Amazon Web Services (AWS) with 7+ years of industry experience. His passion lies in deploying machine learning (ML) models for inference, and he is driven by a strong desire to learn and contribute to the development of AI-powered tools that can create real-world impact. Beyond his professional pursuits, he enjoys watching historical movies, traveling and adventure sports.

Marc Karp is an ML Architect with the Amazon SageMaker Service team. He focuses on helping customers design, deploy, and manage ML workloads at scale. In his spare time, he enjoys traveling and exploring new places.

Read More

Find answers accurately and quickly using Amazon Q Business with the SharePoint Online connector

Amazon Q Business is a fully managed, generative artificial intelligence (AI)-powered assistant that helps enterprises unlock the value of their data and knowledge. With Amazon Q, you can quickly find answers to questions, generate summaries and content, and complete tasks by using the information and expertise stored across your company’s various data sources and enterprise systems. At the core of this capability are native data source connectors that seamlessly integrate and index content from multiple repositories into a unified index. This enables the Amazon Q large language model (LLM) to provide accurate, well-written answers by drawing from the consolidated data and information. The data source connectors act as a bridge, synchronizing content from disparate systems like Salesforce, Jira, and SharePoint into a centralized index that powers the natural language understanding and generative abilities of Amazon Q.

To make this integration process as seamless as possible, Amazon Q Business offers multiple pre-built connectors to a wide range of data sources, including Atlassian Jira, Atlassian Confluence, Amazon Simple Storage Service (Amazon S3), Microsoft SharePoint, Salesforce, and many more. This allows you to create your generative AI solution with minimal configuration. For a full list of Amazon Q supported data source connectors, see Supported connectors.

One of the key integrations for Amazon Q is with Microsoft SharePoint Online. SharePoint is a widely used collaborative platform that allows organizations to manage and share content, knowledge, and applications to improve productivity and decision-making. By integrating Amazon Q with SharePoint, businesses can empower their employees to access information and insights from SharePoint more efficiently and effectively.

With the Amazon Q and SharePoint Online integration, business users can do the following:

  • Get instant answers – Users can ask natural language questions and Amazon Q will provide accurate, up-to-date answers by searching and synthesizing information from across the organization’s SharePoint sites and content.
  • Accelerate research and analysis – Instead of manually searching through SharePoint documents, users can use Amazon Q to quickly find relevant information, summaries, and insights to support their research and decision-making.
  • Streamline content creation – Amazon Q can assist in generating drafts, outlines, and even complete content pieces (such as reports, articles, or presentations) by drawing on the knowledge and data stored in SharePoint.
  • Automate workflows and tasks – Amazon Q can be configured to complete routine tasks and queries (such as generating status reports, answering FAQs, or requesting information) by interacting with the relevant SharePoint data and applications.
  • Enhance collaboration – By making SharePoint content more accessible and actionable through Amazon Q, the integration facilitates better knowledge sharing, problem-solving, and collaboration across the organization.

In this post, we guide you through the process of setting up the SharePoint Online connector in Amazon Q Business. This will enable your organization to use the power of generative AI to unlock the full value of your SharePoint investment and empower your workforce to work smarter and more efficiently.

Find accurate answers from content in Microsoft SharePoint using Amazon Q Business

After you integrate Amazon Q Business with Microsoft SharePoint, users can ask questions from the body of the document. For this post, we use a SharePoint Online site named HR Policies that has information about the travel policy, state disability insurance policy, payroll taxes, and paid family leave program for California stored in document libraries. Some of the questions you can ask Amazon Q Business might include the following:

  • Is there a leave plan in California for new parents?
  • Can I claim disability insurance during this time?
  • Before applying for leave, I want to submit my expense report. How can I do it?
  • Is there any limit on spending on a business trip?
  • How can I calculate UI and ETT?

Overview of the data source

SharePoint is a website-based collaboration system that is used as a secure place to store, organize, share, and access information from any device. SharePoint empowers teamwork with dynamic and productive team sites for every project team, department, and division.

SharePoint is available in two options: SharePoint Server and SharePoint Online. SharePoint Server is a locally hosted platform that your company owns and operates; you’re responsible for everything from the server architecture and Active Directory to file storage. SharePoint Server 2016, SharePoint Server 2019, and SharePoint Server Subscription Edition are the active SharePoint Server releases. SharePoint Online is a cloud-based service provided directly by Microsoft, which takes care of identity management, architecture, and site management. SharePoint Server and SharePoint Online contain pages, files, attachments, links, events, and comments that can be crawled by the Amazon Q SharePoint connectors for SharePoint Server and SharePoint Online.

SharePoint Online and SharePoint Server offer a site content space where site owners can view a list of all pages, libraries, and lists for their site. The site content space also provides access to add lists, pages, document libraries, and more.

HR Policies SharePoint Site document library folder structure

Pages are the contents stored on webpages; these are meant to display information to the end-user.

SharePoint Site Pages

A document library provides a secure place to store files where you and your coworkers can find them easily. You can work on them together and access them from any device at any time.

document library with files

A list is one of the data storage mechanisms within SharePoint. It provides the UI to view the items in a list. You can add, edit, and delete items or view individual items.

SharePoint List

Overview of the SharePoint Online connector for Amazon Q Business

To crawl and index contents from SharePoint Online, you can configure the Amazon Q Business SharePoint Online connector as a data source in your Amazon Q business application. When you connect Amazon Q Business to a data source and initiate the sync process, Amazon Q Business crawls and indexes documents from the data source into its index.
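
After the connector is configured, you can also start a sync programmatically. The following is a minimal boto3 sketch assuming the StartDataSourceSyncJob API of the qbusiness client; the application, index, and data source IDs are hypothetical placeholders.

    import boto3

    qbusiness = boto3.client("qbusiness", region_name="us-east-1")

    # Kick off a crawl-and-index run for the SharePoint Online data source.
    sync_job = qbusiness.start_data_source_sync_job(
        applicationId="your-application-id",
        indexId="your-index-id",
        dataSourceId="your-sharepoint-online-data-source-id",
    )
    print("Started sync job:", sync_job["executionId"])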

Let’s look at what is considered a document in the context of the Amazon Q Business SharePoint Online connector. A document is a collection of information that consists of a title, the content (or the body), metadata (data about the document), and access control list (ACL) information to make sure answers are provided from documents that the user has access to.

The following entities in SharePoint are crawled and indexed as documents along with their metadata and access control information:

  • Files
  • Events
  • Pages
  • Links
  • Attachments
  • Comments

Amazon Q Business crawls data source document attributes or metadata and maps them to fields in your Amazon Q index. Refer to Amazon Q Business SharePoint Online data source connector field mappings for more details.

Configure and prepare the Amazon Q connector

Before you index the content from Microsoft SharePoint Online, you first need to establish a secure connection between the Amazon Q Business connector for SharePoint Online and your SharePoint Online instance. To establish a secure connection, you need to authenticate with the data source.

The following are the supported authentication mechanisms for the SharePoint connector:

  • Basic Authentication
  • OAuth 2.0 with Resource Owner Password Credentials Flow
  • Azure AD App-Only (OAuth 2.0 Certificate)
  • SharePoint App-Only with Client Credentials Flow
  • OAuth 2.0 with Refresh Token Flow

Secure querying with ACL crawling, identity crawling, and user store

Secure querying is when a user runs a query and is returned answers from documents that the user has access to and not from documents that the user does not have access to. To enable users to do secure querying, Amazon Q Business honors ACLs of the documents. Amazon Q Business does this by first supporting the indexing of ACLs. Indexing documents with ACLs is crucial for maintaining data security, because documents without ACLs are considered public. At query time, the user’s credentials (email address) are passed along with the query so that answers from documents that are relevant to the query and which the user is authorized to access are displayed.

A document’s ACL contains information such as the user’s email address and the local groups or federated groups (if Microsoft SharePoint is integrated with an identity provider (IdP) such as Azure Active Directory/Entra ID) that have access to the document. The SharePoint online data source can be optionally connected to an IdP such as Okta or Microsoft Entra ID. In this case, the documents in SharePoint Online can have the federated group information.

When a user logs in to a web application to conduct a search, the user’s credentials (such as an email address) need to match what’s in the ACL of the document to return results from that document. The web application that the user uses to retrieve answers would be connected to an IdP or AWS IAM Identity Center. The user’s credentials from the IdP or IAM Identity Center are referred to here as the federated user credentials. The federated user credentials, such as the email address, are passed along with the query so that Amazon Q can return the answers from the documents that this user has access to. However, sometimes this user’s federated credentials may not be present in the SharePoint Online data source or the SharePoint document’s ACLs. Instead, the user’s local user alias, local groups that this local user alias is a part of, or the federated groups that the federated user is a part of are available in the document’s ACL. Therefore, there is a need to map the federated user credential to the local user alias, local groups, or federated groups in the document ACL.

To map this federated user’s email address to the local user aliases, local groups, or federated groups, certain Amazon Q Business connectors, including the SharePoint Online connector, provide an identity crawler to load the identity information (local user alias, local groups, federated groups, and their mappings, along with any other mappings to a federated user) from the connected data sources into a user store. At query time, Amazon Q Business retrieves the associated local user aliases, local groups, and any federated groups from the user store and uses that along with the query for securely retrieving passages from documents that the user has access to.

If you need to index documents without ACLs, you must make sure they’re explicitly marked as public in your data source.

Refer to How Amazon Q Business connector crawls SharePoint (Online) ACLs for more details.

Amazon Q indexes the documents with ACLs and sets the user’s email address or user principal name for the user, and the group name [site URL hash value | group name] for the local group, in the ACL. If the SharePoint Online data source is connected to an IdP such as Azure AD/Entra ID or Okta, the AD group name visible in the SharePoint site is set as the federated group ACL. The identity crawler loads these same principals, along with the available mappings, into the user store. Any additional mappings need to be set in the user store using the user store APIs.

Overview of solution

This post presents the steps to create a certificate and private key, configure Azure AD (either using the Azure AD console or a PowerShell script), and configure Amazon Q Business.

For this post, we use a SharePoint Online site named HR Policies that hosts policy documents in a Documents library and payroll tax documents in a Payroll Taxes library to walk you through the solution.

In one of the scenarios that we validate, a SharePoint user (Carlos Salazar) is part of the SharePoint site members group, and he has access only to policy documents in the Documents library.

SharePoint Document Library with HR Travel policy and other policy document

Carlos Salazar can receive responses for queries related to HR policies, as shown in the following example.

Amazon Q Business Web application with question and response on leave plan in California for new parents

However, for questions related to payroll tax, he did not receive any response.

Amazon Q Business Web application with question and response

Another SharePoint user (John Doe) is part of the SharePoint site owners group and has access to both the Documents and Payroll Taxes libraries.

document library with Payroll tax files

John Doe receives responses for queries related to payroll taxes, as shown in the following example.

Amazon Q Business Web application with question and response on "how can i calculate UI and ETT"

Prerequisites

You should meet the following prerequisites:

  • The user performing these steps should be a global administrator on Azure AD/Entra ID.
  • Configure Microsoft Entra ID and IAM Identity Center integration.
  • You need a Microsoft Windows instance to run PowerShell scripts and commands with PowerShell 7.4.1+. Details of the required PowerShell modules are described later in this post.
  • The user should have administrator permissions on the Windows instance.
  • Make sure that the user running these PowerShell commands has the right M365 license (for example, M365 E3).

Create the certificate and private key

In Azure AD, when configuring App-Only authentication, you typically use a certificate to request access. Anyone with the certificate’s private key can use the app and the permissions granted to the app. We create and configure a self-signed X.509 certificate that will be used to authenticate Amazon Q against Azure AD, while requesting the App-Only access token. The following steps walk you through the setup of this model.

For this post, we use Windows PowerShell to run a few PowerShell commands. You can use an existing Windows instance or spin up a Windows EC2 instance or Windows workstation to run the PowerShell commands.

You can use the following PowerShell script to create a self-signed certificate. You can also generate the self-signed certificate through the New-PnPAzureCertificate command.

  1. Run the following command:
.\Create-SelfSignedCertificate.ps1 -CommonName "<amazonqbusinessdemo>" -StartDate <StartDate in yyyy-mm-dd format> -EndDate <EndDate in yyyy-mm-dd format>

You will be asked to give a password to encrypt your private key, and both the .PFX file and the .CER file will be exported to the current folder (where you ran the PowerShell script from). Verify that you now have a .cer and .pfx file.

  2. Upload this .cer file to an S3 location that your Amazon Q IAM role has GetObject permissions for. If you let Amazon Q create this role for you in a later step, the correct permissions will be added for you.

Now you extract the private key contents from the .pfx file and save it for Amazon Q connector configuration. This .pfx file will be present in the folder where you have saved the certificate.

  3. Run the following command to extract the private key:
openssl pkcs12 -in [amazonqbusinessdemo.pfx] -nocerts -out [amazonqbusinessdemo.key]

You will be prompted for the import password. Enter the password that you used to protect your key pair when you created the .pfx file (client ID, in our case). You will be prompted again to provide a new password to protect the .key file that you are creating. Store the password to your key file in a secure place to avoid misuse. (When you enter a password, the window shows nothing if you’re using the Windows CMD window. Enter your password and press Enter.)

  4. Run the following command to decrypt the private key:
openssl rsa -in [amazonqbusinessdemo.key] -out [amazonqbusinessdemo-decrypted.key]
  5. Run the following command to extract the certificate:
openssl pkcs12 -in [amazonqbusinessdemo.pfx] -clcerts -nokeys -out [amazonqbusinessdemo.crt]

This decrypted key and certificate will be used by the connector for authentication purposes.

  6. Upload the X.509 certificate (ending with .crt) to an S3 bucket; a minimal upload sketch follows this list. This will be used when configuring the SharePoint Online connector for Amazon Q.
    1. Verify the contents of the file amazonqbusinessdemo-decrypted.key starts with the standard BEGIN PRIVATE KEY header.
    2. Copy and paste the contents of the amazonqbusinessdemo-decrypted.key for use later in our Amazon Q setup.
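
The following is a minimal boto3 sketch of that certificate upload; it assumes the bucket name and object key used in the later connector configuration example (s3://certBucket/azuread.crt), and the IAM role used by Amazon Q must be able to call GetObject on this location.

    import boto3

    s3 = boto3.client("s3")

    # Upload the certificate so the Amazon Q SharePoint connector can read it.
    s3.upload_file(
        Filename="amazonqbusinessdemo.crt",  # certificate created earlier
        Bucket="certBucket",                 # example bucket from the connector configuration
        Key="azuread.crt",
    )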

Configure Azure AD

You can configure Azure AD using either of the following methods:

  • Using the Azure AD console GUI. This is a manual step-by-step process.
  • Using the provided PowerShell script. This is an automated process that takes in the inputs and configures the required permissions.

Follow the steps for either option to complete the Azure AD configuration.

Configure Azure AD using the Azure AD console

To configure Azure AD using the GUI, you first register an Azure AD application in the Azure AD tenant that is linked to the SharePoint Online/O365 tenant. For more details, see Granting access via Azure AD App-Only.

  1. Open the Office 365 Admin Center using the account of a user member of the Tenant Global Admins group.
  2. Navigate to Microsoft Azure Portal.
  3. Search for and choose App registrations.

Azure Portal for App registration

  4. Choose New registration.

Azure Portal for App registration step

  5. Enter a name for your application, select who can use this application, and choose Register.

Azure Portal for App registration with name and account types field

An application will be created. You will see a page like the following screenshot.

  6. Note the application (client) ID and the directory (tenant) ID.

These IDs will be different than what is shown in the screenshot.

Azure App with client ID and Tenant ID

Now you can configure the newly registered application for SharePoint permissions.

  7. Choose API permissions in the navigation pane.
  8. Choose Add a permission to add the permissions to your application.

Azure App API Permission tab

  9. Choose SharePoint from the list of applications.

Azure App API Permission tab Request API Permission

  10. Configure permissions.

There are two different ways to configure SharePoint permissions.

To configure permissions to access multiple SharePoint site collections (using Azure AD App-Only permissions), select Sites.FullControl.All to allow full control permissions to all the SharePoint site collections and to read the ACLs from these site collections.

azure app registration request API permission tab

This permission requires admin consent in a tenant before it can be used. To do so, choose Grant admin consent for <organization name> and choose Yes to confirm.

azure app Grant Admin consent confirmation

Alternatively, to configure permissions to access specific SharePoint site collections, select Sites.Selected to allow access to a subset of site collections without a signed-in user. The specific site collections and the permissions granted will be configured in SharePoint Online.

Request API permission for Sites and Add permission

This permission requires admin consent in a tenant before it can be used. To do so, choose Grant admin consent for <organization name> and choose Yes to confirm.

Azure App Grand Admin consent confirmation page

Next, you grant Azure AD app permissions to one or more SharePoint site collections. Make sure the following prerequisites are in place:

  • You must have Windows Server/Workstation with PowerShell 7.4.1+.
  • The user running these PowerShell commands must have the right M365 license (for example, M365 E3).
  • Install the PowerShell modules using Install-Module -Name PnP.PowerShell -AllPreRelease.
  • If this is your first time running PowerShell commands, run the Connect-PnPOnline -Url <site collection url> -PnPManagementShell PowerShell command and complete the consent process to use PnP cmdlets. Alternatively, run the Register-PnPManagementShellAccess cmdlet, which grants access to the tenant for the PnP management shell multi-tenant Azure AD application.
  1. Open PowerShell and connect to SharePoint Online using the Connect-PnPOnline command:
Connect-PnPOnline -Url <sitecollectionUrl> -PnPManagementShell
  2. Add the Azure AD app to one or more specific site collection permissions using Grant-PnPAzureADAppSitePermission:
Grant-PnPAzureADAppSitePermission -AppId <app-id> -DisplayName <displayname> -Site [<sitecollectionurl>] -Permissions <FullControl> 

If you want to configure permissions to more than one SharePoint Online site collection, then you must repeat the preceding PowerShell commands for every collection.

Now you’re ready to connect the certificate.

  1. Choose Certificates & secrets in the navigation pane.
  2. On the Certificates tab, choose Upload certificate.

Azure App registration Certificate and Secrets page

  3. Choose the .cer file you generated earlier and choose Add to upload it.

Upload Certificate by Add option

This completes the configuration on the Azure AD side.

Configure Azure AD using the provided PowerShell script

The user running this PowerShell script should be an Azure AD tenant admin or have tenant admin permissions. Additionally, as a prerequisite, install the MS Graph PowerShell SDK.

Complete the following steps to run the PowerShell script:

  1. Run the PowerShell script and follow the instructions.

This script will do the following:

  • Register a new application in Azure AD/Entra ID
  • Configure the required SharePoint permissions
  • Provide admin consent for the permissions

The output from the PowerShell script will look like the following screenshot.

PowerShell Script for Certificate

  2. If you chose Sites.Selected as the permission to target a specific SharePoint site collection, continue with the steps to configure a specific SharePoint site collection as mentioned earlier.
  3. If you have more than one SharePoint site collection to be crawled, repeat the previous step to configure each collection.

Configure Amazon Q

Make sure you have set up Amazon Q Business with Entra ID as the IdP, as mentioned in the prerequisites. Also, make sure the email IDs are in lowercase letters when creating the users in Entra ID.

Follow the instructions in Connecting Amazon Q Business to SharePoint (Online) using the console.

For Step 9 (Authentication), we choose Azure AD App-Only authentication and configure it as follows:

  • For Tenant ID, enter the tenant ID of your SharePoint account. This will be the directory (tenant) ID in your registered Azure application, in the Azure Portal, as shown in the following screenshot (the IDs will be different for your setup).

Azure App for Application client id and Tenant ID

  • For Certificate path, enter the full S3 path to your certificate (for example, s3://certBucket/azuread.crt). This is the Azure AD self-signed X.509 certificate to authenticate the connector for Azure AD. This certificate was created earlier.
  • For AWS Secrets Manager secret, create a secret in AWS Secrets Manager to store your SharePoint authentication credentials:
    • For Secret name, enter a name for your secret.
    • For Client ID, enter the Azure AD client ID generated when you registered SharePoint in Azure AD. This is the application (client) ID created in the Azure Portal when registering the SharePoint application in Azure, as described earlier.
    • For Private key, enter the private key to authenticate the connector with Azure AD. This is the decrypted private key (for example, amazonqbusinessdemo-decrypted.key) that you extracted earlier from the .pfx file created when registering your Azure SharePoint application. Enter the decrypted key contents in their entirety. Choose Show private key to verify they match your decrypted key file.

Secret created in AWS Secret Manager

Continue with the rest of the steps in Connecting Amazon Q Business to SharePoint (Online) using the console.

Access the web experience on Amazon Q

To access the web experience, complete the following steps:

  1. On the Amazon Q Business console, choose Applications in the navigation pane.
  2. Choose the application you created.
  3. Choose the link under Web experience URL to browse Amazon Q.

Getting Web Application URL from Amazon Q Business Application page

  4. When prompted, authenticate with Entra ID/Azure AD.

After you’re authenticated, you can access Amazon Q. You can ask Amazon Q a question and get a response based on the permissions of the logged-in user.

References

The following is the PowerShell script referenced earlier for configuring Azure AD (app registration, SharePoint permissions, and admin consent):

param(
  [Parameter(Mandatory=$true,
  HelpMessage="The friendly name of the app registration")]
  [String]
  $AppName,

  [Parameter(Mandatory=$true,
  HelpMessage="The file path to your public key file")]
  [String]
  $CertPath,

  [Parameter(Mandatory=$false,
  HelpMessage="Your Azure Active Directory tenant ID")]
  [String]
  $TenantId,

  [Parameter(Mandatory=$false)]
  [Switch]
  $StayConnected = $false
)

# Display the options for permission
$validOptions = @('F', 'S')
Write-Host "Select the permissions: [F]-Sites.FullControl.All [S]-Sites.Selected"

# Loop to prompt the user until a valid option is selected
do {
    foreach ($option in $validOptions) {
        Write-Host "[$option]"
    }
    $selectedPermission = Read-Host "Enter your choice (F or S)"
} while ($selectedPermission -notin $validOptions)

# Map user input to corresponding permissions
$permissionMapping = @{
    'F' = '678536fe-1083-478a-9c59-b99265e6b0d3'
    'S' = '20d37865-089c-4dee-8c41-6967602d4ac8'
}

$selectedPermissionValue = $permissionMapping[$selectedPermission]

# Requires an admin
if ($TenantId)
{
  Connect-MgGraph -Scopes "Application.ReadWrite.All User.Read AppRoleAssignment.ReadWrite.All" -TenantId $TenantId
}
else
{
  Connect-MgGraph -Scopes "Application.ReadWrite.All User.Read AppRoleAssignment.ReadWrite.All"
}

# Graph permissions constants
$sharePointResourceId = "00000003-0000-0ff1-ce00-000000000000"
$SitePermission = @{
  Id=$selectedPermissionValue
  Type="Role"
}

# Get context for access to tenant ID
$context = Get-MgContext

# Load cert
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2($CertPath)
Write-Host -ForegroundColor Cyan "Certificate loaded"

# Create app registration
$appRegistration = New-MgApplication -DisplayName $AppName -SignInAudience "AzureADMyOrg" `
 -Web @{ RedirectUris="http://localhost"; } `
 -RequiredResourceAccess @{ ResourceAppId=$sharePointResourceId; ResourceAccess=@($SitePermission) } `
 -AdditionalProperties @{} -KeyCredentials @(@{ Type="AsymmetricX509Cert"; Usage="Verify"; Key=$cert.RawData })
Write-Host -ForegroundColor Cyan "App registration created with app ID" $appRegistration.AppId

# Create corresponding service principal
$servicePrincipal = New-MgServicePrincipal -AppId $appRegistration.AppId -AdditionalProperties @{}
Write-Host -ForegroundColor Cyan "Service principal created"
Write-Host
Write-Host -ForegroundColor Green "Success"
Write-Host

# Providing admin consent
$scp = Get-MgServicePrincipal -Filter "DisplayName eq '$($AppName)'" 
$app = Get-MgServicePrincipal -Filter "AppId eq '$sharePointResourceId'" 
New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId $scp.id -PrincipalId $scp.Id -ResourceId $app.Id -AppRoleId $selectedPermissionValue  

# Generate Connect-MgGraph command
$connectGraph = "Connect-MgGraph -ClientId """ + $appRegistration.AppId + """ -TenantId """`
 + $context.TenantId + """ -CertificateName """ + $cert.SubjectName.Name + """"
Write-Host $connectGraph

if ($StayConnected -eq $false)
{
  Disconnect-MgGraph
  Write-Host "Disconnected from Microsoft Graph"
}
else
{
  Write-Host
  Write-Host -ForegroundColor Yellow "The connection to Microsoft Graph is still active. To disconnect, use Disconnect-MgGraph"
}

  • You can test whether the Grant-PnPAzureADAppSitePermission cmdlet worked by connecting to the SharePoint site using the Azure AD app that has the Sites.Selected permission and running a few SharePoint API calls:
    1. Make a note of the certificate thumbprint as shown earlier.
    2. Install the certificate for the current user in the Windows Certificate Management Store.
    3. Run the following PowerShell cmdlet to connect to the SharePoint site collection using PnPOnline:
Connect-PnPOnline -Url "<SharePoint site collection url>" -ClientId "<client id>" -Thumbprint "<certificate thumbprint>" -Tenant "<tenant id>"
    4. Run Get-PnPList to list all the SharePoint lists in the site collection and confirm that the permissions are configured correctly:
Get-PnPList

Troubleshooting

For troubleshooting guidance, refer to Troubleshooting your SharePoint (Online) connector.

Clean up

Complete the following steps to clean up your resources:

  1. Open the Office 365 Admin Center using the account of a user member of the Tenant Global Admins group.
  2. Navigate to the Microsoft Azure Portal.
  3. Search for and choose App registrations.
  4. Select the app you created earlier, then choose Delete.
  5. On the Amazon Q Business console, choose Applications in the navigation pane.
  6. Select the application you created, and on the Actions menu, choose Delete.

Conclusion

In this post, we explored how Amazon Q Business can seamlessly integrate with SharePoint Online to help enterprises unlock the value of their data and knowledge. With the SharePoint Online connector, organizations can empower their employees to find answers quickly, accelerate research and analysis, streamline content creation, automate workflows, and enhance collaboration.

We walked you through the process of setting up the SharePoint Online connector, including configuring the necessary Azure AD integration and authentication mechanisms. With these foundations in place, you can start unlocking the full potential of your SharePoint investment and drive greater productivity, efficiency, and innovation across your business.

Now that you’ve learned how to integrate Amazon Q Business with your Microsoft SharePoint Online content, it’s time to unlock the full potential of your organization’s knowledge and data. To get started, sign up for an Amazon Q Business account and follow the steps in this post to set up the SharePoint Online connector. Then you can start asking Amazon Q natural language questions and watch as it surfaces the most relevant information from your company’s SharePoint sites and documents.

Don’t miss out on the transformative power of generative AI and the Amazon Q Business platform. Sign up today and experience the difference that Amazon Q can make for your organization’s SharePoint-powered knowledge and content management.


About the Authors

Vijai Gandikota is a Principal Product Manager on the Amazon Q and Amazon Kendra team of Amazon Web Services. He is responsible for the Amazon Q and Amazon Kendra connectors, ingestion, security, and other aspects of Amazon Q and Amazon Kendra.

Satveer Khurpa is a Senior Solutions Architect on the GenAI Labs team at Amazon Web Services. In this role, he uses his expertise in cloud-based architectures to develop innovative generative AI solutions for clients across diverse industries. Satveer’s deep understanding of generative AI technologies enables him to design scalable, secure, and responsible applications that unlock new business opportunities and drive tangible value.

Vijai Anand Ramalingam is a Senior Modernization Architect at Amazon Web Services, specialized in enabling and accelerating customers’ application modernization, transitioning from legacy monolith applications to microservices.

Ramesh Jatiya is a Senior Solutions Architect in the Independent Software Vendor (ISV) team at Amazon Web Services. He is passionate about working with ISV customers to design, deploy, and scale their applications in the cloud to derive business value. He is also pursuing an MBA in Machine Learning and Business Analytics from Babson College, Boston. Outside of work, he enjoys running, playing tennis, and cooking.

Neelam Rana is a Software Development Engineer on the Amazon Q and Amazon Kendra engineering team. She works on Amazon Q connector design, development, integration, and test operations.

Dipti Kulkarni is a Software Development Manager on the Amazon Q and Amazon Kendra engineering team of Amazon Web Services, where she manages the connector development and integration teams.

Read More

Evaluate conversational AI agents with Amazon Bedrock

As conversational artificial intelligence (AI) agents gain traction across industries, providing reliability and consistency is crucial for delivering seamless and trustworthy user experiences. However, the dynamic and conversational nature of these interactions makes traditional testing and evaluation methods challenging. Conversational AI agents also encompass multiple layers, from Retrieval Augmented Generation (RAG) to function-calling mechanisms that interact with external knowledge sources and tools. Although existing large language model (LLM) benchmarks like MT-bench evaluate model capabilities, they lack the ability to validate the application layers. The following are some common pain points in developing conversational AI agents:

  • Testing an agent is often tedious and repetitive, requiring a human in the loop to validate the semantic meaning of the responses from the agent, as shown in the following figure.
  • Setting up proper test cases and automating the evaluation process can be difficult due to the conversational and dynamic nature of agent interactions.
  • Debugging and tracing how conversational AI agents route to the appropriate action or retrieve the desired results can be complex, especially when integrating with external knowledge sources and tools.

Testing an agent is repetitive work

Agent Evaluation, an open source solution using LLMs on Amazon Bedrock, addresses this gap by enabling comprehensive evaluation and validation of conversational AI agents at scale.

Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon through a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI.

Agent Evaluation provides the following:

  • Built-in support for popular services, including Agents for Amazon Bedrock, Knowledge Bases for Amazon Bedrock, Amazon Q Business, and Amazon SageMaker endpoints
  • Orchestration of concurrent, multi-turn conversations with your agent while evaluating its responses
  • Configurable hooks to validate actions triggered by your agent
  • Integration into continuous integration and delivery (CI/CD) pipelines to automate agent testing
  • A generated test summary for performance insights including conversation history, test pass rate, and reasoning for pass/fail results
  • Detailed traces to enable step-by-step debugging of the agent interactions

In this post, we demonstrate how to streamline virtual agent testing at scale using Amazon Bedrock and Agent Evaluation.

Solution overview

To use Agent Evaluation, you need to create a test plan, which consists of three configurable components:

  • Target – A target represents the agent you want to test
  • Evaluator – An evaluator represents the workflow and logic to evaluate the target on a test
  • Test – A test defines the target’s functionality and how you want your end-user to interact with the target, which includes:
    • A series of steps representing the interactions between the agent and the end-user
    • Your expected results of the conversation

The following figure illustrates how Agent Evaluation works on a high level. The framework implements an LLM agent (evaluator) that will orchestrate conversations with your own agent (target) and evaluate the responses during the conversation.

How Agent Evaluation works on a high level

The following figure illustrates the evaluation workflow. It shows how the evaluator reasons and assesses responses based on the test plan. You can either provide an initial prompt or instruct the evaluator to generate one to initiate the conversation. At each turn, the evaluator engages the target agent and evaluates its response. This process continues until the expected results are observed or the maximum number of conversation turns is reached.

Agent Evaluation evaluator workflow

By understanding this workflow logic, you can create a test plan to thoroughly assess your agent’s capabilities.

Use case overview

To illustrate how Agent Evaluation can accelerate the development and deployment of conversational AI agents at scale, let’s explore an example scenario: developing an insurance claim processing agent using Agents for Amazon Bedrock. This insurance claim processing agent is expected to handle various tasks, such as creating new claims, sending reminders for pending documents related to open claims, gathering evidence for claims, and searching for relevant information across existing claims and customer knowledge repositories.

For this use case, the goal is to test the agent’s capability to accurately search and retrieve relevant information from existing claims. You want to make sure the agent provides correct and reliable information about existing claims to end-users. Thoroughly evaluating this functionality is crucial before deployment.

Begin by creating and testing the agent in your development account. During this phase, you interact manually with the conversational AI agent using sample prompts to do the following:

  • Engage the agent in multi-turn conversations on the Amazon Bedrock console
  • Validate the responses from the agent
  • Validate all the actions invoked by the agent
  • Debug and check traces for any routing failures

With Agent Evaluation, the developer can streamline this process through the following steps:

  1. Configure a test plan:
    1. Choose an evaluator from the models provided by Amazon Bedrock.
    2. Configure the target, which should be a type that Agent Evaluation supports. For this post, we use an Amazon Bedrock agent.
    3. Define the test steps and expected results. In the following example test plan, you have a claim with the ID claim-006 in your test system. You want to confirm that your agent can accurately answer questions about this specific claim.
    evaluator:
      type: bedrock-claude
      model: claude-haiku
    target:
      type: bedrock-agent
      bedrock_agent_alias_id: xxxxxxx
      bedrock_agent_id: xxxxxxx
    tests:
      - name: GetOpenClaimsWithDetails
        steps:
          - Ask the agent which claims are open.
          - Ask the agent for details on claim-006.
        expected_results:
          - The agent returns a list of open claims.
          - The agent returns the details on claim-006.
    

  2. Run the test plan from the command line:
    agenteval run

    The Agent Evaluation test runner will automatically orchestrate the test based on the test plan, and use the evaluator to determine if the responses from the target match the expected results.

  3. View the result summary.
    A result summary will be provided in markdown format. In the following example, the summary indicates that the test failed because the agent was unable to provide accurate information about the existing claim claim-006.

    Agent Evaluation test summary
  4. Debug with the trace files of the failed tests.
    Agent Evaluation provides detailed trace files for the tests. Each trace file meticulously records every prompt and interaction between the target and the evaluator. For instance, in the _invoke_target step, you can gain valuable insights into the rationale behind the Amazon Bedrock agent’s responses, allowing you to delve deeper into the decision-making process:

    {
        "timestamp": "2024-04-17 12:52:11.231906+00:00",
        "step_name": "_invoke_target",
        "preProcessingTrace": {
            "modelInvocationOutput": {
                "parsedResponse": {
                    "isValid": false,
                    "rationale": "Let's analyze the user's input step-by-step: 1. The user is asking for details on a specific claim with ID 'claim-006'. 2. This does not seem malicious or harmful in any way, so we can rule out Category A. 3. The user is not trying to get information about the agent's functions or manipulate its behavior. So we can rule out Category B. 4. We don't have enough information yet to determine if the agent can actually answer this question. We need to check the conversation history and available functions first. 5. Looking at the conversation history, the agent has access to a function called 'get: :send-reminder: :getAllOpenClaims' that returns a list of open claim IDs. 6. However, 'claim-006' is not in the list of open claim IDs returned. So the agent does not have direct access to details on this specific claim. 7. Therefore, this question falls into Category C - the agent will be unable to answer or assist with this question using only the functions it has access to. Given this analysis, I categorize this input as: <category>C</category>"
                }
            }
        }
    }

    The trace shows that after reviewing the conversation history, the evaluator concludes, “the agent will be unable to answer or assist with this question using only the functions it has access to.” Consequently, it ends the conversation with the target agent and proceeds to generate the test status.

    In the _generate_test_status step, the evaluator generates the test status with reasoning based on the responses from the target.

    { 
        "timestamp": "2024-04-17 12:52:12.976985+00:00", 
        "step_name": "_generate_test_status", 
        "system_prompt": "You are a quality assurance engineer evaluating a conversation between an USER and an AGENT. You will be given an ordered list of steps wrapped in <steps> tags. Each step represents a task that the USER wants to perform when interacting with the AGENT. Your job is analyze the running conversation in <conversation> tags and classify it into the following categories: - A: The USER has attempted all the steps. - B: The USER has not yet attempted all the steps. Please think hard about the response in <thinking> tags before providing only the category letter within <category> tags.", 
        "prompt": "Here are the steps and conversation: <steps> 1. Ask the agent which claims are open. 2. Ask the agent for details on claim-006. <steps> <conversation> USER: Which claims are currently open? AGENT: The open claims are: 2s34w-8x, 5t16u-7v, 3b45c-9d USER: Can you please provide me with the details on claim-006? AGENT: Sorry, I don't have enough information to answer that. </conversation>", 
        "test_status": "B", 
        "reasoning": "The user has attempted the first step of asking which claims are open, and the agent has provided a list of open claims. However, the user has not yet attempted the second step of asking for details on claim-006, as the agent has indicated that they do not have enough information to provide those details." 
    }

    The test plan defines the expected result as the target agent accurately providing details about the existing claim claim-006. However, after testing, the target agent’s response doesn’t meet the expected result, and the test fails.

  5. In this example, it’s evident that the target agent lacks access to the claim claim-006. From there, you can continue investigating and verify whether claim-006 exists in your test system. After identifying and addressing the issue, rerun the test to validate the fix.

Integrate Agent Evaluation with CI/CD pipelines

After validating the functionality in the development account, you can commit the code to the repository and initiate the deployment process for the conversational AI agent to the next stage. Seamless integration with CI/CD pipelines is a crucial aspect of Agent Evaluation, enabling comprehensive integration testing to make sure no regressions are introduced during new feature development or updates. This rigorous testing approach is vital for maintaining the reliability and consistency of conversational AI agents as they progress through the software delivery lifecycle.

By incorporating Agent Evaluation into CI/CD workflows, organizations can automate the testing process, making sure every code change or update undergoes thorough evaluation before deployment. This proactive measure minimizes the risk of introducing bugs or inconsistencies that could compromise the conversational AI agent’s performance and the overall user experience.

A standard agent CI/CD pipeline includes the following steps:

  1. The source repository stores the agent configuration, including agent instructions, system prompts, model configuration, and so on. You should always commit your changes to maintain quality and reproducibility.
  2. When you commit your changes, a build step is invoked. This is where unit tests should run and validate the changes, including typo and syntax checks.
  3. When the changes are deployed to the staging environment, Agent Evaluation runs with a series of test cases for runtime validation.
  4. The runtime validation on the staging environment can help build confidence to deploy the fully tested agent to production.

The following figure illustrates this pipeline.

Conversational AI agent CICD pipeline

In the following sections, we provide step-by-step instructions to set up Agent Evaluation with GitHub Actions.

Prerequisites

Complete the following prerequisite steps:

  1. Follow the GitHub user guide to get started with GitHub.
  2. Follow the GitHub Actions user guide to understand GitHub workflows and Actions.
  3. Follow the insurance claim processing agent using Agents for Amazon Bedrock example to set up an agent.

Set up GitHub Actions

Complete the following steps to deploy the solution:

  1. Write a series of test cases following the agent-evaluation test plan syntax and store test plans in the GitHub repository. For example, a test plan to test an Amazon Bedrock agent target is written as follows, with BEDROCK_AGENT_ALIAS_ID and BEDROCK_AGENT_ID as placeholders:
    evaluator:
      model: claude-3
    target:
      bedrock_agent_alias_id: BEDROCK_AGENT_ALIAS_ID
      bedrock_agent_id: BEDROCK_AGENT_ID
      type: bedrock-agent
    tests:
      InsuranceClaimQuestions:
        ...

  2. Create an AWS Identity and Access Management (IAM) user with the proper permissions:
    1. The principal must have InvokeModel permission for the model specified in the configuration.
    2. The principal must have the permissions to call the target agent. Depending on the target type, different permissions are required. Refer to the agent-evaluation target documentation for details; a minimal example policy sketch is provided after these setup steps.
  3. Store the IAM credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) in GitHub Actions secrets.
  4. Configure a GitHub workflow as follows:
    name: Update Agents for Bedrock
    
    on:
      push:
        branches: [ "main" ]
    
    env:
      AWS_REGION: <Deployed AWS region>                   
      
    
    permissions:
      contents: read
    
    jobs:
      build:
        runs-on: ubuntu-latest
    
        steps:
        - name: Checkout
          uses: actions/checkout@v4
    
        - name: Configure AWS credentials
          uses: aws-actions/configure-aws-credentials@v4
          with:
            aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
            aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
            aws-region: ${{ env.AWS_REGION }}
    
        - name: Install agent-evaluation
          run: |
            pip install agent-evaluation
            agenteval --help
    
        - name: Test Bedrock Agent
          id: test-bedrock-agent
          env:
            BEDROCK_AGENT_ALIAS_ID: ${{ vars.BEDROCK_AGENT_ALIAS_ID }}
            BEDROCK_AGENT_ID: ${{ vars.BEDROCK_AGENT_ID }}
          run: |
            sed -e "s/BEDROCK_AGENT_ALIAS_ID/$BEDROCK_AGENT_ALIAS_ID/g" -e "s/BEDROCK_AGENT_ID/$BEDROCK_AGENT_ID/g" test_plans/agenteval.yml > agenteval.yml
            agenteval run
    
        - name: Test Summary
          if: always()
          id: test-summary
          run: |
            cat agenteval_summary.md >> $GITHUB_STEP_SUMMARY
    

    When you push new changes to the repository, the GitHub Actions workflow is invoked, and an example output is displayed, as shown in the following screenshot.

    GitHub Action Agent Evaluation test output

    A test summary like the following screenshot will be posted to the GitHub workflow page with details on which tests have failed.

    GitHub Action Agent Evaluation test summary

    The summary also provides the reasons for the test failures.

    GitHub Action Agent Evaluation test details
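
For reference, the following is a minimal sketch of an IAM policy for the workflow principal when the target is an Amazon Bedrock agent. The Region, account ID, model ID, and agent identifiers are placeholders, and additional permissions may be required depending on your target type.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "bedrock:InvokeModel",
            "Resource": "arn:aws:bedrock:<region>::foundation-model/<evaluator-model-id>"
        },
        {
            "Effect": "Allow",
            "Action": "bedrock:InvokeAgent",
            "Resource": "arn:aws:bedrock:<region>:<account-id>:agent-alias/<agent-id>/<agent-alias-id>"
        }
    ]
}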

Clean up

Complete the following steps to clean up your resources:

  1. Delete the IAM user you created for the GitHub Action.
  2. Follow the insurance claim processing agent using Agents for Amazon Bedrock example to delete the agent.

Evaluator considerations

By default, evaluators use the InvokeModel API with On-Demand mode, which will incur AWS charges based on input tokens processed and output tokens generated. For the latest pricing details for Amazon Bedrock, refer to Amazon Bedrock pricing.

The cost of running an evaluator for a single test is influenced by the following:

  • The number and length of the steps
  • The number and length of expected results
  • The length of the target agent’s responses

You can view the total number of input tokens processed and output tokens generated by the evaluator using the --verbose flag when you perform a run (agenteval run --verbose).

Conclusion

This post introduced Agent Evaluation, an open source solution that enables developers to seamlessly integrate agent evaluation into their existing CI/CD workflows. By taking advantage of the capabilities of LLMs on Amazon Bedrock, Agent Evaluation enables you to comprehensively evaluate and debug your agents, achieving reliable and consistent performance. With its user-friendly test plan configuration, Agent Evaluation simplifies the process of defining and orchestrating tests, allowing you to focus on refining your agents’ capabilities. The solution’s built-in support for popular services makes it a versatile tool for testing a wide range of conversational AI agents. Moreover, Agent Evaluation’s seamless integration with CI/CD pipelines empowers teams to automate the testing process, making sure every code change or update undergoes rigorous evaluation before deployment. This proactive approach minimizes the risk of introducing bugs or inconsistencies, ultimately enhancing the overall user experience.

The following are some recommendations to consider:

  • Don’t use the same model to evaluate the results that you use to power the agent. Doing so may introduce biases and lead to inaccurate evaluations.
  • Block your pipelines on accuracy failures. Implement strict quality gates to help prevent deploying agents that fail to meet the expected accuracy or performance thresholds.
  • Continuously expand and refine your test plans. As your agents evolve, regularly update your test plans to cover new scenarios and edge cases, and provide comprehensive coverage.
  • Use Agent Evaluation’s logging and tracing capabilities to gain insights into your agents’ decision-making processes, facilitating debugging and performance optimization.

Agent Evaluation unlocks a new level of confidence in your conversational AI agents’ performance by streamlining your development workflows, accelerating time-to-market, and delivering exceptional user experiences. To further explore best practices for building and testing conversational AI agents at scale, get started by trying Agent Evaluation and provide your feedback.


About the Authors

Sharon Li is an AI/ML Specialist Solutions Architect at Amazon Web Services (AWS) based in Boston, Massachusetts. With a passion for leveraging cutting-edge technology, Sharon is at the forefront of developing and deploying innovative generative AI solutions on the AWS cloud platform.

Bobby Lindsey is a Machine Learning Specialist at Amazon Web Services. He’s been in technology for over a decade, spanning various technologies and multiple roles. He is currently focused on combining his background in software engineering, DevOps, and machine learning to help customers deliver machine learning workflows at scale. In his spare time, he enjoys reading, research, hiking, biking, and trail running.

Tony Chen is a Machine Learning Solutions Architect at Amazon Web Services, helping customers design scalable and robust machine learning capabilities in the cloud. As a former data scientist and data engineer, he leverages his experience to help tackle some of the most challenging problems organizations face with operationalizing machine learning.

Suyin Wang is an AI/ML Specialist Solutions Architect at AWS. She has an interdisciplinary education background in Machine Learning, Financial Information Service and Economics, along with years of experience in building Data Science and Machine Learning applications that solved real-world business problems. She enjoys helping customers identify the right business questions and building the right AI/ML solutions. In her spare time, she loves singing and cooking.

Curt Lockhart is an AI/ML Specialist Solutions Architect at AWS. He comes from a non-traditional background of working in the arts before his move to tech, and enjoys making machine learning approachable for each customer. Based in Seattle, you can find him venturing to local art museums, catching a concert, and wandering throughout the cities and outdoors of the Pacific Northwest.

Read More

Node problem detection and recovery for AWS Neuron nodes within Amazon EKS clusters

Node problem detection and recovery for AWS Neuron nodes within Amazon EKS clusters

Implementing hardware resiliency in your training infrastructure is crucial to mitigating risks and enabling uninterrupted model training. By implementing features such as proactive health monitoring and automated recovery mechanisms, organizations can create a fault-tolerant environment capable of handling hardware failures or other issues without compromising the integrity of the training process.

In this post, we introduce the AWS Neuron node problem detector and recovery DaemonSet for AWS Trainium and AWS Inferentia on Amazon Elastic Kubernetes Service (Amazon EKS). This component can quickly detect rare occurrences of issues when Neuron devices fail by tailing monitoring logs. It marks the worker nodes with defective Neuron devices as unhealthy and promptly replaces them with new worker nodes. By accelerating the speed of issue detection and remediation, it increases the reliability of your ML training and reduces the wasted time and cost due to hardware failure.

This solution is applicable if you’re using managed nodes or self-managed node groups (which use Amazon EC2 Auto Scaling groups) on Amazon EKS. At the time of writing this post, automatic recovery of nodes provisioned by Karpenter is not yet supported.

Solution overview

The solution is based on the node problem detector and recovery DaemonSet, a powerful tool designed to automatically detect and report various node-level problems in a Kubernetes cluster.

The node problem detector component will continuously monitor the kernel message (kmsg) logs on the worker nodes. If it detects error messages specifically related to the Neuron device (which is the Trainium or AWS Inferentia chip), it will change NodeCondition to NeuronHasError on the Kubernetes API server.

The node recovery agent is a separate component that periodically checks the Prometheus metrics exposed by the node problem detector. When it finds a node condition indicating an issue with the Neuron device, it will take automated actions. First, it will mark the affected instance in the relevant Auto Scaling group as unhealthy, which will invoke the Auto Scaling group to stop the instance and launch a replacement. Additionally, the node recovery agent will publish Amazon CloudWatch metrics for users to monitor and alert on these events.
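
For reference, the following is a minimal sketch of the equivalent manual action that the recovery agent automates; the instance ID is a placeholder, and the command assumes the instance belongs to an Auto Scaling group.

# Manually mark an instance as unhealthy so its Auto Scaling group replaces it
# (this is the action the recovery agent performs automatically)
aws autoscaling set-instance-health \
  --instance-id i-0123456789abcdef0 \
  --health-status Unhealthy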

The following diagram illustrates the solution architecture and workflow.

In the following walkthrough, we create an EKS cluster with Trn1 worker nodes, deploy the Neuron plugin for the node problem detector, and inject an error message into the node. We then observe the failing node being stopped and replaced with a new one, and find a metric in CloudWatch indicating the error.

Prerequisites

Before you start, make sure you have installed the following tools on your machine (they are used in the walkthrough later in this post):

  • Git and the AWS Command Line Interface (AWS CLI)
  • kubectl and eksctl
  • Terraform (invoked by the data on EKS install script)
  • The Session Manager plugin for the AWS CLI, used to connect to the worker nodes

Deploy the node problem detection and recovery plugin

Complete the following steps to configure the node problem detection and recovery plugin:

  1. Create an EKS cluster using the Data on EKS Terraform module:
    git clone https://github.com/awslabs/data-on-eks.git
    
    export TF_VAR_region=us-east-2
    export TF_VAR_trn1_32xl_desired_size=4
    export TF_VAR_trn1_32xl_min_size=4
    cd data-on-eks/ai-ml/trainium-inferentia/ && chmod +x install.sh
    ./install.sh
    
    aws eks --region us-east-2 describe-cluster --name trainium-inferentia
    
    # Creates k8s config file to authenticate with EKS
    aws eks --region us-east-2 update-kubeconfig --name trainium-inferentia
    
    kubectl get nodes
    NAME                                           STATUS   ROLES    AGE   VERSION
    ip-100-64-161-213.us-east-2.compute.internal   Ready    <none>   31d   v1.29.0-eks-5e0fdde
    ip-100-64-227-31.us-east-2.compute.internal    Ready    <none>   31d   v1.29.0-eks-5e0fdde
    ip-100-64-70-179.us-east-2.compute.internal    Ready    <none>   31d   v1.29.0-eks-5e0fdde

  2. Install the required AWS Identity and Access Management (IAM) role for the service account and the node problem detector plugin.
  3. Create a policy as shown below. Update the Resource key value to match the ARN of the Auto Scaling group that contains the Trainium and AWS Inferentia nodes, and update the ec2:ResourceTag/aws:autoscaling:groupName key value to match the Auto Scaling group name.

You can get these values from the Amazon EKS console. Choose Clusters in the navigation pane, open the trainium-inferentia cluster, choose Node groups, and locate your node group.

# Create npd-policy-trimmed.json from the policy template, replacing the placeholder values
cat << EOF > npd-policy-trimmed.json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "autoscaling:SetInstanceHealth",
                "autoscaling:DescribeAutoScalingInstances"
            ],
            "Effect": "Allow",
            "Resource": <arn of the Auto Scaling group corresponding to the Neuron nodes for the cluster>
        },
        {
            "Action": [
                "ec2:DescribeInstances"
            ],
            "Effect": "Allow",
            "Resource": "*",
            "Condition": {
                "ForAllValues:StringEquals": {
                    "ec2:ResourceTag/aws:autoscaling:groupName": <name of the Auto Scaling group corresponding to the Neuron nodes for the cluster>
                }
            }
        },
        {
            "Action": [
                "cloudwatch:PutMetricData"
            ],
            "Effect": "Allow",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "cloudwatch:Namespace": "NeuronHealthCheck"
                }
            }
        }
    ]
}
EOF

This component will be installed as a DaemonSet in your EKS cluster.

# To create the policy, aws cli can be used as shown below where npd-policy-trimmed.json is the policy json constructed from the template above.

aws iam create-policy \
  --policy-name NeuronProblemDetectorPolicy \
  --policy-document file://npd-policy-trimmed.json

# Note the ARN

CLUSTER_NAME=trainium-inferentia # Your EKS Cluster Name 
AWS_REGION=us-east-2
ACCOUNT_ID=$(aws sts get-caller-identity --query 'Account' --output text)
POLICY_ARN=arn:aws:iam::$ACCOUNT_ID:policy/NeuronProblemDetectorPolicy

eksctl create addon --cluster $CLUSTER_NAME --name eks-pod-identity-agent \
  --region $AWS_REGION

eksctl create podidentityassociation \
  --cluster $CLUSTER_NAME \
  --namespace neuron-healthcheck-system \
  --service-account-name node-problem-detector \
  --permission-policy-arns="$POLICY_ARN" \
  --region $AWS_REGION
    
# Install the Neuron NPD and recovery plugin 

kubectl create ns neuron-healthcheck-system
curl https://raw.githubusercontent.com/aws-neuron/aws-neuron-sdk/215b421ac448d85f89be056e27e29842a6b03c9c/src/k8/neuron-problem-detector/k8s-neuron-problem-detector-and-recovery.yml | kubectl apply -f - 
curl https://raw.githubusercontent.com/aws-neuron/aws-neuron-sdk/215b421ac448d85f89be056e27e29842a6b03c9c/src/k8/neuron-problem-detector/k8s-neuron-problem-detector-and-recovery-rbac.yml | kubectl apply -f - 
curl https://raw.githubusercontent.com/aws-neuron/aws-neuron-sdk/215b421ac448d85f89be056e27e29842a6b03c9c/src/k8/neuron-problem-detector/k8s-neuron-problem-detector-and-recovery-config.yml | kubectl apply -f -

# Expected result (with 4 Neuron nodes in cluster):

kubectl get pod -n neuron-healthcheck-system
NAME READY STATUS RESTARTS AGE
node-problem-detector-49p6w 2/2 Running 0 31s
node-problem-detector-j7wct 2/2 Running 0 31s
node-problem-detector-qr6jm 2/2 Running 0 31s
node-problem-detector-vwq8x 2/2 Running 0 31s

The container images in the Kubernetes manifests are stored in public repositories such as registry.k8s.io and public.ecr.aws. For production environments, it’s recommended that you limit these external dependencies by hosting the container images in a private registry and syncing the images from the public repositories. For detailed implementation, refer to the blog post Announcing pull through cache for registry.k8s.io in Amazon Elastic Container Registry.
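
As a minimal sketch, the following AWS CLI command shows how you might create a pull through cache rule for registry.k8s.io in your private Amazon ECR registry; the repository prefix is an arbitrary choice for illustration.

# Create a pull through cache rule so images from registry.k8s.io are cached in private ECR
aws ecr create-pull-through-cache-rule \
  --ecr-repository-prefix registry-k8s-io \
  --upstream-registry-url registry.k8s.io \
  --region us-east-2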

By default, the node problem detector will not take any action on a failed node. If you would like the EC2 instance to be terminated automatically by the recovery agent, update the DaemonSet as follows:

kubectl edit -n neuron-healthcheck-system ds/node-problem-detector

...
   env:
   - name: ENABLE_RECOVERY
     value: "true"

Test the node problem detector and recovery solution

After the plugin is installed, you can see the Neuron condition show up by running kubectl describe node. We simulate a device error by injecting an error message into the kernel log on the instance:

# Verify node conditions on any node. Neuron conditions should show up.

kubectl describe node ip-100-64-58-151.us-east-2.compute.internal | grep Conditions: -A7

Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  NeuronHealth     False   Fri, 29 Mar 2024 15:52:08 +0800   Thu, 28 Mar 2024 13:59:19 +0800   NeuronHasNoError             Neuron has no error
  MemoryPressure   False   Fri, 29 Mar 2024 15:51:03 +0800   Thu, 28 Mar 2024 13:58:39 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Fri, 29 Mar 2024 15:51:03 +0800   Thu, 28 Mar 2024 13:58:39 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Fri, 29 Mar 2024 15:51:03 +0800   Thu, 28 Mar 2024 13:58:39 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Fri, 29 Mar 2024 15:51:03 +0800   Thu, 28 Mar 2024 13:59:08 +0800   KubeletReady                 kubelet is posting ready status
# To get provider id
kubectl describe node ip-100-64-58-151.us-east-2.compute.internal | grep -i provider | sed -E 's|.*/([^/]+)$|\1|'

i-0381404aa69eae3f6

# Connect to the worker node with SSM and simulate the hardware error on the Neuron device
aws ssm start-session --target i-0381404aa69eae3f6 --region us-east-2

Starting session with SessionId: lindarr-0069460593240662a

sh-4.2$
sh-4.2$ sudo bash
[root@ip-192-168-93-211 bin]# echo "test NEURON_HW_ERR=DMA_ERROR test" >> /dev/kmsg

Around 2 minutes later, you can see that the error has been identified:

kubectl describe node ip-100-64-58-151.us-east-2.compute.internal | grep 'Conditions:' -A7
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  NeuronHealth     True    Fri, 29 Mar 2024 17:42:43 +0800   Fri, 29 Mar 2024 17:42:38 +0800   NeuronHasError_DMA_ERROR     test NEURON_HW_ERR=DMA_ERROR test

...

Events:
  Type     Reason                    Age   From            Message
  ----     ------                    ----  ----            -------
  Warning  NeuronHasError_DMA_ERROR  36s   kernel-monitor  Node condition NeuronHealth is now: True, reason: NeuronHasError_DMA_ERROR, message: "test NEURON_HW_ERR=DMA_ERROR test"

Now that the error has been detected by the node problem detector and the recovery agent has automatically set the node as unhealthy, Amazon EKS cordons the node and evicts the pods on the node:

# Verify the Node scheduling is disabled.
kubectl get node 
NAME                                           STATUS                        ROLES    AGE    VERSION
ip-100-64-1-48.us-east-2.compute.internal      Ready                         <none>   156m   v1.29.0-eks-5e0fdde
ip-100-64-103-26.us-east-2.compute.internal    Ready                         <none>   94s    v1.29.0-eks-5e0fdde
ip-100-64-239-245.us-east-2.compute.internal   Ready                         <none>   154m   v1.29.0-eks-5e0fdde
ip-100-64-52-40.us-east-2.compute.internal     Ready                         <none>   156m   v1.29.0-eks-5e0fdde
ip-100-64-58-151.us-east-2.compute.internal    NotReady,SchedulingDisabled   <none>   27h    v1.29.0-eks-5e0fdde

You can open the CloudWatch console and verify the metric for NeuronHealthCheck. You can see the CloudWatch NeuronHasError_DMA_ERROR metric has the value 1.
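
If you prefer the AWS CLI over the console, the following commands sketch one way to check the metric; the statistic and time window are illustrative, and you may need to add --dimensions based on the list-metrics output.

# List the metrics published in the NeuronHealthCheck namespace
aws cloudwatch list-metrics --namespace NeuronHealthCheck --region us-east-2

# Retrieve recent data points for the DMA error metric (adjust the time window as needed)
aws cloudwatch get-metric-statistics \
  --namespace NeuronHealthCheck \
  --metric-name NeuronHasError_DMA_ERROR \
  --statistics Maximum \
  --period 60 \
  --start-time "$(date -u -d '30 minutes ago' +%Y-%m-%dT%H:%M:%SZ)" \
  --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --region us-east-2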

After replacement, you can see a new worker node has been created:

# The new node with age 28s is the new node

kubectl get node 
NAME                                           STATUS   ROLES    AGE   VERSION
ip-192-168-65-77.us-east-2.compute.internal    Ready    <none>   28s   v1.29.0-eks-5e0fdde
ip-192-168-81-176.us-east-2.compute.internal   Ready    <none>   9d    v1.29.5-eks-5e0fdde
ip-192-168-91-218.us-east-2.compute.internal   Ready    <none>   9d    v1.29.0-eks-5e0fdde
ip-192-168-94-83.us-east-2.compute.internal    Ready    <none>   9d    v1.29.0-eks-5e0fdde

Let’s look at a real-world scenario, in which you’re running a distributed training job, using an MPI operator as outlined in Llama-2 on Trainium, and there is an irrecoverable Neuron error in one of the nodes. Before the plugin is deployed, the training job will become stuck, resulting in wasted time and computational costs. With the plugin deployed, the node problem detector will proactively remove the problem node from the cluster. In the training scripts, it saves checkpoints periodically so that the training will resume from the previous checkpoint.

The following screenshot shows example logs from a distributed training job.

The training has been started. (You can ignore loss=nan for now; it’s a known issue and will be removed. For immediate use, refer to the reduced_train_loss metric.)

The following screenshot shows the checkpoint created at step 77.

Training stopped after one of the nodes has a problem at step 86. The error was injected manually for testing.

After the faulty node was detected and replaced by the Neuron plugin for node problem and recovery, the training process resumed at step 77, which was the last checkpoint.

Although Auto Scaling groups will stop unhealthy nodes, they may encounter issues preventing the launch of replacement nodes. In such cases, training jobs will stall and require manual intervention. However, the stopped node will not incur further charges on the associated EC2 instance.

If you want to take custom actions in addition to stopping instances, you can create CloudWatch alarms watching the metrics NeuronHasError_DMA_ERROR, NeuronHasError_HANG_ON_COLLECTIVES, NeuronHasError_HBM_UNCORRECTABLE_ERROR, NeuronHasError_SRAM_UNCORRECTABLE_ERROR, and NeuronHasError_NC_UNCORRECTABLE_ERROR, and use a CloudWatch Metrics Insights query like SELECT AVG(NeuronHasError_DMA_ERROR) FROM NeuronHealthCheck to sum up these values to evaluate the alarms. The following screenshots show an example.
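
For example, the following is a minimal sketch of creating an alarm on a single error metric with the AWS CLI; the alarm name, threshold, SNS topic, and any required dimensions are assumptions you would adapt to your environment, and the Metrics Insights query approach described above can be configured on the CloudWatch console instead.

# Alarm when any DMA error is reported in the NeuronHealthCheck namespace
aws cloudwatch put-metric-alarm \
  --alarm-name NeuronDmaErrorAlarm \
  --namespace NeuronHealthCheck \
  --metric-name NeuronHasError_DMA_ERROR \
  --statistic Sum \
  --period 60 \
  --evaluation-periods 1 \
  --threshold 1 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --treat-missing-data notBreaching \
  --alarm-actions arn:aws:sns:us-east-2:111122223333:neuron-alerts \
  --region us-east-2
# Add --dimensions from the `aws cloudwatch list-metrics` output if the metric is published with dimensions.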

Clean up

To clean up all the provisioned resources for this post, run the cleanup script:

# neuron-problem-detector-role-$CLUSTER_NAME
eksctl delete podidentityassociation \
  --service-account-name node-problem-detector \
  --namespace neuron-healthcheck-system \
  --cluster $CLUSTER_NAME \
  --region $AWS_REGION

# delete the EKS Cluster
cd data-on-eks/ai-ml/trainium-inferentia
./cleanup.sh

Conclusion

In this post, we showed how the Neuron problem detector and recovery DaemonSet for Amazon EKS works for EC2 instances powered by Trainium and AWS Inferentia. If you’re running Neuron-based EC2 instances and using managed nodes or self-managed node groups, you can deploy the detector and recovery DaemonSet in your EKS cluster and benefit from improved reliability and fault tolerance of your machine learning training workloads in the event of node failure.


About the authors

Harish Rao is a senior solutions architect at AWS, specializing in large-scale distributed AI training and inference. He empowers customers to harness the power of AI to drive innovation and solve complex challenges. Outside of work, Harish embraces an active lifestyle, enjoying the tranquility of hiking, the intensity of racquetball, and the mental clarity of mindfulness practices.

Ziwen Ning is a software development engineer at AWS. He currently focuses on enhancing the AI/ML experience through the integration of AWS Neuron with containerized environments and Kubernetes. In his free time, he enjoys challenging himself with badminton, swimming and other various sports, and immersing himself in music.

Geeta Gharpure is a senior software developer on the Annapurna ML engineering team. She is focused on running large scale AI/ML workloads on Kubernetes. She lives in Sunnyvale, CA and enjoys listening to Audible in her free time.

Darren Lin is a Cloud Native Specialist Solutions Architect at AWS who focuses on domains such as Linux, Kubernetes, Container, Observability, and Open Source Technologies. In his spare time, he likes to work out and have fun with his family.

Read More

Mistral Large 2 is now available in Amazon Bedrock

Mistral Large 2 is now available in Amazon Bedrock

Mistral AI’s Mistral Large 2 (24.07) foundation model (FM) is now generally available in Amazon Bedrock. Mistral Large 2 is the newest version of Mistral Large, and according to Mistral AI offers significant improvements across multilingual capabilities, math, reasoning, coding, and much more.

In this post, we discuss the benefits and capabilities of this new model with some examples.

Overview of Mistral Large 2

Mistral Large 2 is an advanced large language model (LLM) with state-of-the-art reasoning, knowledge, and coding capabilities according to Mistral AI. It is multi-lingual by design, supporting dozens of languages, including English, French, German, Spanish, Italian, Chinese, Japanese, Korean, Portuguese, Dutch, Polish, Arabic, and Hindi. Per Mistral AI, a significant effort was also devoted to enhancing the model’s reasoning capabilities. One of the key focuses during training was to minimize the model’s tendency to hallucinate, or generate plausible-sounding but factually incorrect or irrelevant information. This was achieved by fine-tuning the model to be more cautious and discerning in its responses, making sure it provides reliable and accurate outputs. Additionally, the new Mistral Large 2 is trained to acknowledge when it can’t find solutions or doesn’t have sufficient information to provide a confident answer.

According to Mistral AI, the model is also proficient in coding, trained on over 80 programming languages such as Python, Java, C, C++, JavaScript, Bash, Swift, and Fortran. With its best-in-class agentic capabilities, it can natively call functions and output JSON, enabling seamless interaction with external systems, APIs, and tools. Additionally, Mistral Large 2 (24.07) boasts advanced reasoning and mathematical capabilities, making it a powerful asset for tackling complex logical and computational challenges.

Mistral Large 2 also offers an increased context window of 128,000 tokens. At the time of writing, the model (mistral.mistral-large-2407-v1:0) is available in the us-west-2 AWS Region.

Get started with Mistral Large 2 on Amazon Bedrock

If you’re new to using Mistral AI models, you can request model access on the Amazon Bedrock console. For more details, see Manage access to Amazon Bedrock foundation models.

To test Mistral Large 2 on the Amazon Bedrock console, choose Text or Chat under Playgrounds in the navigation pane. Then choose Select model and choose Mistral as the category and Mistral Large 24.07 as the model.

By choosing View API request, you can also access the model using code examples in the AWS Command Line Interface (AWS CLI) and AWS SDKs. You can use model IDs such as mistral.mistral-large-2407-v1:0, as shown in the following code:

$ aws bedrock-runtime invoke-model \
  --model-id mistral.mistral-large-2407-v1:0 \
  --body '{"prompt":"<s>[INST] this is where you place your input text [/INST]", "max_tokens":200, "temperature":0.5, "top_p":0.9, "top_k":50}' \
  --cli-binary-format raw-in-base64-out \
  --region us-west-2 \
  invoke-model-output.txt

In the following sections, we dive into the capabilities of Mistral Large 2.

Increased context window

Mistral Large 2 supports a context window of 128,000 tokens, compared to Mistral Large (24.02), which had a 32,000-token context window. This larger context window is important for developers because it allows the model to process and understand longer pieces of text, such as entire documents or code files, without losing context or coherence. This can be particularly useful for tasks like code generation, documentation analysis, or any application that requires understanding and processing large amounts of text data.

Generating JSON and tool use

Mistral Large 2 now offers a native JSON output mode. This feature allows developers to receive the model’s responses in a structured, easy-to-read format that can be readily integrated into various applications and systems. With JSON being a widely adopted data exchange standard, this capability simplifies the process of working with the model’s outputs, making it more accessible and practical for developers across different domains and use cases. To learn more about how to generate JSON with the Converse API, refer to Generating JSON with the Amazon Bedrock Converse API.

To generate JSON with the Converse API, you need to define a toolSpec. In the following code, we present an example for a travel agent company that will take passenger information and requests and convert them to JSON:

# Define the tool configuration
import json
tool_list = [
    {
        "toolSpec": {
            "name": "travel_agent",
            "description": "Converts trip details as a json structure.",
            "inputSchema": {
                "json": {
                    "type": "object",
                    "properties": {
                        "origin_airport": {
                            "type": "string",
                            "description": "Origin airport (IATA code)"
                        },
                        "destination_airport": {
                            "type": "boolean",
                            "description": "Destination airport (IATA code)"
                        },
                        "departure_date": {
                            "type": "string",
                            "description": "Departure date",
                        }, 
                        "return_date": {
                            "type": "string",
                            "description": "Return date",
                        }
                    },
                    "required": [
                        "origin_airport",
                        "destination_airport",
                        "departure_date",
                        "return_date"
                    ]
                }
            }
        }
    }
]
content = """
I would like to book a flight from New York (JFK) to London (LHR) for a round-trip.
The departure date is June 15, 2023, and the return date is June 25, 2023.

For the flight preferences, I would prefer to fly with Delta or United Airlines. 
My preferred departure time range is between 8 AM and 11 AM, and my preferred arrival time range is between 9 AM and 1 PM (local time in London). 
I am open to flights with one stop, but no more than that.
Please include non-stop flight options if available.
"""

message = {
    "role": "user",
    "content": [
        { "text": f"<content>{content}</content>" },
        { "text": "Please create a well-structured JSON object representing the flight booking request, ensuring proper nesting and organization of the data. Include sample data for better understanding. Create the JSON based on the content within the <content> tags." }
    ],
}
# Bedrock client configuration (the Region and model ID match the examples in this post)
import boto3
bedrock_client = boto3.client("bedrock-runtime", region_name="us-west-2")
model_id = "mistral.mistral-large-2407-v1:0"
response = bedrock_client.converse(
    modelId=model_id,
    messages=[message],
    inferenceConfig={
        "maxTokens": 500,
        "temperature": 0.1
    },
    toolConfig={
        "tools": tool_list
    }
)

response_message = response['output']['message']
response_content_blocks = response_message['content']
content_block = next((block for block in response_content_blocks if 'toolUse' in block), None)
tool_use_block = content_block['toolUse']
tool_result_dict = tool_use_block['input']

print(json.dumps(tool_result_dict, indent=4))

We get the following response:

{
    "origin_airport": "JFK",
    "destination_airport": "LHR",
    "departure_date": "2023-06-15",
    "return_date": "2023-06-25"
}

Mistral Large 2 was able to correctly take our user query and convert the appropriate information to JSON.

Mistral Large 2 also supports the Converse API and tool use. You can use the Amazon Bedrock API to give a model access to tools that can help it generate responses for messages that you send to the model. For example, you might have a chat application that lets users find the most popular song played on a radio station. To answer a request for the most popular song, a model needs a tool that can query and return the song information. The following code shows an example for getting the correct train schedule:

# Define the tool configuration
toolConfig = {
    "tools": [
        {
            "toolSpec": {
                "name": "shinkansen_schedule",
                "description": "Fetches Shinkansen train schedule departure times for a specified station and time.",
                "inputSchema": {
                    "json": {
                        "type": "object",
                        "properties": {
                            "station": {
                                "type": "string",
                                "description": "The station name."
                            },
                            "departure_time": {
                                "type": "string",
                                "description": "The departure time in HH:MM format."
                            }
                        },
                        "required": ["station", "departure_time"]
                    }
                }
            }
        }
    ]
}
# Define the shinkansen_schedule tool
def shinkansen_schedule(station, departure_time):
    schedule = {
        "Tokyo": {"09:00": "Hikari", "12:00": "Nozomi", "15:00": "Kodama"},
        "Osaka": {"10:00": "Nozomi", "13:00": "Hikari", "16:00": "Kodama"}
    }
    return schedule.get(station, {}).get(departure_time, "No train found")
def prompt_mistral(prompt):
    messages = [{"role": "user", "content": [{"text": prompt}]}]
    converse_api_params = {
        "modelId": model_id,
        "messages": messages,
        "toolConfig": toolConfig,  
        "inferenceConfig": {"temperature": 0.0, "maxTokens": 400},
    }

    response = bedrock_client.converse(**converse_api_params)
    
    if response['output']['message']['content'][0].get('toolUse'):
        tool_use = response['output']['message']['content'][0]['toolUse']
        tool_name = tool_use['name']
        tool_inputs = tool_use['input']

        if tool_name == "shinkansen_schedule":
            print("Mistral wants to use the shinkansen_schedule tool")
            station = tool_inputs["station"]
            departure_time = tool_inputs["departure_time"]
            
            try:
                result = shinkansen_schedule(station, departure_time)
                print("Train schedule result:", result)
            except ValueError as e:
                print(f"Error: {str(e)}")

    else:
        print("Mistral responded with:")
        print(response['output']['message']['content'][0]['text'])
prompt_mistral("What train departs Tokyo at 9:00?")

We get the following response:

Mistral wants to use the shinkansen_schedule tool
Train schedule result: Hikari

Mistral Large 2 was able to correctly identify the shinkansen tool and demonstrate its use.
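
To close the loop, you can return the tool’s output to the model in a follow-up Converse call so it can produce a natural language answer. The following is a minimal sketch that assumes the response, tool_use, and result variables from prompt_mistral are available (for example, by extending that function):

# Send the tool result back to the model so it can generate a final answer (sketch)
follow_up_messages = [
    {"role": "user", "content": [{"text": "What train departs Tokyo at 9:00?"}]},
    response['output']['message'],  # assistant turn containing the toolUse block
    {
        "role": "user",
        "content": [{
            "toolResult": {
                "toolUseId": tool_use['toolUseId'],
                "content": [{"text": str(result)}],  # e.g., "Hikari"
                "status": "success"
            }
        }]
    },
]

final_response = bedrock_client.converse(
    modelId=model_id,
    messages=follow_up_messages,
    toolConfig=toolConfig,
    inferenceConfig={"temperature": 0.0, "maxTokens": 400},
)
print(final_response['output']['message']['content'][0]['text'])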

Multilingual support

Mistral Large 2 now supports a large number of character-based languages such as Chinese, Japanese, Korean, Arabic, and Hindi. This expanded language support allows developers to build applications and services that can cater to users from diverse linguistic backgrounds. With multilingual capabilities, developers can create localized UIs, provide language-specific content and resources, and deliver a seamless experience for users regardless of their native language.

In the following example, we translate customer emails generated by the author into different languages such as Hindi and Japanese:

emails= """
"I recently bought your RGB gaming keyboard and absolutely love the customizable lighting features! Can you guide me on how to set up different profiles for each game I play?"
"I'm trying to use the macro keys on the gaming keyboard I just purchased, but they don't seem to be registering my inputs. Could you help me figure out what might be going wrong?"
"I'm considering buying your gaming keyboard and I'm curious about the key switch types. What options are available and what are their main differences?"
"I wanted to report a small issue where my keyboard's space bar is a bit squeaky. However, your quick-start guide was super helpful and I fixed it easily by following the lubrication tips. Just thought you might want to know!"
"My new gaming keyboard stopped working within a week of purchase. None of the keys respond, and the lights don't turn on. I need a solution or a replacement as soon as possible."
"I've noticed that the letters on the keys of my gaming keyboard are starting to fade after several months of use. Is this covered by the warranty?"
"I had an issue where my keyboard settings would reset every time I restarted my PC. I figured out it was due to a software conflict and resolved it by updating the firmware. Just wanted to ask if there are any new updates coming soon?"
"I've been having trouble with the keyboard software not saving my configurations, and it's starting to get frustrating. What can be done to ensure my settings are saved permanently?"
"""
def converse(prompt, inference_config):
    messages = [{"role": "user", "content": [{"text": prompt}]}]
    response = bedrock_client.converse(
        messages=messages,
        modelId=model_id,
        inferenceConfig=inference_config
    )
    generated_text = response['output']['message']['content'][0]['text']
    print(generated_text)
    return generated_text

prompt=f"""

emails={emails}

Translate the following customer emails into these languages:

1. Hindi
2. Japanese

Label each language section accordingly""".format(emails=emails)

inference_config = {"temperature": 0.0, "maxTokens": 4000, "topP": 0.1}

response = converse(prompt, inference_config) 

We get the following response:

1. "मैंने हाल ही में आपका RGB गेमिंग कीबोर्ड खरीदा और कस्टमाइजेबल लाइटिंग फीचर्स से बहुत प्रेम करता हूँ! क्या आप मुझे बता सकते हैं कि मैं हर गेम के लिए अलग-अलग प्रोफाइल कैसे सेट कर सकता हूँ?"
2. "मैं अपने नए गेमिंग कीबोर्ड पर मैक्रो कीज का उपयोग करने की कोशिश कर रहा हूँ, लेकिन वे मेरे इनपुट्स को रजिस्टर नहीं कर रहे हैं। क्या आप मुझे बता सकते हैं कि क्या गलत हो सकता है?"
3. "मैं आपका गेमिंग कीबोर्ड खरीदने के बारे में सोच रहा हूँ और मुझे की स्विच टाइप्स के बारे में जानना है। क्या विकल्प उपलब्ध हैं और उनके मुख्य अंतर क्या हैं?"
4. "मैं यह रिपोर्ट करना चाहता था कि मेरे कीबोर्ड का स्पेस बार थोड़ा सा चरमरा रहा है। हालाँकि, आपका क्विक-स्टार्ट गाइड बहुत मददगार था और मैंने लुब्रिकेशन टिप्स का पालन करके इसे आसानी से ठीक कर दिया। बस यह जानना चाहता था कि शायद आपको पता चलना चाहिए!"
5. "मेरा नया गेमिंग कीबोर्ड खरीद के एक सप्ताह के भीतर काम करना बंद हो गया। कोई भी की जवाब नहीं दे रहा है, और लाइट्स भी नहीं चालू हो रहे हैं। मुझे एक समाधान या एक रिप्लेसमेंट जितनी जल्दी हो सके चाहिए।"
6. "मैंने नोट किया है कि मेरे गेमिंग कीबोर्ड के कीज पर अक्षर कुछ महीनों के उपयोग के बाद फेड होने लगे हैं। क्या यह वारंटी के तहत कवर है?"
7. "मेरे कीबोर्ड सेटिंग्स हर बार मेरे पीसी को रीस्टार्ट करने पर रीसेट हो जाती थीं। मैंने पता लगाया कि यह एक सॉफ्टवेयर कॉन्फ्लिक्ट के कारण था और फर्मवेयर अपडेट करके इसे सुलझा दिया। बस पूछना चाहता था कि क्या कोई नए अपडेट आने वाले हैं?"
8. "मेरे कीबोर्ड सॉफ्टवेयर मेरी कॉन्फ़िगरेशन को सेव नहीं कर रहे हैं, और यह अब परेशान करने लगा है। मेरे सेटिंग्स को स्थायी रूप से सेव करने के लिए क्या किया जा सकता है?"

### Japanese

1. "最近、あなたのRGBゲーミングキーボードを購入し、カスタマイズ可能なライティング機能が大好きです! 各ゲームごとに異なるプロファイルを設定する方法を教えていただけますか?"
2. "新しく購入したゲーミングキーボードのマクロキーを使おうとしていますが、入力が認識されません。何が問題か教えていただけますか?"
3. "あなたのゲーミングキーボードを購入しようと考えていますが、キースイッチの種類について知りたいです。どのようなオプションがあり、その主な違いは何ですか?"
4. "キーボードのスペースバーが少しきしむようになりました。ただし、クイックスタートガイドが非常に役立ち、潤滑のヒントに従って簡単に修理できました。ただ、知っておいてほしいと思いました!"
5. "新しいゲーミングキーボードが購入後1週間で動作しなくなりました。どのキーも反応せず、ライトも点きません。できるだけ早く解決策や交換をお願いします。"
6. "ゲーミングキーボードのキーの文字が数ヶ月の使用後に消え始めました。これは保証に含まれますか?"
7. "キーボードの設定がPCを再起動するたびにリセットされていました。ソフトウェアの競合が原因であることがわかり、ファームウェアを更新して解決しました。新しいアップデートが近々来るのか教えていただけますか?"
8. "キーボードソフトウェアが設定を保存しないため、非常に困っています。設定を永続的に保存するために何ができますか?"

Coding tasks

Mistral Large 2 has been trained on over 80 coding languages, including popular ones like Python, Java, C, C++, JavaScript, and Bash, as well as more specialized languages such as Swift and Fortran. This comprehensive language support empowers developers to tackle a wide range of coding tasks and projects across various domains and platforms. Whether you’re working on web development, mobile applications, scientific computing, or system programming, Mistral Large 2 can assist you with code generation, debugging, refactoring, and other coding-related tasks. For example, the following code requests the model to generate a Python function:

prompt = """
Write a Python function called palindrome_prime_finder that finds all prime numbers within a given range that are also palindromes when written in base 10 (decimal system).
"""

inference_config = {"temperature": 0.0, "maxTokens": 1000, "topP": 0.1}

response = converse(prompt, inference_config)

We get the following response:

Certainly! Below is a Python function called `palindrome_prime_finder` that finds all prime numbers within a given range that are also palindromes when written in base 10.

```python
def is_prime(n):
    """Check if a number is prime."""
    if n <= 1:
        return False
    if n <= 3:
        return True
    if n % 2 == 0 or n % 3 == 0:
        return False
    i = 5
    while i * i <= n:
        if n % i == 0 or n % (i + 2) == 0:
            return False
        i += 6
    return True

def is_palindrome(n):
    """Check if a number is a palindrome."""
    return str(n) == str(n)[::-1]

def palindrome_prime_finder(start, end):
    """Find all prime numbers within a given range that are also palindromes."""
    palindrome_primes = []
    for num in range(start, end + 1):
        if is_prime(num) and is_palindrome(num):
            palindrome_primes.append(num)
    return palindrome_primes

# Example usage:
start_range = 1
end_range = 1000
print(palindrome_prime_finder(start_range, end_range))
```

### Explanation:
1. **is_prime(n)**: This function checks if a number `n` is prime. It uses a basic primality test that eliminates even numbers and numbers divisible by 3 early, then checks divisibility up to the square root of `n`.

2. **is_palindrome(n)**: This function checks if a number `n` is a palindrome by converting it to a string and comparing it to its reverse.

3. **palindrome_prime_finder(start, end)**: This function iterates through all numbers in the given range `[start, end]`. For each number, it checks if it is both a prime and a palindrome. If both conditions are met, the number is added to the list `palindrome_primes`.

### Example Usage:
The example usage finds all palindromic prime numbers between 1 and 1000. You can adjust the `start_range` and `end_range` variables to find palindromic primes in any desired range.

Conclusion

Mistral AI’s Mistral Large 2 FM is now available on Amazon Bedrock in the US West (Oregon) Region. To get started with Mistral Large 2 in Amazon Bedrock, visit the Amazon Bedrock console.

Interested in diving deeper? Check out the Mistral-on-AWS repo. For more information about Mistral AI on Amazon Bedrock, refer to Mistral AI models now available on Amazon Bedrock.


About the Authors

Niithiyn Vijeaswaran is a Solutions Architect at AWS. His area of focus is generative AI and AWS AI Accelerators. He holds a Bachelor’s degree in Computer Science and Bioinformatics. Niithiyn works closely with the Generative AI GTM team to enable AWS customers on multiple fronts and accelerate their adoption of generative AI. He’s an avid fan of the Dallas Mavericks and enjoys collecting sneakers.

Armando Diaz is a Solutions Architect at AWS. He focuses on generative AI, AI/ML, and Data Analytics. At AWS, Armando helps customers integrate cutting-edge generative AI capabilities into their systems, fostering innovation and competitive advantage. When he’s not at work, he enjoys spending time with his wife and family, hiking, and traveling the world.

Preston Tuggle is a Sr. Specialist Solutions Architect working on generative AI.

Read More

LLM experimentation at scale using Amazon SageMaker Pipelines and MLflow

LLM experimentation at scale using Amazon SageMaker Pipelines and MLflow

Large language models (LLMs) have achieved remarkable success in various natural language processing (NLP) tasks, but they may not always generalize well to specific domains or tasks. You may need to customize an LLM to adapt to your unique use case, improving its performance on your specific dataset or task. You can customize the model using prompt engineering, Retrieval Augmented Generation (RAG), or fine-tuning. Evaluation of a customized LLM against the base LLM (or other models) is necessary to make sure the customization process has improved the model’s performance on your specific task or dataset.

In this post, we dive into LLM customization using fine-tuning, exploring the key considerations for successful experimentation and how Amazon SageMaker with MLflow can simplify the process using Amazon SageMaker Pipelines.

LLM selection and fine-tuning journeys

When working with LLMs, customers often have different requirements. Some may be interested in evaluating and selecting the most suitable pre-trained foundation model (FM) for their use case, while others might need to fine-tune an existing model to adapt it to a specific task or domain. Let’s explore two customer journeys:

  • LLM evaluation and selection journey – In this user journey, you evaluate and compare the available pre-trained FMs to select the most suitable model for your use case before any customization.

  • Fine-tuning an LLM for a specific task or domain adaptation – In this user journey, you need to customize an LLM for a specific task or domain data. This requires fine-tuning the model. The fine-tuning process may involve one or more experiments, each requiring multiple iterations with different combinations of datasets, hyperparameters, prompts, and fine-tuning techniques, such as full or Parameter-Efficient Fine-Tuning (PEFT). Each iteration can be considered a run within an experiment.

Fine-tuning an LLM can be a complex workflow for data scientists and machine learning (ML) engineers to operationalize. To simplify this process, you can use Amazon SageMaker with MLflow and SageMaker Pipelines for fine-tuning and evaluation at scale. In this post, we describe the step-by-step solution and provide the source code in the accompanying GitHub repository.

Solution overview

Running hundreds of experiments, comparing the results, and keeping track of the ML lifecycle can become very complex. This is where MLflow can help streamline the ML lifecycle, from data preparation to model deployment. By integrating MLflow into your LLM workflow, you can efficiently manage experiment tracking, model versioning, and deployment, providing reproducibility. With MLflow, you can track and compare the performance of multiple LLM experiments, identify the best-performing models, and deploy them to production environments with confidence.

You can create workflows with SageMaker Pipelines that enable you to prepare data, fine-tune models, and evaluate model performance with simple Python code for each step.
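For illustration, one way to express a pipeline step as plain Python is the @step decorator from the SageMaker Python SDK. The following is a minimal sketch under that assumption; the function name, parameters, and instance type are placeholders, and the accompanying notebook may define its steps differently.

from sagemaker.workflow.function_step import step

# Illustrative step definition; the real preprocessing logic lives in the accompanying repository
@step(instance_type="ml.m5.xlarge")
def preprocess(mlflow_arn: str, experiment_name: str) -> str:
    # Download the dataset, log it to MLflow, and return a reference for downstream steps
    ...
    return "s3://<bucket>/<prefix>/train"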

Now you can use SageMaker managed MLflow to run LLM fine-tuning and evaluation experiments at scale. Specifically:

  • MLflow can manage tracking of fine-tuning experiments, comparing evaluation results of different runs, model versioning, deployment, and configuration (such as data and hyperparameters)
  • SageMaker Pipelines can orchestrate multiple experiments based on the experiment configuration

The following figure shows the overview of the solution.

Solution overview with MLflow

Prerequisites

Before you begin, make sure you have the following prerequisites in place:

  • Hugging Face login token – You need a Hugging Face login token to access the models and datasets used in this post. For instructions to generate a token, see User access tokens.
  • SageMaker access with required IAM permissions – You need to have access to SageMaker with the necessary AWS Identity and Access Management (IAM) permissions to create and manage resources. Make sure you have the required permissions to create notebooks, deploy models, and perform other tasks outlined in this post. To get started, see Quick setup to Amazon SageMaker. Follow this post to make sure you have the proper IAM role configured for MLflow.

Set up an MLflow tracking server

MLflow is directly integrated in Amazon SageMaker Studio. To create an MLflow tracking server to track experiments and runs, complete the following steps:

  1. On the SageMaker Studio console, choose MLflow under Applications in the navigation pane.

MLflow in SageMaker Studio

  2. For Name, enter an appropriate server name.
  3. For Artifact storage location (S3 URI), enter the location of an Amazon Simple Storage Service (Amazon S3) bucket.
  4. Choose Create.

MLflow setup

The tracking server may require up to 20 minutes to initialize and become operational. When it’s running, you can note its ARN to use in the llm_fine_tuning_experiments_mlflow.ipynb notebook. The ARN will have the following format:

arn:aws:sagemaker:<region>:<account_id>:mlflow-tracking-server/<tracking_server_name>
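If you prefer to check readiness programmatically rather than watching the console, you can describe the tracking server with boto3. The following sketch assumes the describe_mlflow_tracking_server API available in recent boto3 versions; the tracking server name is a placeholder.

import boto3

sagemaker_client = boto3.client("sagemaker")

# Placeholder: use the server name you entered in the console
response = sagemaker_client.describe_mlflow_tracking_server(
    TrackingServerName="<tracking_server_name>"
)
print(response["TrackingServerStatus"])  # for example, "Creating" or "Created"
print(response["TrackingServerArn"])     # use this ARN in the notebook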

For subsequent steps, you can refer to the detailed description provided in this post, as well as the step-by-step instructions outlined in the llm_fine_tuning_experiments_mlflow.ipynb notebook. You can launch the notebook in Amazon SageMaker Studio Classic or SageMaker JupyterLab.

Overview of SageMaker Pipelines for experimentation at scale

We use SageMaker Pipelines to orchestrate LLM fine-tuning and evaluation experiments. With SageMaker Pipelines, you can:

  • Run multiple LLM experiment iterations simultaneously, reducing overall processing time and cost
  • Effortlessly scale up or down based on changing workload demands
  • Monitor and visualize the performance of each experiment run with MLflow integration
  • Invoke downstream workflows for further analysis, deployment, or model selection

MLflow integration with SageMaker Pipelines requires the tracking server ARN. You also need to add the mlflow and sagemaker-mlflow Python packages as dependencies in the pipeline setup. Then you can use MLflow in any pipeline step with the following code snippet:

mlflow_arn="" #get the tracking ARN from step 1
experiment_name="" #experiment name of your choice
mlflow.set_tracking_uri(mlflow_arn)
mlflow.set_experiment(experiment_name)

with mlflow.start_run(run_name=run_name) as run:
        #code for the corresponding step

Log datasets with MLflow

With MLflow, you can log your dataset information alongside other key metrics, such as hyperparameters and model evaluation. This enables tracking and reproducibility of experiments across different runs, allowing for more informed decision-making about which models perform best on specific tasks or domains. By logging your datasets with MLflow, you can store metadata, such as dataset descriptions, version numbers, and data statistics, alongside your MLflow runs.

In the preprocess step, you can log training data and evaluation data. In this example, we download the data from a Hugging Face dataset. We are using HuggingFaceH4/no_robots for fine-tuning and evaluation. First, you need to set the MLflow tracking ARN and experiment name to log data. After you process the data and select the required number of rows, you can log the data using the log_input API of MLflow. See the following code:

import mlflow
import pandas as pd
from datasets import load_dataset

mlflow.set_tracking_uri(mlflow_arn)
mlflow.set_experiment(experiment_name)

# Load the dataset (dataset_name is HuggingFaceH4/no_robots in this example)
dataset = load_dataset(dataset_name, split="train")
# Data processing implementation

# Data logging with MLflow
df_train = pd.DataFrame(dataset)
training_data = mlflow.data.from_pandas(df_train, source=training_input_path)
mlflow.log_input(training_data, context="training")

df_evaluate = pd.DataFrame(eval_dataset)
evaluation_data = mlflow.data.from_pandas(df_evaluate, source=eval_input_path)
mlflow.log_input(evaluation_data, context="evaluation")

Fine-tune a Llama model with LoRA and MLflow

To streamline the process of fine-tuning an LLM with Low-Rank Adaptation (LoRA), you can use MLflow to track hyperparameters and save the resulting model. You can experiment with different LoRA parameters for training and log these parameters along with other key metrics, such as training loss and evaluation metrics. This enables tracking of your fine-tuning process, allowing you to identify the most effective LoRA parameters for a given dataset and task.

For this example, we use the PEFT library from Hugging Face to fine-tune a Llama 3 model. With this library, we can perform LoRA fine-tuning, which offers faster training with reduced memory requirements. It can also work well with less training data.
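For reference, a LoRA configuration with the PEFT library typically looks like the following sketch. The rank, alpha, dropout, and target modules shown here are illustrative assumptions rather than the exact values used in llama3_fine_tuning.py; these are the kinds of parameters you would log to MLflow and vary across runs.

from peft import LoraConfig, get_peft_model

# Illustrative LoRA hyperparameters; log these to MLflow and vary them across experiments
lora_config = LoraConfig(
    r=8,                                  # low-rank dimension
    lora_alpha=16,                        # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)

# base_model is the loaded Llama 3 model (for example, from transformers.AutoModelForCausalLM)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()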

We use the HuggingFace class from the SageMaker SDK to create a training step in SageMaker Pipelines. The actual implementation of training is defined in llama3_fine_tuning.py. Just like the previous step, we need to set the MLflow tracking URI and use the same run_id:

mlflow.set_tracking_uri(args.mlflow_arn)
mlflow.set_experiment(args.experiment_name)

with mlflow.start_run(run_id=args.run_id) as run:
    # implementation
    ...

While using the Trainer class from Transformers, you can specify where the training run should be reported. In our case, we want to log all the training arguments and metrics to MLflow:

trainer = transformers.Trainer(
        model=model,
        train_dataset=lm_train_dataset,
        eval_dataset=lm_test_dataset,
        args=transformers.TrainingArguments(
            per_device_train_batch_size=per_device_train_batch_size,
            per_device_eval_batch_size=per_device_eval_batch_size,
            gradient_accumulation_steps=gradient_accumulation_steps,
            gradient_checkpointing=gradient_checkpointing,
            logging_steps=2,
            num_train_epochs=num_train_epochs,
            learning_rate=learning_rate,
            bf16=True,
            save_strategy="no",
            output_dir="outputs",
            report_to="mlflow",
            run_name="llama3-peft",
        ),
        data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )

When the training is complete, you can save the full model. To do so, you need to merge the adapter weights into the base model:

from peft import PeftModel
from transformers import AutoTokenizer

model = PeftModel.from_pretrained(base_model, new_model)
model = model.merge_and_unload()
save_dir = "/opt/ml/model/"
model.save_pretrained(save_dir, safe_serialization=True, max_shard_size="2GB")

# Reload tokenizer to save it
tokenizer = AutoTokenizer.from_pretrained(args.model_id, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"
tokenizer.save_pretrained(save_dir)

The merged model can be logged to MLflow with the model signature, which defines the expected format for model inputs and outputs, including any additional parameters needed for inference:

import mlflow
from mlflow.models import infer_signature

params = {
    "top_p": 0.9,
    "temperature": 0.9,
    "max_new_tokens": 200,
}

signature = infer_signature("inputs", "generated_text", params=params)

mlflow.transformers.log_model(
    transformers_model={"model": model, "tokenizer": tokenizer},
    signature=signature,
    artifact_path="model",
    model_config=params,
)

Evaluate the model

Model evaluation is the key step for selecting the optimal training arguments for fine-tuning the LLM on a given dataset. In this example, we use the built-in evaluation capability of MLflow with the mlflow.evaluate() API. For question answering models, the default evaluator logs exact_match, token_count, toxicity, flesch_kincaid_grade_level, and ari_grade_level.

MLflow can load the model that was logged in the fine-tuning step. The base model is downloaded from Hugging Face and adapter weights are downloaded from the logged model. See the following code:

logged_model = f"runs:/{preprocess_step_ret['run_id']}/model"
loaded_model = mlflow.pyfunc.load_model(model_uri=logged_model)
results = mlflow.evaluate(
    model=loaded_model,
    data=df,
    targets="answer",
    model_type="question-answering",
    evaluator_config={"col_mapping": {"inputs": "question"}},
)
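After the evaluation finishes, you can inspect both the aggregated metrics and the per-row results. The following is a short sketch; the eval_results_table key assumes MLflow's default evaluation artifact name.

# Aggregated metrics such as exact_match, toxicity ratio, and readability grades
print(results.metrics)

# Per-row evaluation results as a pandas DataFrame
eval_table = results.tables["eval_results_table"]
print(eval_table.head())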

These evaluation results are logged in MLflow in the same run that logged the data processing and fine-tuning step.

Create the pipeline

After you have the code ready for all the steps, you can create the pipeline:

from sagemaker import get_execution_role
from sagemaker.workflow.pipeline import Pipeline

pipeline = Pipeline(name=pipeline_name, steps=[evaluate_finetuned_llama7b_instruction_mlflow], parameters=[lora_config])
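Before you run it, the pipeline definition typically needs to be registered with SageMaker. A minimal sketch uses the SDK's upsert method with the execution role imported above (an assumption about your environment; you can pass any suitable IAM role ARN instead):

# Create or update the pipeline definition in SageMaker
pipeline.upsert(role_arn=get_execution_role())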

You can run the pipeline using the SageMaker Studio UI or using the following code snippet in the notebook:

execution1 = pipeline.start()

Compare experiment results

After you start the pipeline, you can track the experiment in MLflow. Each run will log details of the preprocessing, fine-tuning, and evaluation steps. The preprocessing step will log training and evaluation data, and the fine-tuning step will log all training arguments and LoRA parameters. You can select these experiments and compare the results to find the optimal training parameters and best fine-tuned model.

You can open the MLflow UI from SageMaker Studio.

Open MLflow UI

Then you can select the experiment to filter out runs for that experiment. You can select multiple runs to make the comparison.

Select runs in experiment to compare

When you compare, you can analyze the evaluation score against the training arguments.

Compare training arguments and results

Register the model

After you analyze the evaluation results of different fine-tuned models, you can select the best model and register it in MLflow. This model will be automatically synced with Amazon SageMaker Model Registry.

MLflow model card

Model in SageMaker model registry
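If you prefer to register the selected model programmatically instead of through the UI, you can use the MLflow client. The following is a sketch; the run ID and registered model name are placeholders.

import mlflow

mlflow.set_tracking_uri(mlflow_arn)

# Placeholders: use the run ID of the best fine-tuning run and a model name of your choice
model_uri = "runs:/<run_id>/model"
registered_model = mlflow.register_model(model_uri=model_uri, name="llama3-peft-finetuned")
print(registered_model.name, registered_model.version)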

Deploy the model

You can deploy the model through the SageMaker console or SageMaker SDK. You can pull the model artifact from MLflow and use the ModelBuilder class to deploy the model:

from sagemaker.serve import ModelBuilder
from sagemaker.serve.mode.function_pointers import Mode
from sagemaker.serve import SchemaBuilder

model_builder = ModelBuilder(
    mode=Mode.SAGEMAKER_ENDPOINT,
    role_arn="<role_arn>",
    model_metadata={
        # both model path and tracking server ARN are required if you use an mlflow run ID or mlflow model registry path as input
        "MLFLOW_MODEL_PATH": "runs:/<run_id>/model",
        "MLFLOW_TRACKING_ARN": "<MLFLOW_TRACKING_ARN>",
    },
    instance_type="ml.g5.12xlarge"
)
model = model_builder.build()
predictor = model.deploy( initial_instance_count=1, instance_type="ml.g5.12xlarge" )
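When the endpoint is in service, you can invoke it through the returned predictor. The payload below is an assumption about the serving container's request schema, so adjust it to match your deployment.

# Hypothetical invocation payload; the exact schema depends on the serving container
response = predictor.predict({
    "inputs": "Summarize the benefits of LoRA fine-tuning in two sentences.",
    "parameters": {"max_new_tokens": 200, "temperature": 0.9, "top_p": 0.9},
})
print(response)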

Clean up

To avoid incurring ongoing costs, delete the resources you created as part of this post:

  1. Delete the MLflow tracking server.
  2. Run the last cell in the notebook to delete the SageMaker pipeline:
import boto3

sagemaker_client = boto3.client('sagemaker')
response = sagemaker_client.delete_pipeline(
    PipelineName=pipeline_name,
)

Conclusion

In this post, we focused on how to run LLM fine-tuning and evaluation experiments at scale using SageMaker Pipelines and MLflow. You can use managed MLflow from SageMaker to compare training parameters and evaluation results to select the best model and deploy that model in SageMaker. We also provided sample code in a GitHub repository that shows the fine-tuning, evaluation, and deployment workflow for a Llama 3 model.

You can start taking advantage of SageMaker with MLflow for traditional MLOps or to run LLM experimentation at scale.


About the Authors

Jagdeep Singh Soni is a Senior Partner Solutions Architect at AWS based in the Netherlands. He uses his passion for Generative AI to help customers and partners build GenAI applications using AWS services. Jagdeep has 15 years of experience in innovation, experience engineering, digital transformation, cloud architecture and ML applications.

Dr. Sokratis Kartakis is a Principal Machine Learning and Operations Specialist Solutions Architect for Amazon Web Services. Sokratis focuses on enabling enterprise customers to industrialize their ML and generative AI solutions by exploiting AWS services and shaping their operating model, such as MLOps/FMOps/LLMOps foundations, and transformation roadmap using best development practices. He has spent over 15 years inventing, designing, leading, and implementing innovative end-to-end production-level ML and AI solutions in the domains of energy, retail, health, finance, motorsports, and more.

Kirit Thadaka is a Senior Product Manager at AWS focused on generative AI experimentation on Amazon SageMaker. Kirit has extensive experience working with customers to build scalable workflows for MLOps to make them more efficient at bringing models to production.

Piyush Kadam is a Senior Product Manager for Amazon SageMaker, a fully managed service for generative AI builders. Piyush has extensive experience delivering products that help startups and enterprise customers harness the power of foundation models.

Read More

Discover insights from Amazon S3 with Amazon Q S3 connector 

Discover insights from Amazon S3 with Amazon Q S3 connector 

Amazon Q is a fully managed, generative artificial intelligence (AI) powered assistant that you can configure to answer questions, provide summaries, generate content, gain insights, and complete tasks based on data in your enterprise. The enterprise data required for these generative-AI powered assistants can reside in varied repositories across your organization. One common repository to store data is Amazon Simple Storage Service (Amazon S3), which is an object storage service that stores data as objects within storage buckets. Customers of all sizes and industries can securely index data from a variety of data sources such as document repositories, websites, content management systems, customer relationship management systems, messaging applications, databases, and so on.

To build a generative AI-based conversational application that’s integrated with the data sources containing the relevant content, an enterprise needs to invest time, money, and people. First, you need to build connectors to the data sources. Next, you need to index the data to make it available for a Retrieval Augmented Generation (RAG) approach, where relevant passages are delivered with high accuracy to a large language model (LLM). To do this, you need to select an index that provides the capabilities to index the content for semantic and vector search, build the infrastructure to retrieve the data, rank the answers, and build a feature-rich web application. You also need to hire and staff a large team to build, maintain, and manage such a system.

Amazon Q Business is a fully managed generative AI-powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on data and information in your enterprise systems. Amazon Q Business can help you get fast, relevant answers to pressing questions, solve problems, generate content, and take actions using the data and expertise found in your company’s information repositories, code, and enterprise systems such as Atlassian Jira and others. To do this, Amazon Q provides native data source connectors that can index content into a built-in retriever and uses an LLM to provide accurate, well written answers. A data source connector within Amazon Q helps to integrate and synchronize data from multiple repositories into one index.

Amazon Q Business offers multiple prebuilt connectors to a large number of data sources, including Atlassian Jira, Atlassian Confluence, Amazon S3, Microsoft SharePoint, Salesforce, and many more, and can help you create your generative AI solution with minimal configuration. For a full list of Amazon Q supported data source connectors, see Amazon Q connectors.

Now you can use the Amazon Q S3 connector to index your data on S3 and build a generative AI assistant that can derive insights from the data stored. Amazon Q generates comprehensive responses to natural language queries from users by analyzing information across content that it has access to. Amazon Q also supports access control for your data so that the right users can access the right content. Its responses to questions are based on the content that your end user has permissions to access.

This post shows how to configure the Amazon Q S3 connector and derive insights by creating a generative-AI powered conversation experience on AWS using Amazon Q while using access control lists (ACLs) to restrict access to documents based on user permissions.

Finding accurate answers from content in S3 using Amazon Q Business

After you integrate Amazon Q Business with Amazon S3, users can ask questions about the content stored in S3. For example, a user might ask about the main points discussed in a blog post on cloud security, the installation steps outlined in a user guide, findings from a case study on hybrid cloud usage, market trends noted in an analyst report, or key takeaways from a whitepaper on data encryption. This integration helps users to quickly find the specific information they need, improving their understanding and ability to make informed business decisions.

Secure querying with ACL crawling and identity crawling

Secure querying is when a user runs a query and is returned answers from documents that the user has access to and not from documents that the user does not have access to. To enable users to do secure querying, Amazon Q Business honors ACLs of the documents. Amazon Q Business does this by first supporting the indexing of ACLs. Indexing documents with ACLs is crucial for maintaining data security, because documents without ACLs are treated as public. Second, at query time the user’s credentials (email address) are passed along with the query so that only answers from documents that are relevant to the query and that the user is authorized to access are displayed.

A document’s ACL, included in the metadata.json or acl.json files alongside the document in the S3 bucket, contains details such as the user’s email address and local groups.

When a user signs in to a web application to conduct a search, their credentials (such as an email address) need to match what’s in the ACL of the document to return results from that document. The web application that the user uses to retrieve answers would be connected to an identity provider (IdP) or the AWS IAM Identity Center. The user’s credentials from the IdP or IAM Identity Center are referred to here as the federated user credentials. The federated user credentials are passed along with the query so that Amazon Q can return the answers from the documents that this user has access to. However, there are occasions when a user’s federated credentials might be absent from the S3 bucket ACLs. In these instances, only the user’s local alias and local groups are specified in the document’s ACL. Therefore, it’s necessary to map these federated user credentials to the corresponding local user alias and local group in the document’s ACL.
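If you need to create such a mapping programmatically, the Amazon Q Business CreateUser API can associate a federated user with local aliases from a data source. The following boto3 sketch is based on that assumption; all identifiers are placeholders, and the alias structure may need to be adjusted for your setup.

import boto3

qbusiness = boto3.client("qbusiness")

# Placeholders: replace with your application, index, and data source identifiers
qbusiness.create_user(
    applicationId="<application_id>",
    userId="mary_major@example.com",  # federated user credential (email)
    userAliases=[
        {
            "indexId": "<index_id>",
            "dataSourceId": "<s3_data_source_id>",
            "userId": "mary_major",   # local alias used in the document ACLs
        }
    ],
)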

Any document or folder without an explicit ACL Deny clause is treated as public.

Solution overview

As an administrator user of Amazon Q, the high-level steps to set up a generative AI chat application are to create an Amazon Q application, connect to different data sources, and finally deploy your web experience. An Amazon Q web experience is the chat interface that you create using your Amazon Q application. Then, your users can chat with your organization’s Amazon Q web experience, and it can be integrated with IAM Identity Center. You can configure and customize your Amazon Q web experience using either the AWS Management Console for Amazon Q or the Amazon Q API.

Amazon Q understands and respects your existing identities, roles, and permissions and uses this information to personalize its interactions. If a user doesn’t have permission to access data without Amazon Q, they can’t access it using Amazon Q either. The following table outlines which documents each user is authorized to access for our use case. The documents being used in this example are a subset of AWS public documents. In this blog post, we will focus on users Arnav (Guest), Mary, and Pat and their assigned groups.

  First name | Last name | Group                | Document type authorized for access
  Arnav      | Desai     | (guest, no group)    | Blogs
  Pat        | Candella  | Customer             | Blogs, user guides
  Jane       | Doe       | Sales                | Blogs, user guides, and case studies
  John       | Stiles    | Marketing            | Blogs, user guides, case studies, and analyst reports
  Mary       | Major     | Solutions architect  | Blogs, user guides, case studies, analyst reports, and whitepapers

Architecture diagram

The following diagram illustrates the solution architecture. Amazon S3 is the data source and documents along with the ACL information are passed to Amazon Q from S3. The user submits a query to the Amazon Q application. Amazon Q retrieves the user and group information and provides answers based on the documents that the user has access to.

Architecture Diagram

In the upcoming sections, we will show you how to implement this architecture.

Prerequisites

For this walkthrough, you should have the following prerequisites:

Prepare your S3 bucket as a data source

In the AWS Region list, choose US East (N. Virginia) as the Region. You can choose any Region that Amazon Q is available in, but make sure you remain in the same Region when creating all other resources. To prepare an S3 bucket as a data source, create an S3 bucket and note its name. Replace <REPLACE-WITH-NAME-OF-S3-BUCKET> with the name of the bucket in the commands below. In a terminal with the AWS Command Line Interface (AWS CLI) or AWS CloudShell, run the following commands to upload the documents to the data source bucket:

aws s3 cp s3://aws-ml-blog/artifacts/building-a-secure-search-application-with-access-controls-kendra/docs.zip .

unzip docs.zip

aws s3 cp Data/ s3://<REPLACE-WITH-NAME-OF-S3-BUCKET>/Data/ --recursive

aws s3 cp Meta/ s3://<REPLACE-WITH-NAME-OF-S3-BUCKET>/Meta/ --recursive

The documents being queried are stored in an S3 bucket. Each document type has a separate folder: blogs, case-studies, analyst reports, user guides, and white papers. This folder structure is contained in a folder named Data as shown below:

S3 Bucket Structure

Each object in S3 is considered a single document. Any <object-name>.metadata.json file and access control list (ACL) file is considered metadata for the object it’s associated with and not treated as a separate document. In this example, metadata files including the ACLs are in a folder named Meta. We use the Amazon Q S3 connector to configure this S3 bucket as the data source. When the data source is synced with the Amazon Q index, it crawls and indexes all documents and collects the ACLs and document attributes from the metadata files. To learn more about ACLs using metadata files, see Amazon S3 document metadata. Here’s the sample metadata JSON file:

{
   "Attributes": {
      "DocumentType": "user-guides"
   },
   "AccessControlList": [
      { "Access": "ALLOW", "Name": "customer", "Type": "GROUP" },
      { "Access": "ALLOW", "Name": "AWS-Sales", "Type": "GROUP" },
      { "Access": "ALLOW", "Name": "AWS-Marketing", "Type": "GROUP" },
      { "Access": "ALLOW", "Name": "AWS-SA", "Type": "GROUP" }
   ]
}

Create users and groups in IAM Identity Center

In this section, you create the following mapping for demonstration:

  User  | Group name
  Arnav | (no group)
  Pat   | customer
  Mary  | AWS-SA

To create users:

  1. Open the AWS IAM Identity Center console.
  2. If you haven’t enabled IAM Identity Center, choose Enable. If there’s a pop-up, choose how you want to enable IAM Identity Center. For this example, select Enable only in this AWS account. Choose Continue.
  3. In the IAM Identity Center dashboard, choose Users in the navigation pane.
  4. Choose Add User.
  5. Enter the user details for Mary:
    1. Username: mary_major
    2. Email address: mary_major@example.com
      Note: Use or create a real email address for each user to use in a later step.
    3. First name: Mary
    4. Last name: Major
    5. Display name: Mary Major
  6. Skip the optional fields and choose Next to create the user.
  7. In the Add user to groups page, choose Next and then choose Add user. Follow the same steps to create users for Pat and Arnav (Guest user).
    (You will assign users to groups at a later step.)

To create groups:

  1. Now, you will create two groups: AWS-SA and customer. Choose Groups on the navigation pane and choose Create group.

Create group

  2. For the group name, enter AWS-SA, add user Mary to the group, and choose Create group.
  3. Similarly, create a group named customer, add user Pat to the group, and choose Create group.
  4. Now, add multi-factor authentication to the users following the instructions sent to the user email. For more details, see Multi-factor authentication for Identity Center users. When done, you will have the users and groups set up on IAM Identity Center.

Create and configure your Amazon Q application

In this step, you create an Amazon Q application that powers the conversation web experience:

  1. On the AWS Management Console for Amazon Q, in the Region list, choose US East (N. Virginia).
  2. On the Getting started page, select Enable identity-aware sessions. Once enabled, Amazon Q connected to IAM Identity Center should be displayed. Choose Subscribe in Q Business.
  3. On the Amazon Q Business console, choose Get started.
  4. On the Applications page, choose Create application.
  5. On the Create application page, enter a name for Application name and leave everything else with the default values.
  6. Choose Create.
  7. On the Select retriever page, for Retrievers, select Use native retriever.
  8. Choose Next. This will take you to the Connect data sources page.

Configure Amazon S3 as the data source

In this section, you walk through an example of adding an S3 connector. The S3 data source contains blogs, user guides, case studies, analyst reports, and whitepapers.

To add the S3 connector:

  1. On the Connect data sources page, select Amazon S3 connector.
  2. For Data source name, enter a name for your data source.
  3. In the IAM role section, select Create new service role (Recommended).
  4. In the Sync scope section, browse to your S3 bucket containing the data files.
  5. Under Advanced settings, for Metadata files prefix folder location, enter Meta/
  6. Choose Filter patterns. Under Include patterns, enter Data/ as the prefix and choose Add.
  7. For Frequency under Sync run schedule, choose Run on demand.
  8. Leave the rest as default and choose Add data source. Wait until the data source is added.
  9. On the Connect data sources page, choose Next. This will take you to the Add users and groups page.

Add users and groups in Amazon Q

In this section, you set up users and groups to showcase how access can be managed based on the permissions.

  1. On the Add users and groups page, choose Assign existing users and groups and choose Next.
  2. Enter the users and groups you want to add and choose Assign. You will have to enter the user names and groups in the search box and select the user or group. Verify that users and groups are correctly displayed under the Users and Groups tabs respectively.
    Assign user
  3. Select the Current subscription. In this example, we chose Q Business Lite for groups. Choose the same subscription for users under the Users tab. You can also update subscriptions after creating the application.
  4. Leave the Service role name as default and choose Create application.

Sync S3 data source

With your application created, you will crawl and index the documents in the S3 bucket created at the beginning of the process.

  1. Select the name of the application.

Select name of application

  2. Go to the Data sources tab. Select the radio button next to the S3 data source and choose Sync now.

Sync now

  3. The sync can take from a few minutes to a few hours. Wait for the sync to complete. Verify the sync is complete and documents have been added.

Wait for sync to complete

Run queries with Amazon Q

Now that you have configured the Amazon Q application and integrated it with IAM Identity Center, you can test queries from different users based on their group permissions. This will demonstrate how Amazon Q respects the access control rules set up in the Amazon S3 data source.

You have three users for testing—Pat from the Customer group, Mary from the AWS-SA group, and Arnav who isn’t part of any group. According to the access control list (ACL) configuration, Pat should have access to blogs and user guides, Mary should have access to blogs, user guides, case studies, analyst reports, and whitepapers, and Arnav should have access only to blogs.

In the following steps, you will sign in as each user and ask various questions to see what responses Amazon Q provides based on the permitted document types for their respective groups. You will also test edge cases where users try to access information from restricted sources to validate the access control functionality.

  • In the Amazon Q Business console, choose Applications on the navigation pane and copy the Web experience URL.

Web experience URL

Sign in as Pat to the Amazon Q chat interface.

Pat is part of the Customer group and has access to blogs and user guides.

When asked a question like “What is AWS?” Amazon Q will provide a summary pulling information from blogs and user guides, highlighting the sources at the end of each excerpt.

What is AWS?

Try asking a question that requires information from user guides, such as “How do I set up an AWS account?” Amazon Q will summarize relevant details from the permitted user guide sources for Pat’s group.

How do I set up an AWS account?

However, if you, as Pat, ask a question that requires information from whitepapers, analyst reports, or case studies, Amazon Q will indicate that it could not find any relevant information from the sources Pat has access to.

Ask a question such as “What are the strategic planning assumptions for the year 2025?” to see this.

Strategic planning

Sign in as Mary to the Amazon Q chat interface.

Sign out as user Pat. Start a new incognito browser session or use a different browser. Copy the web experience URL and sign in as user Mary. Repeat these steps each time you need to sign in as a different user.

Mary is part of the AWS-SA group, so she has access to blogs, user guides, case studies, analyst reports, and whitepapers.

When Mary asks the same question about strategic planning, Amazon Q will provide a comprehensive summary pulling information from all the permitted sources.

Mary strategic planning

With Mary’s sign-in, you can ask various other questions related to AWS services, architectures, or solutions, and Amazon Q will effectively summarize information from across all the content types Mary’s group has access to.

Key benefits of AWS

Sign in as Arnav to the Amazon Q chat interface.

Arnav is not part of any group and is able to access only blogs. If Arnav asks a question about Amazon Polly, Amazon Q will return blog posts.

Amazon Polly

When Arnav tries to get information from the user guides, access is restricted. If they ask about something like how to set up an AWS account, Amazon Q responds that it could not find relevant information.

Set up an AWS account

This shows how Amazon Q respects the data access rules configured in the Amazon S3 data source, allowing users to gain insights only from the content their group has permissions to view, while still providing comprehensive answers when possible within those boundaries.

Troubleshooting

Troubleshooting your Amazon S3 connector provides information about error codes you might see for the Amazon S3 connector and suggested troubleshooting actions. If you encounter an HTTP status code 403 (Forbidden) error when you open your Amazon Q Business application, it means that the user is unable to access the application. See Troubleshooting Amazon Q Business and identity provider integration for common causes and how to address them.

Frequently asked questions

Q. Why isn’t Amazon Q Business answering any of my questions?

A. Verify that you have synced your data source on the Amazon Q console. Also, check the ACLs to ensure you have the required permissions to retrieve answers from Amazon Q.

Q. How can I sync documents without ACLs?

A. When configuring the Amazon S3 connector, under Sync scope, you can optionally choose not to include the metadata or ACL configuration file location in Advanced settings. This will allow you to sync documents without ACLs.

Sync scope

Q. I updated the contents of my S3 data source but Amazon Q Business answers using old data.

A. After content has been updated in your S3 data source location, you must re-sync the contents for the updated data to be picked up by Amazon Q. Go to the Data sources tab. Select the radio button next to the S3 data source and choose Sync now. After the sync is complete, verify that the updated data is reflected by running queries on Amazon Q.

Sync now

Q. I am unable to sign in as a new user through the web experience URL.

A. Clear your browser cookies and sign in as a new user.

Q. I keep trying to sign in but am getting this error:

Error

A. Try signing in from a different browser or clear browser cookies and try again.

Q. What are the supported document formats and what is considered a document in Amazon S3?

A. See Supported document types and What is a document? to learn more.

Call to action

Explore other features in Amazon Q Business such as:

  • The Amazon Q Business document enrichment feature helps you control both what documents and document attributes are ingested into your index and also how they’re ingested. Using document enrichment, you can create, modify, or delete document attributes and document content when you ingest them into your Amazon Q Business index. For example, you can scrub personally identifiable information (PII) by choosing to delete any document attributes related to PII.
  • Amazon Q Business features
    • Filtering using metadata – Use document attributes to customize and control users’ chat experience. Currently supported only if you use the Amazon Q Business API.
    • Source attribution with citations – Verify responses using Amazon Q Business source attributions.
    • Upload files and chat – Let users upload files directly into chat and use uploaded file data to perform web experience tasks.
    • Quick prompts – Feature sample prompts to inform users of the capabilities of their Amazon Q Business web experience.
  • To improve retrieved results and customize the user chat experience, you can map document attributes from your data sources to fields in your Amazon Q index. Learn more by exploring Amazon Q Business Amazon S3 data source connector field mappings.

Clean up

To avoid incurring future charges and to clean out unused roles and policies, delete the resources you created: the Amazon Q application, data sources, and corresponding IAM roles.

  1. To delete the Amazon Q application, go to the Amazon Q console and, on the Applications page, select your application.
  2. On the Actions drop-down menu, choose Delete.
  3. To confirm deletion, enter delete in the field and choose Delete. Wait until you get the confirmation message; the process can take up to 15 minutes.
  4. To delete the S3 bucket created in Prepare your S3 bucket as a data source, empty the bucket and then follow the steps to delete the bucket.
  5. Delete your IAM Identity Center instance.

Conclusion

This blog post has walked you through the steps to build a secure, permissions-based generative AI solution using Amazon Q and Amazon S3 as the data source. By configuring user groups and mapping their access privileges to different document folders in S3, we demonstrated that Amazon Q respects these access control rules. When users query the AI assistant, it provides comprehensive responses by analyzing only the content their group has permission to view, preventing unauthorized access to restricted information. This solution allows organizations to safely unlock insights from their data repositories using generative AI while ensuring data access governance.

Don’t let your data’s potential go untapped. Continue exploring how Amazon Q can transform your enterprise data to gain actionable insights. Join the conversation and share your thoughts or questions in the comments section below.


About the Author

Kruthi Jayasimha Rao is a Partner Solutions Architect with a focus in AI and ML. She provides technical guidance to AWS Partners in following best practices to build secure, resilient, and highly available solutions in the AWS Cloud.


Keagan Mirazee is a Partner Solutions Architect specializing in Generative AI to assist AWS Partners in engineering reliable and scalable cloud solutions.


Dipti Kulkarni is a Sr. Software Development Engineer for Amazon Q. Dipti is a passionate engineer building connectors for Amazon Q.

Read More

Boosting Salesforce Einstein’s code generating model performance with Amazon SageMaker

Boosting Salesforce Einstein’s code generating model performance with Amazon SageMaker

This post is a joint collaboration between Salesforce and AWS and is being cross-published on both the Salesforce Engineering Blog and the AWS Machine Learning Blog.

Salesforce, Inc. is an American cloud-based software company headquartered in San Francisco, California. It provides customer relationship management (CRM) software and applications focused on sales, customer service, marketing automation, ecommerce, analytics, and application development. Salesforce is building toward artificial general intelligence (AGI) for business, enabling predictive and generative functions within their flagship software-as-a-service (SaaS) CRM, and working toward intelligent automations using artificial intelligence (AI) as well as agents.

Salesforce Einstein is a set of AI technologies that integrate with Salesforce’s Customer Success Platform to help businesses improve productivity and client engagement. Einstein has a list of over 60 features, unlocked at different price points and segmented into four main categories: machine learning (ML), natural language processing (NLP), computer vision, and automatic speech recognition. Einstein delivers advanced AI capabilities into sales, service, marketing, and other functions, empowering companies to deliver more personalized and predictive customer experiences. Einstein has out-of-the-box AI features such as sales email generation in Sales Cloud and service replies in Service Cloud. They also have tools such as Copilot, Prompt, and Model Builder, three tools contained in the Einstein 1 Studio, that allow organizations to build custom AI functionality and roll it out to their users.

The Salesforce Einstein AI Platform team is the group supporting development of Einstein applications. They are committed to enhancing the performance and capabilities of AI models, with a particular focus on large language models (LLMs) for use with Einstein product offerings. These models are designed to provide advanced NLP capabilities for various business applications. Their mission is to continuously refine these LLMs and AI models by integrating state-of-the-art solutions, collaborating with leading technology providers including open source communities and public cloud services like AWS, and building these into a unified AI platform. This helps make sure Salesforce customers receive the most advanced AI technology available.

In this post, we share how the Salesforce Einstein AI Platform team boosted latency and throughput of their code generation LLM using Amazon SageMaker.

The challenge with hosting LLMs

In the beginning of 2023, the team started looking at solutions to host CodeGen, Salesforce’s in-house open source LLM for code understanding and code generation. The CodeGen model allows users to translate natural language, such as English, into programming languages, such as Python. Because they were already using AWS for inference for their smaller predictive models, they were looking to extend the Einstein platform to help them host CodeGen. Salesforce developed an ensemble of CodeGen models (Inline for automatic code completion, BlockGen for code block generation, and FlowGPT for process flow generation) specifically tuned for the Apex programming language. Salesforce Apex is a certified framework for building SaaS apps on top of Salesforce’s CRM functionality. They were looking for a solution that can securely host their model and help them handle a large volume of inference requests as well as multiple concurrent requests at scale. They also needed to be able to meet their throughput and latency requirements for their co-pilot application (EinsteinGPT for Developers). EinsteinGPT for Developers simplifies the start of development by creating smart Apex based on natural language prompts. Developers can accelerate coding tasks by scanning for code vulnerabilities and getting real-time code suggestions within the Salesforce integrated development environment (IDE), as shown in the following screenshot.

style="margin:

The Einstein team conducted a comprehensive evaluation of various tools and services, including open source options and paid solutions. After assessing these options, they found that SageMaker provided the best access to GPUs, scalability, flexibility, and performance optimizations for a wide range of scenarios, particularly in addressing their challenges with latency and throughput.

Why Salesforce Einstein chose SageMaker

SageMaker offered several specific features that proved essential to meeting Salesforce’s requirements:

  • Multiple serving engines – SageMaker includes specialized deep learning containers (DLCs), libraries, and tooling for model parallelism and large model inference (LMI) containers. LMI containers are a set of high-performance Docker containers purpose built for LLM inference. With these containers, you can use high-performance open source inference libraries like FasterTransformer, TensorRT-LLM, vLLM, and Transformers NeuronX. These containers bundle together a model server with open source inference libraries to deliver an all-in-one LLM serving solution. The Einstein team liked how SageMaker provided quick-start notebooks that get them deploying these popular open source models in minutes.
  • Advanced batching strategies – The SageMaker LMI allows customers to optimize performance of their LLMs by enabling features like batching, which groups multiple requests together before they hit the model. Dynamic batching instructs the server to wait a predefined amount of time and batch up all requests that occur in that window with a maximum of 64 requests, while paying attention to a configured preferred size. This optimizes the use of GPU resources and balances throughput with latency, ultimately reducing the latter. The Einstein team liked how they were able to use dynamic batching through the LMI to increase throughput for their Codegen models while minimizing latency.
  • Efficient routing strategy – By default, SageMaker endpoints have a random routing strategy. SageMaker also supports a least outstanding requests (LOR) strategy, which allows SageMaker to optimally route requests to the instance that’s best suited to serve that request. SageMaker makes this possible by monitoring the load of the instances behind your endpoint and the models or inference components that are deployed on each instance. Customers have the flexibility to choose either algorithm depending on their workload needs. Along with the capability to handle multiple model instances across several GPUs, the Einstein team liked how the SageMaker routing strategy ensures that traffic is evenly and efficiently distributed to model instances, preventing any single instance from becoming a bottleneck. A brief configuration sketch for this routing strategy follows this list.
  • Access to high-end GPUs – SageMaker provides access to top-end GPU instances, which are essential for running LLMs efficiently. This is particularly valuable given the current market shortages of high-end GPUs. SageMaker allowed the Einstein team to use auto-scaling of these GPUs to meet demand without manual intervention.
  • Rapid iteration and deployment – While not directly related to latency, the ability to quickly test and deploy changes using SageMaker notebooks helps in reducing the overall development cycle, which can indirectly impact latency by accelerating the implementation of performance improvements. The use of notebooks enabled the Einstein team to shorten their overall deployment time and get their models hosted in production much faster.
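To make the routing discussion concrete, the following is a minimal boto3 sketch of selecting the least outstanding requests strategy when creating an endpoint configuration; the endpoint configuration name, model name, and instance settings are placeholders rather than Salesforce's actual configuration.

import boto3

sagemaker_client = boto3.client("sagemaker")

# Placeholders: the model must already exist in SageMaker
sagemaker_client.create_endpoint_config(
    EndpointConfigName="codegen-endpoint-config",
    ProductionVariants=[
        {
            "VariantName": "AllTraffic",
            "ModelName": "<model_name>",
            "InstanceType": "ml.g5.12xlarge",
            "InitialInstanceCount": 1,
            # Route each request to the instance with the fewest outstanding requests
            "RoutingConfig": {"RoutingStrategy": "LEAST_OUTSTANDING_REQUESTS"},
        }
    ],
)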

These features collectively help optimize the performance of LLMs by reducing latency and improving throughput, making Amazon SageMaker a robust solution for managing and deploying large-scale machine learning models.

One of the key capabilities was how using SageMaker LMI provided a blueprint of model performance optimization parameters for NVIDIA’s FasterTransformer library to use with CodeGen. When the team initially deployed CodeGen 2.5, a 7B parameter model on Amazon Elastic Compute Cloud (Amazon EC2), the model wasn’t performing well for inference. Initially, for a code block generation task, it could only handle six requests per minute, with each request taking over 30 seconds to process. This was far from efficient and scalable. However, after using the SageMaker FasterTransformer LMI notebook and referencing the advanced SageMaker-provided guides to understand how to optimize the different endpoint parameters provided, there was a significant improvement in model performance. The system now handles around 400 requests per minute with a reduced latency of approximately seven seconds per request, each containing about 512 tokens. This represents an over 6,500 percent increase in throughput after optimization. This enhancement was a major breakthrough, demonstrating how the capabilities of SageMaker were instrumental in optimizing the throughput of the LLM and reducing cost. (The FasterTransformer backend has been deprecated by NVIDIA; the team is working toward migrating to the TensorRT (TRT-LLM) LMI.)

To assess the performance of LLMs, the Einstein team focuses on two key metrics:

  • Throughput – Measured by the number of tokens an LLM can generate per second
  • Latency – Determined by the time it takes to generate these tokens for individual requests

Extensive performance testing and benchmarking was conducted to track these metrics. Before using SageMaker, CodeGen models had a lower token-per-second rate and higher latencies. With SageMaker optimization, the team observed significant improvements in both throughput and latency, as shown in the following figure.

Codegen Latency Graph

Latency and throughput changes with different techniques for CodeGen1 and CodeGen2.5 models. CodeGen1 is the original version of CodeGen, which is a 16B model. CodeGen2.5 is the optimized version, which is a 7B model. For more information about CodeGen 2.5, refer to CodeGen2.5: Small, but mighty.

New challenges and opportunities

The primary challenge that the team faced when integrating SageMaker was enhancing the platform to include specific functionalities that were essential for their projects. For instance, they needed additional features for NVIDIA’s FasterTransformer to optimize their model performance. Through a productive collaboration with the SageMaker team, they successfully integrated this support, which initially was not available.

Additionally, the team identified an opportunity to improve resource efficiency by hosting multiple LLMs on a single GPU instance. Their feedback helped develop the inference component feature, which now allows Salesforce and other SageMaker users to utilize GPU resources more effectively. These enhancements were crucial in tailoring the platform to Salesforce’s specific needs.

Key takeaways

The team took away the following key lessons from optimizing models in SageMaker for future projects:

  • Stay updated – It’s crucial to keep up with the latest inferencing engines and optimization techniques because these advancements significantly influence model optimization.
  • Tailor optimization strategies – Model-specific optimization strategies like batching and quantization require careful handling and coordination, because each model might require a tailored approach.
  • Implement cost-effective model hosting – You can optimize the allocation of limited GPU resources to control expenses. Techniques such as virtualization can be used to host multiple models on a single GPU, reducing costs.
  • Keep pace with innovations – The field of model inferencing is rapidly evolving with technologies like Amazon SageMaker JumpStart and Amazon Bedrock. Developing strategies for adopting and integrating these technologies is imperative for future optimization efforts.

Conclusion

In this post, we shared how the Salesforce Einstein AI Platform team boosted latency and throughput of their code generation LLM using SageMaker, and saw an over 6,500 percent increase in throughput after optimization.

Looking to host your own LLMs on SageMaker? To get started, see this guide.

_______________________________________________________________________

About the Authors

Pawan Agarwal is the Senior Director of Software Engineering at Salesforce. He leads efforts in Generative and Predictive AI, focusing on inferencing, training, fine-tuning, and notebooking technologies that power the Salesforce Einstein suite of applications.

Rielah De Jesus is a Principal Solutions Architect at AWS who has successfully helped various enterprise customers in the DC, Maryland, and Virginia area move to the cloud. In her current role she acts as a customer advocate and technical advisor focused on helping organizations like Salesforce achieve success on the AWS platform. She is also a staunch supporter of Women in IT and is very passionate about finding ways to creatively use technology and data to solve everyday challenges.

Read More

Detect and protect sensitive data with Amazon Lex and Amazon CloudWatch Logs

Detect and protect sensitive data with Amazon Lex and Amazon CloudWatch Logs

In today’s digital landscape, the protection of personally identifiable information (PII) is not just a regulatory requirement, but a cornerstone of consumer trust and business integrity. Organizations use advanced natural language detection services like Amazon Lex for building conversational interfaces and Amazon CloudWatch for monitoring and analyzing operational data.

One risk many organizations face is the inadvertent exposure of sensitive data through logs, voice chat transcripts, and metrics. This risk is exacerbated by the increasing sophistication of cyber threats and the stringent penalties associated with data protection violations. Dealing with massive datasets is not just about identifying and categorizing PII. The challenge also lies in implementing robust mechanisms to obfuscate and redact this sensitive data. At the same time, it’s crucial to make sure these security measures don’t undermine the functionality and analytics critical to business operations.

This post addresses this pressing pain point, offering prescriptive guidance on safeguarding PII through detection and masking techniques specifically tailored for environments using Amazon Lex and CloudWatch Logs.

Solution overview

To address this critical challenge, our solution uses the slot obfuscation feature in Amazon Lex and the data protection capabilities of CloudWatch Logs, tailored specifically for detecting and protecting PII in logs.

In Amazon Lex, slots are used to capture and store user input during a conversation. Slots are placeholders within an intent that represent an action the user wants to perform. For example, in a flight booking bot, slots might include departure city, destination city, and travel dates. Slot obfuscation makes sure any information collected through Amazon Lex conversational interfaces, such as names, addresses, or any other PII entered by users, is obfuscated at the point of capture. This method reduces the risk of sensitive data exposure in chat logs and playbacks.

In CloudWatch Logs, data protection and custom identifiers add an additional layer of security by enabling the masking of PII within session attributes, input transcripts, and other sensitive log data that is specific to your organization.
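As an illustration of this capability, the following boto3 sketch attaches a data protection policy that audits and masks email addresses in a Lex conversation log group; the log group name is a placeholder, and the data identifiers you choose should match the PII types you classify later in this post.

import boto3
import json

logs_client = boto3.client("logs")

# Illustrative policy; add the managed data identifiers that match your classification
policy = {
    "Name": "lex-pii-protection",
    "Version": "2021-06-01",
    "Statement": [
        {
            "Sid": "audit-policy",
            "DataIdentifier": ["arn:aws:dataprotection::aws:data-identifier/EmailAddress"],
            "Operation": {"Audit": {"FindingsDestination": {}}},
        },
        {
            "Sid": "redact-policy",
            "DataIdentifier": ["arn:aws:dataprotection::aws:data-identifier/EmailAddress"],
            "Operation": {"Deidentify": {"MaskConfig": {}}},
        },
    ],
}

logs_client.put_data_protection_policy(
    logGroupIdentifier="<your-lex-conversation-log-group>",
    policyDocument=json.dumps(policy),
)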

This approach minimizes the footprint of sensitive information across these services and helps with compliance with data protection regulations.

In the following sections, we demonstrate how to identify and classify your data, locate your sensitive data, and finally monitor and protect it, both in transit and at rest, especially in areas where it may inadvertently appear. The following are the four ways to do this:

  • Amazon Lex – Monitor and protect data with Amazon Lex using slot obfuscation and selective conversation log capture
  • CloudWatch Logs – Monitor and protect data with CloudWatch Logs using playbacks and log group policies
  • Amazon S3 – Monitor and protect data with Amazon Simple Storage Service (Amazon S3) using bucket security and encryption
  • Service Control Policies – Monitor and protect data with governance controls and risk management policies, using Service Control Policies (SCPs) to prevent changes to Amazon Lex chatbots and CloudWatch Logs log groups, and to restrict unmasked data viewing in CloudWatch Logs Insights

Identify and classify your data

The first step is to identify and classify the data flowing through your systems. This involves understanding the types of information processed and determining their sensitivity level.

To determine all the slots in an intent in Amazon Lex, complete the following steps:

  1. On the Amazon Lex console, choose Bots in the navigation pane.
  2. Choose your preferred bot.
  3. In the navigation pane, choose the locale under All Languages and choose Intents.
  4. Choose the required intent from the list.
  5. In the Slots section, make note of all the slots within the intent.

Lex bot slots
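You can also enumerate the slots programmatically with the Lex V2 models API, which can help when auditing many bots. The following sketch uses placeholder bot, version, locale, and intent identifiers.

import boto3

lex_client = boto3.client("lexv2-models")

# Placeholders: replace with your bot, locale, and intent identifiers
ids = dict(botId="<bot_id>", botVersion="DRAFT", localeId="en_US", intentId="<intent_id>")

for summary in lex_client.list_slots(**ids)["slotSummaries"]:
    print(summary["slotName"], "-", summary.get("description", ""))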

After you identify the slots within the intent, it’s important to classify them according to their sensitivity level and the potential impact of unauthorized access or disclosure. For example, you may have the following data types:

  • Name
  • Address
  • Phone number
  • Email address
  • Account number

Email address and physical mailing address are often considered a medium classification level. Sensitive data, such as name, account number, and phone number, should be tagged with a high classification level, indicating the need for stringent security measures. These guidelines can help with systematically evaluating data.

Locate your data stores

After you classify the data, the next step is to locate where this data resides or is processed in your systems and applications. For services involving Amazon Lex and CloudWatch, it’s crucial to identify all data stores and their roles in handling PII.

CloudWatch captures logs generated by Amazon Lex, including interaction logs that might contain PII. Regular audits and monitoring of these logs are essential to detect any unauthorized access or anomalies in data handling.

Amazon S3 is often used in conjunction with Amazon Lex for storing call recordings or transcripts, which may contain sensitive information. Making sure these storage buckets are properly configured with encryption, access controls, and lifecycle policies is vital to protect the stored data.

Organizations can create a robust framework for protection by identifying and classifying data, along with pinpointing the data stores (like CloudWatch and Amazon S3). This framework should include regular audits, access controls, and data encryption to prevent unauthorized access and comply with data protection laws.

Monitor and protect data with Amazon Lex

In this section, we demonstrate how to protect your data with Amazon Lex using slot obfuscation and selective conversation log capture.

Slot obfuscation in Amazon Lex

Sensitive information can appear in the input transcripts of conversation logs. It’s essential to implement mechanisms that detect and mask or redact PII in these transcripts before they are stored or logged.

In the development of conversational interfaces using Amazon Lex, safeguarding PII is crucial to maintain user privacy and comply with data protection regulations. Slot obfuscation provides a mechanism to automatically obscure PII within conversation logs, making sure sensitive information is not exposed. When configuring an intent within an Amazon Lex bot, developers can mark specific slots—placeholders for user-provided information—as obfuscated. This setting tells Amazon Lex to replace the actual user input for these slots with a placeholder in the logs. For instance, enabling obfuscation for slots designed to capture sensitive information like account numbers or phone numbers makes sure any matching input is masked in the conversation log. Slot obfuscation allows developers to significantly reduce the risk of inadvertently logging sensitive information, thereby enhancing the privacy and security of the conversational application. It’s a best practice to identify and mark all slots that could potentially capture PII during the bot design phase to provide comprehensive protection across the conversation flow.

To enable obfuscation for a slot from the Amazon Lex console, complete the following steps:

  1. On the Amazon Lex console, choose Bots in the navigation pane.
  2. Choose your preferred bot.
  3. In the navigation pane, choose the locale under All Languages and choose Intents.
  4. Choose your preferred intent from the list.
  5. In the Slots section, expand the slot details.
  6. Choose Advanced options to access additional settings.
  7. Select Enable slot obfuscation.
  8. Choose Update slot to save the changes.

Lex enable slot obfuscation
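
You can also enable slot obfuscation programmatically with the Lex V2 models API. The following is a minimal boto3 sketch; the bot, version, locale, intent, and slot identifiers are placeholders, and the call passes the existing slot configuration back unchanged except for the obfuscation setting:

import boto3

lex = boto3.client("lexv2-models")

# Placeholder identifiers; replace with your own bot, version, locale, intent, and slot IDs.
BOT_ID, BOT_VERSION, LOCALE_ID = "BOTID1234", "DRAFT", "en_US"
INTENT_ID, SLOT_ID = "INTENTID12", "SLOTID1234"

# Read the current slot definition so the update can pass it back unchanged.
slot = lex.describe_slot(
    botId=BOT_ID, botVersion=BOT_VERSION, localeId=LOCALE_ID,
    intentId=INTENT_ID, slotId=SLOT_ID,
)

# Re-submit the slot with obfuscation enabled.
lex.update_slot(
    botId=BOT_ID, botVersion=BOT_VERSION, localeId=LOCALE_ID,
    intentId=INTENT_ID, slotId=SLOT_ID,
    slotName=slot["slotName"],
    slotTypeId=slot["slotTypeId"],
    valueElicitationSetting=slot["valueElicitationSetting"],
    obfuscationSetting={"obfuscationSettingType": "DefaultObfuscation"},
)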

Selective conversation log capture

Amazon Lex lets you select how conversation logs are captured for text and audio data from live conversations, enabling you to filter certain types of information out of the logs. Through selective capture of only the necessary data, businesses can minimize the risk of exposing private or confidential information. Additionally, this feature can help organizations comply with data privacy regulations, because it gives them more control over the data collected and stored. You can choose between text logs, audio logs, or both.

When selective conversation log capture is enabled for text and audio logs, it disables logging for all intents and slots in the conversation. To generate text and audio logs for particular intents and slots, set the text and audio selective conversation log capture session attributes for those intents and slots to “true”. When selective conversation log capture is enabled, any slot values in SessionState, Interpretations, and Transcriptions for which logging is not enabled using session attributes will be obfuscated in the generated text log.

To enable selective conversation log capture, complete the following steps:

  1. On the Amazon Lex console, choose Bots in the navigation pane.
  2. Choose your preferred bot.
  3. Choose Aliases under Deployment and choose the bot’s alias.
  4. Choose Manage conversation logs.
  5. Select Selectively log utterances.
    1. For text logs, choose a CloudWatch log group.
    2. For audio logs, choose an S3 bucket to store the logs and assign an AWS Key Management Service (AWS KMS) key for added security.
  6. Save the changes.

Lex enable selective text logging

Lex enable selective audio logging

Next, activate selective conversation log capture for specific intents and slots:

  1. Choose Intents in the navigation pane and choose your intent.
  2. Under Initial responses, choose Advanced options and expand Set values.
  3. For Session attributes, set the following attributes based on the intents and slots for which you want to enable selective conversation log capture. This enables log capture only for the specified intents and slots in the conversation.
    1. x-amz-lex:enable-audio-logging:<intent>:<slot> = "true"
    2. x-amz-lex:enable-text-logging:<intent>:<slot> = "true"
  4. Choose Update options and rebuild the bot.

Replace <intent> and <slot> with respective intent and slot names.

Lex selective conversation log capture
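
These session attributes can also be supplied at runtime by the client application. The following is a minimal boto3 sketch using the Lex V2 runtime; the bot, alias, session, intent (BookFlight), and slot (PhoneNumber) names are hypothetical:

import boto3

lex_runtime = boto3.client("lexv2-runtime")

# Send a user utterance while enabling selective logging for one intent and slot.
response = lex_runtime.recognize_text(
    botId="BOTID1234",            # placeholder bot ID
    botAliasId="ALIASID123",      # placeholder bot alias ID
    localeId="en_US",
    sessionId="user-session-001", # placeholder session ID
    text="My phone number is 555-0100",
    sessionState={
        "sessionAttributes": {
            # Capture audio and text logs only for the PhoneNumber slot of the BookFlight intent.
            "x-amz-lex:enable-audio-logging:BookFlight:PhoneNumber": "true",
            "x-amz-lex:enable-text-logging:BookFlight:PhoneNumber": "true",
        }
    },
)
print(response["messages"])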

Monitor and protect data with CloudWatch Logs

In this section, we demonstrate how to protect your data with CloudWatch using playbacks and log group policies.

Playbacks in CloudWatch Logs

When Amazon Lex engages in interactions, delivering prompts or messages from the bot to the customer, there’s a potential risk for PII to be inadvertently included in these communications. This risk extends to CloudWatch Logs, where these interactions are recorded for monitoring, debugging, and analysis purposes. The playback of prompts or messages designed to confirm or clarify user input can inadvertently expose sensitive information if not properly handled. To mitigate this risk and protect PII within these interactions, a strategic approach is necessary when designing and deploying Amazon Lex bots.

The solution lies in carefully structuring how slot values, which may contain PII, are referenced and used in the bot’s response messages. Adopting a prescribed format for passing slot values, specifically by encapsulating them within curly braces (for example, {slotName}), allows developers to control how this information is presented back to the user and logged in CloudWatch. This method makes sure that when the bot constructs a message, it refers to the slot by its name rather than its value, thereby preventing any sensitive information from being directly included in the message content. For example, instead of the bot saying, “Is your phone number 123-456-7890?” it would use a generic placeholder, “Is your phone number {PhoneNumber}?” with {PhoneNumber} being a reference to the slot that captured the user’s phone number. This approach allows the bot to confirm or clarify information without exposing the actual data.

When these interactions are logged in CloudWatch, the logs will only contain the slot name references, not the actual PII. This technique significantly reduces the risk of sensitive information being exposed in logs, enhancing privacy and compliance with data protection regulations. Organizations should make sure all personnel involved in bot design and deployment are trained on these practices to consistently safeguard user information across all interactions.

The following is sample AWS Lambda function code in Python that references the slot value of a phone number provided by the user. SSML tags are used to format the slot value for slow and clear speech output, and a response is returned to confirm the correctness of the captured phone number:

def lambda_handler(event, context):
    # Extract the intent name from the event
    intent_name = event['sessionState']['intent']['name']
    # Extract the slots from the event
    slots = event['sessionState']['intent']['slots']

    # Check if the intent name is 'INTENT_NAME'
    if intent_name == 'INTENT_NAME':
        # Retrieve the phone number from the 'SLOT_NAME' slot
        phone_number = slots['SLOT_NAME']['value']['interpretedValue']

        # Create an SSML-formatted message with the phone number
        msg = f'''<speak>
                Thank you for providing your phone number. Is
                <prosody rate="slow">
                <say-as interpret-as="telephone">{phone_number}</say-as>
                </prosody> correct?
                </speak>'''

        # Create a message array
        message_array = [
            {
                'contentType': 'SSML',
                'content': msg
            }
        ]

        # Response with the dialog action, intent state, and the message array
        response = {
            'sessionState': {
                'dialogAction': {
                    'type': 'Close'
                },
                'intent': {
                    'name': intent_name,
                    'state': 'Fulfilled'
                }
            },
            'messages': message_array
        }
    else:
        # Generic response for unhandled intents
        response = {
            'sessionState': {
                'dialogAction': {
                    'type': 'Close'
                },
                'intent': {
                    'name': intent_name,
                    'state': 'Fulfilled'
                }
            },
            'messages': [
                {
                    'contentType': 'PlainText',
                    'content': 'I apologize, but I am unable to assist.'
                }
            ]
        }
    return response

Replace INTENT_NAME and SLOT_NAME with your preferred intent and slot names, respectively.

CloudWatch data protection log group policies for data identifiers

Sensitive data that’s ingested by CloudWatch Logs can be safeguarded by using log group data protection policies. These policies allow you to audit and mask sensitive data that appears in log events ingested by the log groups in your account.

CloudWatch Logs supports both managed and custom data identifiers.

Managed data identifiers offer preconfigured data types to protect financial data, personal health information (PHI), and PII. For some managed data identifiers, detection also depends on finding certain keywords in proximity to the sensitive data.

Each managed data identifier is designed to detect a specific type of sensitive data, such as name, email address, account numbers, AWS secret access keys, or passport numbers for a particular country or region. When creating a data protection policy, you can configure it to use these identifiers to analyze logs ingested by the log group, and take actions when they are detected.

CloudWatch Logs data protection can detect many categories of sensitive data by using managed data identifiers.

To configure managed data identifiers on the CloudWatch console, complete the following steps:

  1. On the CloudWatch console, under Logs in the navigation pane, choose Log groups.
  2. Select your log group and on the Actions menu, choose Create data protection policy.
  3. Under Auditing and masking configuration, for Managed data identifiers, select all the identifiers that the data protection policy should apply to.
  4. Choose the data store to apply the policy to and save the changes.

Cloudwatch managed data identifiers
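
You can also attach this kind of policy programmatically. The following is a minimal boto3 sketch that applies a data protection policy with a managed data identifier to a log group; the log group name is a placeholder, and the empty FindingsDestination means no additional findings destination is configured:

import json
import boto3

logs = boto3.client("logs")

# Managed data identifiers are referenced by their data protection ARNs.
policy_document = {
    "Name": "lex-log-data-protection",
    "Description": "Audit and mask email addresses in Lex conversation logs",
    "Version": "2021-06-01",
    "Statement": [
        {
            "Sid": "audit-policy",
            "DataIdentifier": ["arn:aws:dataprotection::aws:data-identifier/EmailAddress"],
            "Operation": {"Audit": {"FindingsDestination": {}}},
        },
        {
            "Sid": "redact-policy",
            "DataIdentifier": ["arn:aws:dataprotection::aws:data-identifier/EmailAddress"],
            "Operation": {"Deidentify": {"MaskConfig": {}}},
        },
    ],
}

# Attach the policy to the log group that stores the Lex conversation logs.
logs.put_data_protection_policy(
    logGroupIdentifier="YOUR_LEX_LOG_GROUP",   # placeholder log group name
    policyDocument=json.dumps(policy_document),
)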

Custom data identifiers let you define your own custom regular expressions that can be used in your data protection policy. With custom data identifiers, you can target business-specific PII use cases that managed data identifiers don’t provide. For example, you can use custom data identifiers to look for a company-specific account number format.

To create a custom data identifier on the CloudWatch console, complete the following steps:

  1. On the CloudWatch console, under Logs in the navigation pane, choose Log groups.
  2. Select your log group and on the Actions menu, choose Create data protection policy.
  3. Under Custom Data Identifier configuration, choose Add custom data identifier.
  4. Create your own regex patterns to identify sensitive information that is unique to your organization or specific use case.
  5. After you add your data identifier, choose the data store to apply this policy to.
  6. Choose Activate data protection.

Cloudwatch custom data identifier
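
A custom data identifier is defined in a Configuration block of the policy document and then referenced by name in the policy statements. The following is a minimal boto3 sketch using the same put_data_protection_policy call as above; the identifier name and the regex for a company-specific account number format are hypothetical, and in practice you would typically combine managed and custom identifiers in a single policy document for the log group:

import json
import boto3

logs = boto3.client("logs")

# Data protection policy that defines and uses a hypothetical custom data identifier.
policy_document = {
    "Name": "lex-log-custom-identifier-policy",
    "Version": "2021-06-01",
    "Configuration": {
        "CustomDataIdentifier": [
            {"Name": "InternalAccountNumber", "Regex": "ACC-\\d{8}"}
        ]
    },
    "Statement": [
        {
            "Sid": "audit-policy",
            "DataIdentifier": ["InternalAccountNumber"],
            "Operation": {"Audit": {"FindingsDestination": {}}},
        },
        {
            "Sid": "redact-policy",
            "DataIdentifier": ["InternalAccountNumber"],
            "Operation": {"Deidentify": {"MaskConfig": {}}},
        },
    ],
}

logs.put_data_protection_policy(
    logGroupIdentifier="YOUR_LEX_LOG_GROUP",   # placeholder log group name
    policyDocument=json.dumps(policy_document),
)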

For details about the types of data that can be protected, refer to Types of data that you can protect.

Monitor and protect data with Amazon S3

In this section, we demonstrate how to protect your data in S3 buckets.

Encrypt audio recordings in S3 buckets

PII can often be captured in audio recordings, especially in sectors like customer service, healthcare, and financial services, where sensitive information is frequently exchanged over voice interactions. To comply with domain-specific regulatory requirements, organizations must adopt stringent measures for managing PII in audio files.

One approach is to disable the recording feature entirely if it poses too high a risk of non-compliance or if the value of the recordings doesn’t justify the potential privacy implications. However, if audio recordings are essential, streaming the audio data in real time using Amazon Kinesis provides a scalable and secure method to capture, process, and analyze audio data. This data can then be exported to a secure and compliant storage solution, such as Amazon S3, which can be configured to meet specific compliance needs including encryption at rest. You can use AWS KMS or AWS CloudHSM to manage encryption keys, offering robust mechanisms to encrypt audio files at rest, thereby securing the sensitive information they might contain. Implementing these encryption measures makes sure that even if data breaches occur, the encrypted PII remains inaccessible to unauthorized parties.

Configuring these AWS services allows organizations to balance the need for audio data capture with the imperative to protect sensitive information and comply with regulatory standards.
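
For example, when exporting processed audio to Amazon S3, you can request server-side encryption with a KMS key on each upload. The following is a minimal boto3 sketch; the bucket name, object key, local file name, and KMS key alias are placeholders:

import boto3

s3 = boto3.client("s3")

# Upload an audio recording with server-side encryption using a KMS key.
with open("recording-001.wav", "rb") as audio_file:   # placeholder local file
    s3.put_object(
        Bucket="YOUR_LEX_DATA_BUCKET",                 # placeholder bucket name
        Key="audio/recording-001.wav",
        Body=audio_file,
        ServerSideEncryption="aws:kms",
        SSEKMSKeyId="alias/YOUR_KMS_KEY",              # placeholder KMS key alias
    )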

S3 bucket security configurations

You can use an AWS CloudFormation template to configure various security settings for an S3 bucket that stores Amazon Lex data like audio recordings and logs. For more information, see Creating a stack on the AWS CloudFormation console. See the following example code:

AWSTemplateFormatVersion: '2010-09-09'
Description: Create a secure S3 bucket with KMS encryption to store Lex Data
Resources:
  S3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: YOUR_LEX_DATA_BUCKET
      AccessControl: Private
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: aws:kms
              KMSMasterKeyID: alias/aws/s3 
      VersioningConfiguration:
        Status: Enabled
      ObjectLockEnabled: true
      ObjectLockConfiguration:
        ObjectLockEnabled: Enabled
        Rule:
          DefaultRetention:
            Mode: GOVERNANCE
            Years: 5
      LoggingConfiguration:
        DestinationBucketName: YOUR_SERVER_ACCESS_LOG_BUCKET
        LogFilePrefix: lex-bucket-logs/

The template defines the following properties:

  • BucketName – Specifies the bucket name. Replace YOUR_LEX_DATA_BUCKET with your preferred bucket name.
  • AccessControl – Sets the bucket access control to Private, denying public access by default.
  • PublicAccessBlockConfiguration – Explicitly blocks all public access to the bucket and its objects.
  • BucketEncryption – Enables server-side encryption using the AWS managed KMS key for Amazon S3 (alias/aws/s3). You can also create custom KMS keys; for instructions, refer to Creating symmetric encryption KMS keys.
  • VersioningConfiguration – Enables versioning for the bucket, allowing you to maintain multiple versions of objects.
  • ObjectLockEnabled and ObjectLockConfiguration – Enable S3 Object Lock with a governance mode default retention period of 5 years, preventing objects from being deleted or overwritten during that period.
  • LoggingConfiguration – Enables server access logging for the bucket, directing log files to a separate logging bucket for auditing and analysis purposes. Replace YOUR_SERVER_ACCESS_LOG_BUCKET with your preferred bucket name.

This is just an example; you may need to adjust the configurations based on your specific requirements and security best practices.
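
To deploy the template, you can use the AWS CloudFormation console as described above, or create the stack programmatically. The following is a minimal boto3 sketch; the stack name and the local template file name (lex-data-bucket.yaml) are placeholders:

import boto3

cloudformation = boto3.client("cloudformation")

# Read the template shown above from a local file (placeholder file name).
with open("lex-data-bucket.yaml") as template_file:
    template_body = template_file.read()

# Create the stack that provisions the secure S3 bucket.
cloudformation.create_stack(
    StackName="lex-data-bucket-stack",   # placeholder stack name
    TemplateBody=template_body,
)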

Monitor and protect with data governance controls and risk management policies

In this section, we demonstrate how to protect your data using service control policies (SCPs). To create an SCP, see Creating an SCP.

Prevent changes to an Amazon Lex chatbot using an SCP

To prevent changes to an Amazon Lex chatbot using an SCP, create one that denies the specific actions related to modifying or deleting the chatbot. For example, you could use the following SCP:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": [
        "lex:DeleteBot",
        "lex:DeleteBotAlias",
        "lex:DeleteBotChannelAssociation",
        "lex:DeleteBotVersion",
        "lex:DeleteIntent",
        "lex:DeleteSlotType",
        "lex:DeleteUtterances",
        "lex:PutBot",
        "lex:PutBotAlias",
        "lex:PutIntent",
        "lex:PutSlotType"
      ],
      "Resource": [
        "arn:aws:lex:*:YOUR_ACCOUNT_ID:bot:YOUR_BOT_NAME",
        "arn:aws:lex:*:YOUR_ACCOUNT_ID:intent:YOUR_BOT_NAME:*",
        "arn:aws:lex:*:YOUR_ACCOUNT_ID:slottype:YOUR_BOT_NAME:*"
      ],
      "Condition": {
        "StringEquals": {
          "aws:PrincipalArn": "arn:aws:iam::YOUR_ACCOUNT_ID:role/YOUR_IAM_ROLE"
        }
      }
    }
  ]
}

The code defines the following:

  • Effect – This is set to Deny, which means that the specified actions will be denied.
  • Action – This contains a list of actions related to modifying or deleting Amazon Lex bots, bot aliases, intents, and slot types.
  • Resource – This lists the Amazon Resource Names (ARNs) for your Amazon Lex bot, intents, and slot types. Replace YOUR_ACCOUNT_ID with your AWS account ID and YOUR_BOT_NAME with the name of your Amazon Lex bot.
  • Condition – This excludes the specified provisioning role from the deny, so only that role can modify or delete these resources. Replace YOUR_ACCOUNT_ID with your AWS account ID and YOUR_IAM_ROLE with the name of the AWS Identity and Access Management (IAM) provisioned role you want to exempt from this policy.

When this SCP is attached to an AWS Organizations organizational unit (OU) or an individual AWS account, it will allow only the specified provisioning role while preventing all other IAM entities (users, roles, or groups) within that OU or account from modifying or deleting the specified Amazon Lex bot, intents, and slot types.

This SCP only prevents changes to the Amazon Lex bot and its components. It doesn’t restrict other actions, such as invoking the bot or retrieving its configuration. If more actions need to be restricted, you can add them to the Action list in the SCP.

Prevent changes to a CloudWatch Logs log group using an SCP

To prevent changes to a CloudWatch Logs log group using an SCP, create one that denies the specific actions related to modifying or deleting the log group. The following is an example SCP that you can use:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": [
        "logs:DeleteLogGroup",
        "logs:PutRetentionPolicy"
      ],
      "Resource": "arn:aws:logs:*:YOUR_ACCOUNT_ID:log-group:/aws/YOUR_LOG_GROUP_NAME*",
      "Condition": {
        "StringEquals": {
          "aws:PrincipalArn": "arn:aws:iam::YOUR_ACCOUNT_ID:role/YOUR_IAM_ROLE"
        }
      }
    }
  ]
}

The code defines the following:

  • Effect – This is set to Deny, which means that the specified actions will be denied.
  • Action – This includes logs:DeleteLogGroup and logs:PutRetentionPolicy actions, which prevent deleting the log group and modifying its retention policy, respectively.
  • Resource – This lists the ARN for your CloudWatch Logs log group. Replace YOUR_ACCOUNT_ID with your AWS account ID and YOUR_LOG_GROUP_NAME with the name of your log group.
  • Condition – This excludes the specified provisioning role from the deny, so only that role can perform these actions. Replace YOUR_ACCOUNT_ID with your AWS account ID and YOUR_IAM_ROLE with the name of the IAM provisioned role you want to exempt from this policy.

Similar to the preceding chatbot SCP, when this SCP is attached to an Organizations OU or an individual AWS account, it will allow only the specified provisioning role to delete the specified CloudWatch Logs log group or modify its retention policy, while preventing all other IAM entities (users, roles, or groups) within that OU or account from performing these actions.

This SCP only prevents changes to the log group itself and its retention policy. It doesn’t restrict other actions, such as creating or deleting log streams within the log group or modifying other log group configurations. To restrict additional actions, add them to the Action list in the SCP.

Also, this SCP will apply to all log groups that match the specified resource ARN pattern. To target a specific log group, modify the Resource value accordingly.

Restrict viewing of unmasked sensitive data in CloudWatch Logs Insights using an SCP

When you create a data protection policy, by default, any sensitive data that matches the data identifiers you’ve selected is masked at all egress points, including CloudWatch Logs Insights, metric filters, and subscription filters. Only users who have the logs:Unmask IAM permission can view unmasked data. The following is an SCP you can use:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "RestrictUnmasking",
      "Effect": "Deny",
      "Action": "logs:Unmask",
      "Resource": "arn:aws:logs:*:YOUR_ACCOUNT_ID:log-group:YOUR_LOG_GROUP:*",
      "Condition": {
        "StringEquals": {
          "aws:PrincipalArn": "arn:aws:iam::YOUR_ACCOUNT_ID:role/YOUR_IAM_ROLE"
        }
      }
    }
  ]
}

It defines the following:

  • Effect – This is set to Deny, which means that the specified actions will be denied.
  • Action – This includes logs:Unmask, which prevents viewing of masked data.
  • Resource – This lists the ARN for your CloudWatch Logs log group. Replace YOUR_ACCOUNT_ID with your AWS account ID and YOUR_LOG_GROUP_NAME with the name of your log group.
  • Condition – This excludes the specified provisioning role from the deny, so only that role can view unmasked data. Replace YOUR_ACCOUNT_ID with your AWS account ID and YOUR_IAM_ROLE with the name of the IAM provisioned role you want to exempt from this policy.

Similar to the previous SCPs, when this SCP is attached to an Organizations OU or an individual AWS account, it will allow only the specified provisioning role while preventing all other IAM entities (users, roles, or groups) within that OU or account from unmasking sensitive data from the CloudWatch Logs log group.

This SCP only restricts the logs:Unmask action. It doesn’t restrict other actions, such as creating or deleting log streams within the log group or modifying the log group configuration or retention policy. To restrict additional actions, add them to the Action list in the SCP.

Also, this SCP will apply to all log groups that match the specified resource ARN pattern. To target a specific log group, modify the Resource value accordingly.

Clean up

To avoid incurring additional charges, clean up your resources:

  1. Delete the Amazon Lex bot:
    1. On the Amazon Lex console, choose Bots in the navigation pane.
    2. Select the bot to delete and on the Action menu, choose Delete.
  2. Delete the associated Lambda function:
    1. On the Lambda console, choose Functions in the navigation pane.
    2. Select the function associated with the bot and on the Action menu, choose Delete.
  3. Delete the account-level data protection policy. For instructions, see DeleteAccountPolicy.
  4. Delete the CloudWatch Logs log group data protection policy:
    1. On the CloudWatch console, under Logs in the navigation pane, choose Log groups.
    2. Choose your log group.
    3. On the Data protection tab, under Log group policy, choose the Actions menu and choose Delete policy.
  5. Delete the S3 bucket that stores the Amazon Lex data:
    1. On the Amazon S3 console, choose Buckets in the navigation pane.
    2. Select the bucket you want to delete, then choose Delete.
    3. To confirm that you want to delete the bucket, enter the bucket name and choose Delete bucket.
  6. Delete the CloudFormation stack. For instructions, see Deleting a stack on the AWS CloudFormation console.
  7. Delete the SCP. For instructions, see Deleting an SCP.
  8. Delete the KMS key. For instructions, see Deleting AWS KMS keys.

Conclusion

Securing PII within AWS services like Amazon Lex and CloudWatch requires a comprehensive and proactive approach. By following the steps in this post—identifying and classifying data, locating data stores, monitoring and protecting data in transit and at rest, and implementing SCPs for Amazon Lex and Amazon CloudWatch—organizations can create a robust security framework. This framework not only protects sensitive data, but also complies with regulatory standards and mitigates potential risks associated with data breaches and unauthorized access.

Emphasizing the need for regular audits, continuous monitoring, and updating security measures in response to emerging threats and technological advancements is crucial. Adopting these practices allows organizations to safeguard their digital assets, maintain customer trust, and build a reputation for strong data privacy and security in the digital landscape.


About the Authors

Rashmica Gopinath is a software development engineer with Amazon Lex. Rashmica is responsible for developing new features, improving the service’s performance and reliability, and ensuring a seamless experience for customers building conversational applications. Rashmica is dedicated to creating innovative solutions that enhance human-computer interaction. In her free time, she enjoys winding down with the works of Dostoevsky or Kafka.

Dipkumar Mehta is a Principal Consultant with the Amazon ProServe Natural Language AI team. He focuses on helping customers design, deploy, and scale end-to-end Conversational AI solutions in production on AWS. He is also passionate about improving customer experience and driving business outcomes by leveraging data. Additionally, Dipkumar has a deep interest in Generative AI, exploring its potential to revolutionize various industries and enhance AI-driven applications.

David Myers is a Sr. Technical Account Manager with AWS Enterprise Support. With over 20 years of technical experience, observability has been part of his career from the start. David loves improving customers’ observability experiences at Amazon Web Services.

Sam Patel is a Security Consultant specializing in safeguarding Generative AI (GenAI), Artificial Intelligence systems, and Large Language Models (LLM) for Fortune 500 companies. Serving as a trusted advisor, he invents and spearheads the development of cutting-edge best practices for secure AI deployment, empowering organizations to leverage transformative AI capabilities while maintaining stringent security and privacy standards.
