Discover insights from Amazon S3 with Amazon Q S3 connector

Amazon Q is a fully managed, generative artificial intelligence (AI) powered assistant that you can configure to answer questions, provide summaries, generate content, gain insights, and complete tasks based on data in your enterprise. The enterprise data required for these generative AI-powered assistants can reside in varied repositories across your organization. One common repository is Amazon Simple Storage Service (Amazon S3), an object storage service that stores data as objects within storage buckets. Customers of all sizes and industries can securely index data from a variety of data sources such as document repositories, websites, content management systems, customer relationship management systems, messaging applications, databases, and so on.

Building a generative AI-based conversational application that's integrated with the data sources containing the relevant content requires an enterprise to invest time, money, and people. First, you need to build connectors to the data sources. Next, you need to index the data to make it available for a Retrieval Augmented Generation (RAG) approach, where relevant passages are delivered with high accuracy to a large language model (LLM). To do this, you need to select an index that can index the content for semantic and vector search, build the infrastructure to retrieve the data, rank the answers, and build a feature-rich web application. You also need to hire and staff a large team to build, maintain, and manage such a system.

Amazon Q Business is a fully managed generative AI-powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on data and information in your enterprise systems. Amazon Q Business can help you get fast, relevant answers to pressing questions, solve problems, generate content, and take actions using the data and expertise found in your company’s information repositories, code, and enterprise systems such as Atlassian Jira and others. To do this, Amazon Q provides native data source connectors that can index content into a built-in retriever and uses an LLM to provide accurate, well-written answers. A data source connector within Amazon Q helps to integrate and synchronize data from multiple repositories into one index.

Amazon Q Business offers multiple prebuilt connectors to a large number of data sources, including Atlassian Jira, Atlassian Confluence, Amazon S3, Microsoft SharePoint, Salesforce, and many more, and can help you create your generative AI solution with minimal configuration. For a full list of Amazon Q supported data source connectors, see Amazon Q connectors.

Now you can use the Amazon Q S3 connector to index your data on S3 and build a generative AI assistant that can derive insights from the data stored. Amazon Q generates comprehensive responses to natural language queries from users by analyzing information across content that it has access to. Amazon Q also supports access control for your data so that the right users can access the right content. Its responses to questions are based on the content that your end user has permissions to access.

This post shows how to configure the Amazon Q S3 connector and derive insights by creating a generative-AI powered conversation experience on AWS using Amazon Q while using access control lists (ACLs) to restrict access to documents based on user permissions.

Finding accurate answers from content in S3 using Amazon Q Business

After you integrate Amazon Q Business with Amazon S3, users can ask questions about the content stored in S3. For example, a user might ask about the main points discussed in a blog post on cloud security, the installation steps outlined in a user guide, findings from a case study on hybrid cloud usage, market trends noted in an analyst report, or key takeaways from a whitepaper on data encryption. This integration helps users to quickly find the specific information they need, improving their understanding and ability to make informed business decisions.

Secure querying with ACL crawling and identity crawling

Secure querying means that when a user runs a query, answers are returned only from documents that the user has access to, and never from documents that the user does not have access to. To enable secure querying, Amazon Q Business honors the ACLs of the documents. Amazon Q Business does this by first supporting the indexing of ACLs. Indexing documents with ACLs is crucial for maintaining data security, because documents without ACLs are treated as public. Second, at query time the user’s credentials (email address) are passed along with the query so that only answers from documents that are relevant to the query and that the user is authorized to access are displayed.

A document’s ACL, included in the metadata.json or acl.json files alongside the document in the S3 bucket, contains details such as the user’s email address and local groups.

When a user signs in to a web application to conduct a search, their credentials (such as an email address) need to match what’s in the ACL of the document to return results from that document. The web application that the user uses to retrieve answers would be connected to an identity provider (IdP) or the AWS IAM Identity Center. The user’s credentials from the IdP or IAM Identity Center are referred to here as the federated user credentials. The federated user credentials are passed along with the query so that Amazon Q can return the answers from the documents that this user has access to. However, there are occasions when a user’s federated credentials might be absent from the S3 bucket ACLs. In these instances, only the user’s local alias and local groups are specified in the document’s ACL. Therefore, it’s necessary to map these federated user credentials to the corresponding local user alias and local group in the document’s ACL.

Any document or folder without an explicit ACL Deny clause is treated as public.

Solution overview

As an administrator user of Amazon Q, the high-level steps to set up a generative AI chat application are to create an Amazon Q application, connect to different data sources, and finally deploy your web experience. An Amazon Q web experience is the chat interface that you create using your Amazon Q application. Then, your users can chat with your organization’s Amazon Q web experience, and it can be integrated with IAM Identity Center. You can configure and customize your Amazon Q web experience using either the AWS Management Console for Amazon Q or the Amazon Q API.

Amazon Q understands and respects your existing identities, roles, and permissions and uses this information to personalize its interactions. If a user doesn’t have permission to access data without Amazon Q, they can’t access it using Amazon Q either. The following table outlines which documents each user is authorized to access for our use case. The documents being used in this example are a subset of AWS public documents. In this blog post, we will focus on users Arnav (Guest), Mary, and Pat and their assigned groups.

| First name | Last name | Group | Document types authorized for access |
|---|---|---|---|
| Arnav | Desai | None (guest user) | Blogs |
| Pat | Candella | Customer | Blogs, user guides |
| Jane | Doe | Sales | Blogs, user guides, and case studies |
| John | Stiles | Marketing | Blogs, user guides, case studies, and analyst reports |
| Mary | Major | Solutions architect | Blogs, user guides, case studies, analyst reports, and whitepapers |

Architecture diagram

The following diagram illustrates the solution architecture. Amazon S3 is the data source and documents along with the ACL information are passed to Amazon Q from S3. The user submits a query to the Amazon Q application. Amazon Q retrieves the user and group information and provides answers based on the documents that the user has access to.

Architecture Diagram

In the upcoming sections, we will show you how to implement this architecture.

Prerequisites

For this walkthrough, you should have the following prerequisites:

Prepare your S3 bucket as a data source

In the AWS Region list, choose US East (N. Virginia) as the Region. You can choose any Region that Amazon Q is available in but ensure that you remain in the same Region when creating all other resources. To prepare an S3 bucket as a data source, create an S3 bucket. Note the name of the S3 bucket. Replace <REPLACE-WITH-NAME-OF-S3-BUCKET> with the name of the bucket in the commands below. In a terminal with the AWS Command Line Interface (AWS CLI) or AWS CloudShell, run the following commands to upload the documents to the data source bucket:

aws s3 cp s3://aws-ml-blog/artifacts/building-a-secure-search-application-with-access-controls-kendra/docs.zip .

unzip docs.zip

aws s3 cp Data/ s3://<REPLACE-WITH-NAME-OF-S3-BUCKET>/Data/ --recursive

aws s3 cp Meta/ s3://<REPLACE-WITH-NAME-OF-S3-BUCKET>/Meta/ --recursive

The documents being queried are stored in an S3 bucket. Each document type has a separate folder: blogs, case-studies, analyst reports, user guides, and white papers. This folder structure is contained in a folder named Data as shown below:

S3 Bucket Structure

Each object in S3 is considered a single document. Any <object-name>.metadata.json file and access control list (ACL) file is considered metadata for the object it’s associated with and not treated as a separate document. In this example, metadata files including the ACLs are in a folder named Meta. We use the Amazon Q S3 connector to configure this S3 bucket as the data source. When the data source is synced with the Amazon Q index, it crawls and indexes all documents and collects the ACLs and document attributes from the metadata files. To learn more about ACLs using metadata files, see Amazon S3 document metadata. Here’s the sample metadata JSON file:

{
   "Attributes": {
      "DocumentType": "user-guides"
   },
   "AccessControlList": [
      { "Access": "ALLOW", "Name": "customer", "Type": "GROUP" },
      { "Access": "ALLOW", "Name": "AWS-Sales", "Type": "GROUP" },
      { "Access": "ALLOW", "Name": "AWS-Marketing", "Type": "GROUP" },
      { "Access": "ALLOW", "Name": "AWS-SA", "Type": "GROUP" }
   ]
}
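
If you need to generate many of these metadata files, the following minimal Python sketch shows one possible way to do it. The GROUP entries mirror the sample above; the optional USER entry keyed on an email address, the helper name, and the example document path (Data/user-guides/setup-guide.pdf) are illustrative assumptions rather than files used in this post.

import json
from pathlib import Path

# Minimal sketch: write a <object-name>.metadata.json file with document
# attributes and an ACL. The USER entry restricting a document to an individual
# email address is an assumption for illustration.
def write_metadata(document_key, doc_type, allowed_groups, allowed_user_email=None):
    acl = [{"Access": "ALLOW", "Name": g, "Type": "GROUP"} for g in allowed_groups]
    if allowed_user_email:
        acl.append({"Access": "ALLOW", "Name": allowed_user_email, "Type": "USER"})
    metadata = {"Attributes": {"DocumentType": doc_type}, "AccessControlList": acl}

    # Mirror the document key under a local Meta/ folder, for example
    # Meta/Data/user-guides/setup-guide.pdf.metadata.json, then upload Meta/ to S3
    out_path = Path("Meta") / f"{document_key}.metadata.json"
    out_path.parent.mkdir(parents=True, exist_ok=True)
    out_path.write_text(json.dumps(metadata, indent=3))

# Example: a user guide readable by the customer and AWS-SA groups plus one named user
write_metadata("Data/user-guides/setup-guide.pdf", "user-guides",
               ["customer", "AWS-SA"], "mary_major@example.com")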

Create users and groups in IAM Identity Center

In this section, you create the following mapping for demonstration:

| User | Group name |
|---|---|
| Arnav | None |
| Pat | customer |
| Mary | AWS-SA |

To create users:

  1. Open the AWS IAM Identity Center console.
  2. If you haven’t enabled IAM Identity Center, choose Enable. If there’s a pop-up, choose how you want to enable IAM Identity Center. For this example, select Enable only in this AWS account. Choose Continue.
  3. In the IAM Identity Center dashboard, choose Users in the navigation pane.
  4. Choose Add User.
  5. Enter the user details for Mary:
    1. Username: mary_major
    2. Email address: mary_major@example.com
      Note: Use or create a real email address for each user to use in a later step.
    3. First name: Mary
    4. Last name: Major
    5. Display name: Mary Major
  6. Skip the optional fields and choose Next to create the user.
  7. On the Add user to groups page, choose Next, and then choose Add user. Follow the same steps to create users for Pat and Arnav (Guest user).
    (You will assign users to groups at a later step.)

To create groups:

  1. Now, you will create two groups: AWS-SA and customer. Choose Groups on the navigation pane and choose Create group.

Create group

  2. For the group name, enter AWS-SA, add user Mary to the group, and choose Create group.
  3. Similarly, create a group named customer, add user Pat, and choose Create group.
  4. Now, add multi-factor authentication to the users following the instructions sent to the user email. For more details, see Multi-factor authentication for Identity Center users. When done, you will have the users and groups set up in IAM Identity Center. (If you prefer to script this setup, see the sketch that follows.)
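
The following is a minimal Python (boto3) sketch of the same user and group setup through the IAM Identity Center identity store API, as an alternative to the console steps above. The identity store ID and the email addresses for Pat and Arnav are placeholders; replace them with your own values (MFA setup is not covered here).

import boto3

# Minimal sketch: create the demo users and groups with the identity store API.
# Replace IDENTITY_STORE_ID with the value shown on your IAM Identity Center
# settings page (format d-xxxxxxxxxx) and use real email addresses.
IDENTITY_STORE_ID = "d-xxxxxxxxxx"  # placeholder
ids = boto3.client("identitystore")

def create_user(user_name, given, family, email):
    resp = ids.create_user(
        IdentityStoreId=IDENTITY_STORE_ID,
        UserName=user_name,
        DisplayName=f"{given} {family}",
        Name={"GivenName": given, "FamilyName": family},
        Emails=[{"Value": email, "Type": "work", "Primary": True}],
    )
    return resp["UserId"]

def create_group(name):
    return ids.create_group(IdentityStoreId=IDENTITY_STORE_ID, DisplayName=name)["GroupId"]

mary = create_user("mary_major", "Mary", "Major", "mary_major@example.com")
pat = create_user("pat_candella", "Pat", "Candella", "pat_candella@example.com")
arnav = create_user("arnav_desai", "Arnav", "Desai", "arnav_desai@example.com")

aws_sa = create_group("AWS-SA")
customer = create_group("customer")

# Add Mary to AWS-SA and Pat to customer; Arnav stays ungrouped (guest user)
ids.create_group_membership(IdentityStoreId=IDENTITY_STORE_ID, GroupId=aws_sa, MemberId={"UserId": mary})
ids.create_group_membership(IdentityStoreId=IDENTITY_STORE_ID, GroupId=customer, MemberId={"UserId": pat})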

Create and configure your Amazon Q application

In this step, you create an Amazon Q application that powers the conversation web experience:

  1. On the AWS Management Console for Amazon Q, in the Region list, choose US East (N. Virginia).
  2. On the Getting started page, select Enable identity-aware sessions. Once enabled, Amazon Q connected to IAM Identity Center should be displayed. Choose Subscribe in Q Business.
  3. On the Amazon Q Business console, choose Get started.
  4. On the Applications page, choose Create application.
  5. On the Create application page, enter an Application name and leave everything else at the default values.
  6. Choose Create.
  7. On the Select retriever page, for Retrievers, select Use native retriever.
  8. Choose Next. This will take you to the Connect data sources page.

Configure Amazon S3 as the data source

In this section, you walk through an example of adding an S3 connector. The S3 data source contains blogs, user guides, case studies, analyst reports, and whitepapers.

To add the S3 connector:

  1. On the Connect data sources page, select Amazon S3 connector.
  2. For Data source name, enter a name for your data source.
  3. In the IAM role section, select Create new service role (Recommended).
  4. In the Sync scope section, browse to your S3 bucket containing the data files.
  5. Under Advanced settings, for Metadata files prefix folder location, enter Meta/
  6. Choose Filter patterns. Under Include patterns, enter Data/ as the prefix and choose Add.
  7. For Frequency under Sync run schedule, choose Run on demand.
  8. Leave the rest as default and choose Add data source. Wait until the data source is added.
  9. On the Connect data sources page, choose Next. This will take you to the Add users and groups page.

Add users and groups in Amazon Q

In this section, you set up users and groups to showcase how access can be managed based on the permissions.

  1. On the Add users and groups page, choose Assign existing users and groups and choose Next.
  2. Enter the users and groups you want to add and choose Assign. You will have to enter the user names and groups in the search box and select the user or group. Verify that users and groups are correctly displayed under the Users and Groups tabs respectively.
    Assign user
  3. Select the Current subscription. In this example, we chose Q Business Lite for groups. Choose the same subscription for users under the Users tab. You can also update subscriptions after creating the application.
  4. Leave the Service role name as default and choose Create application.

Sync S3 data source

With your application created, you will crawl and index the documents in the S3 bucket created at the beginning of the process.

  1. Select the name of the application.

Select name of application

  2. Go to the Data sources tab, select the radio button next to the S3 data source, and choose Sync now.

Sync now

  3. The sync can take from a few minutes to a few hours. Wait for the sync to complete and verify that the documents have been added. (You can also trigger the sync programmatically; see the sketch after this step.)

Wait for sync to complete
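
The following is a minimal Python (boto3) sketch of triggering the same on-demand sync from code. The application, index, and data source IDs are placeholders that you can look up on the Amazon Q Business console or with the corresponding list APIs.

import boto3

# Minimal sketch: start an on-demand sync of the S3 data source programmatically.
# All IDs below are placeholders.
qbusiness = boto3.client("qbusiness")

response = qbusiness.start_data_source_sync_job(
    applicationId="your-application-id",
    indexId="your-index-id",
    dataSourceId="your-s3-data-source-id",
)
print("Sync job started:", response)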

Run queries with Amazon Q

Now that you have configured the Amazon Q application and integrated it with IAM Identity Center, you can test queries from different users based on their group permissions. This will demonstrate how Amazon Q respects the access control rules set up in the Amazon S3 data source.

You have three users for testing—Pat from the Customer group, Mary from the AWS-SA group, and Arnav who isn’t part of any group. According to the access control list (ACL) configuration, Pat should have access to blogs and user guides, Mary should have access to blogs, user guides, case studies, analyst reports, and whitepapers, and Arnav should have access only to blogs.

In the following steps, you will sign in as each user and ask various questions to see what responses Amazon Q provides based on the permitted document types for their respective groups. You will also test edge cases where users try to access information from restricted sources to validate the access control functionality.

  • In the Amazon Q Business console, choose Applications on the navigation pane and copy the Web experience URL.

Web experience URL

Sign in as Pat to the Amazon Q chat interface.

Pat is part of the Customer group and has access to blogs and user guides.

When asked a question like “What is AWS?” Amazon Q will provide a summary pulling information from blogs and user guides, highlighting the sources at the end of each excerpt.

What is AWS?

Try asking a question that requires information from user guides, such as “How do I set up an AWS account?” Amazon Q will summarize relevant details from the permitted user guide sources for Pat’s group.

How do I set up an AWS account?

However, if you, as Pat, ask a question that requires information from whitepapers, analyst reports, or case studies, Amazon Q will indicate that it could not find any relevant information from the sources Pat has access to.

Ask a question such as “What are the strategic planning assumptions for the year 2025?” to see this.

Strategic planning

Sign in as Mary to the Amazon Q chat interface.

Sign out as user Pat. Start a new incognito browser session or use a different browser. Copy the web experience URL and sign in as user Mary. Repeat these steps each time you need to sign in as a different user.

Mary is part of the AWS-SA group, so she has access to blogs, user guides, case studies, analyst reports, and whitepapers.

When Mary asks the same question about strategic planning, Amazon Q will provide a comprehensive summary pulling information from all the permitted sources.

Mary strategic planning

With Mary’s sign-in, you can ask various other questions related to AWS services, architectures, or solutions, and Amazon Q will effectively summarize information from across all the content types Mary’s group has access to.

Key benefits of AWS

Sign in as Arnav to the Amazon Q chat interface.

Arnav is not part of any group and is able to access only blogs. If Arnav asks a question about Amazon Polly, Amazon Q will return blog posts.

Amazon Polly

When Arnav tries to get information from the user guides, access is restricted. If he asks about something like how to set up an AWS account, Amazon Q responds that it could not find relevant information.

Set up an AWS account

This shows how Amazon Q respects the data access rules configured in the Amazon S3 data source, allowing users to gain insights only from the content their group has permissions to view, while still providing comprehensive answers when possible within those boundaries.

Troubleshooting

Troubleshooting your Amazon S3 connector provides information about error codes you might see for the Amazon S3 connector and suggested troubleshooting actions. If you encounter an HTTP status code 403 (Forbidden) error when you open your Amazon Q Business application, it means that the user is unable to access the application. See Troubleshooting Amazon Q Business and identity provider integration for common causes and how to address them.

Frequently asked questions

Q. Why isn’t Amazon Q Business answering any of my questions?

A. Verify that you have synced your data source on the Amazon Q console. Also, check the ACLs to ensure you have the required permissions to retrieve answers from Amazon Q.

Q. How can I sync documents without ACLs?

A. When configuring the Amazon S3 connector, under Sync scope, you can optionally choose not to include the metadata or ACL configuration file location in Advanced settings. This will allow you to sync documents without ACLs.

Sync scope

Q. I updated the contents of my S3 data source, but Amazon Q Business answers using old data.

A. After content has been updated in your S3 data source location, you must re-sync the contents for the updated data to be picked up by Amazon Q. Go to the Data sources tab, select the radio button next to the S3 data source, and choose Sync now. After the sync is complete, verify that the updated data is reflected by running queries on Amazon Q.

Sync now

Q. I am unable to sign in as a new user through the web experience URL.

A. Clear your browser cookies and sign in as a new user.

Q. I keep trying to sign in but am getting this error:

Error

A. Try signing in from a different browser or clear browser cookies and try again.

Q. What are the supported document formats and what is considered a document in Amazon S3?

A. See Supported document types and What is a document? to learn more.

Call to action

Explore other features in Amazon Q Business such as:

  • The Amazon Q Business document enrichment feature helps you control both what documents and document attributes are ingested into your index and also how they’re ingested. Using document enrichment, you can create, modify, or delete document attributes and document content when you ingest them into your Amazon Q Business index. For example, you can scrub personally identifiable information (PII) by choosing to delete any document attributes related to PII.
  • Amazon Q Business features
    • Filtering using metadata – Use document attributes to customize and control users’ chat experience. Currently supported only if you use the Amazon Q Business API.
    • Source attribution with citations – Verify responses using Amazon Q Business source attributions.
    • Upload files and chat – Let users upload files directly into chat and use uploaded file data to perform web experience tasks.
    • Quick prompts – Feature sample prompts to inform users of the capabilities of their Amazon Q Business web experience.
  • To improve retrieved results and customize the user chat experience, you can map document attributes from your data sources to fields in your Amazon Q index. Learn more by exploring Amazon Q Business Amazon S3 data source connector field mappings.

Clean up

To avoid incurring future charges and to clean out unused roles and policies, delete the resources you created: the Amazon Q application, data sources, and corresponding IAM roles.

  1. To delete the Amazon Q application, go to the Amazon Q console and, on the Applications page, select your application.
  2. On the Actions drop-down menu, choose Delete.
  3. To confirm deletion, enter delete in the field and choose Delete. Wait until you get the confirmation message; the process can take up to 15 minutes.
  4. To delete the S3 bucket created in Prepare your S3 bucket as a data source, empty the bucket and then follow the steps to delete the bucket.
  5. Delete your IAM Identity Center instance.

Conclusion

This blog post has walked you through the steps to build a secure, permissions-based generative AI solution using Amazon Q and Amazon S3 as the data source. By configuring user groups and mapping their access privileges to different document folders in S3, it demonstrated that Amazon Q respects these access control rules. When users query the AI assistant, it provides comprehensive responses by analyzing only the content their group has permission to view, preventing unauthorized access to restricted information. This solution allows organizations to safely unlock insights from their data repositories using generative AI while ensuring data access governance.

Don’t let your data’s potential go untapped. Continue exploring how Amazon Q can transform your enterprise data to gain actionable insights. Join the conversation and share your thoughts or questions in the comments section below.


About the Author

Kruthi Jayasimha Rao is a Partner Solutions Architect with a focus in AI and ML. She provides technical guidance to AWS Partners in following best practices to build secure, resilient, and highly available solutions in the AWS Cloud.


Keagan Mirazee is a Partner Solutions Architect specializing in Generative AI to assist AWS Partners in engineering reliable and scalable cloud solutions.


Dipti Kulkarni is a Sr. Software Development Engineer for Amazon Q. Dipti is a passionate engineer building connectors for Amazon Q.

Boosting Salesforce Einstein’s code generating model performance with Amazon SageMaker

This post is a joint collaboration between Salesforce and AWS and is being cross-published on both the Salesforce Engineering Blog and the AWS Machine Learning Blog.

Salesforce, Inc. is an American cloud-based software company headquartered in San Francisco, California. It provides customer relationship management (CRM) software and applications focused on sales, customer service, marketing automation, ecommerce, analytics, and application development. Salesforce is building toward artificial general intelligence (AGI) for business, enabling predictive and generative functions within their flagship software-as-a-service (SaaS) CRM, and working toward intelligent automations using artificial intelligence (AI) as well as agents.

Salesforce Einstein is a set of AI technologies that integrate with Salesforce’s Customer Success Platform to help businesses improve productivity and client engagement. Einstein has a list of over 60 features, unlocked at different price points and segmented into four main categories: machine learning (ML), natural language processing (NLP), computer vision, and automatic speech recognition. Einstein delivers advanced AI capabilities into sales, service, marketing, and other functions, empowering companies to deliver more personalized and predictive customer experiences. Einstein has out-of-the-box AI features such as sales email generation in Sales Cloud and service replies in Service Cloud. They also have tools such as Copilot, Prompt, and Model Builder, three tools contained in the Einstein 1 Studio, that allow organizations to build custom AI functionality and roll it out to their users.

The Salesforce Einstein AI Platform team is the group supporting development of Einstein applications. They are committed to enhancing the performance and capabilities of AI models, with a particular focus on large language models (LLMs) for use with Einstein product offerings. These models are designed to provide advanced NLP capabilities for various business applications. Their mission is to continuously refine these LLMs and AI models by integrating state-of-the-art solutions and collaborating with leading technology providers, including open source communities and public cloud services like AWS, and building these into a unified AI platform. This helps make sure Salesforce customers receive the most advanced AI technology available.

In this post, we share how the Salesforce Einstein AI Platform team improved the latency and throughput of their code generation LLM using Amazon SageMaker.

The challenge with hosting LLMs

In the beginning of 2023, the team started looking at solutions to host CodeGen, Salesforce’s in-house open source LLM for code understanding and code generation. The CodeGen model allows users to translate natural language, such as English, into programming languages, such as Python. Because they were already using AWS for inference for their smaller predictive models, they were looking to extend the Einstein platform to help them host CodeGen. Salesforce developed an ensemble of CodeGen models (Inline for automatic code completion, BlockGen for code block generation, and FlowGPT for process flow generation) specifically tuned for the Apex programming language. Salesforce Apex is a certified framework for building SaaS apps on top of Salesforce’s CRM functionality. They were looking for a solution that can securely host their model and help them handle a large volume of inference requests as well as multiple concurrent requests at scale. They also needed to be able to meet their throughput and latency requirements for their co-pilot application (EinsteinGPT for Developers). EinsteinGPT for Developers simplifies the start of development by creating smart Apex based on natural language prompts. Developers can accelerate coding tasks by scanning for code vulnerabilities and getting real-time code suggestions within the Salesforce integrated development environment (IDE), as shown in the following screenshot.

style="margin:

The Einstein team conducted a comprehensive evaluation of various tools and services, including open source options and paid solutions. After assessing these options, they found that SageMaker provided the best access to GPUs, scalability, flexibility, and performance optimizations for a wide range of scenarios, particularly in addressing their challenges with latency and throughput.

Why Salesforce Einstein chose SageMaker

SageMaker offered several specific features that proved essential to meeting Salesforce’s requirements:

  • Multiple serving engines – SageMaker includes specialized deep learning containers (DLCs), libraries, and tooling for model parallelism and large model inference (LMI) containers. LMI containers are a set of high-performance Docker containers purpose-built for LLM inference. With these containers, you can use high-performance open source inference libraries like FasterTransformer, TensorRT-LLM, vLLM, and Transformers NeuronX. These containers bundle together a model server with open source inference libraries to deliver an all-in-one LLM serving solution. The Einstein team liked how SageMaker provided quick-start notebooks that get them deploying these popular open source models in minutes.
  • Advanced batching strategies – The SageMaker LMI allows customers to optimize performance of their LLMs by enabling features like batching, which groups multiple requests together before they hit the model. Dynamic batching instructs the server to wait a predefined amount of time and batch up all requests that occur in that window with a maximum of 64 requests, while paying attention to a configured preferred size. This optimizes the use of GPU resources and balances throughput with latency, ultimately reducing the latter. The Einstein team liked how they were able to use dynamic batching through the LMI to increase throughput for their Codegen models while minimizing latency.
  • Efficient routing strategy – By default, SageMaker endpoints have a random routing strategy. SageMaker also supports a least outstanding requests (LOR) strategy, which allows SageMaker to optimally route requests to the instance that’s best suited to serve that request. SageMaker makes this possible by monitoring the load of the instances behind your endpoint and the models or inference components that are deployed on each instance. Customers have the flexibility to choose either algorithm depending on their workload needs. Along with the capability to handle multiple model instances across several GPUs, the Einstein team liked how the SageMaker routing strategy ensures that traffic is evenly and efficiently distributed to model instances, preventing any single instance from becoming a bottleneck.
  • Access to high-end GPUs – SageMaker provides access to top-end GPU instances, which are essential for running LLMs efficiently. This is particularly valuable given the current market shortages of high-end GPUs. SageMaker allowed the Einstein team to use auto-scaling of these GPUs to meet demand without manual intervention.
  • Rapid iteration and deployment – While not directly related to latency, the ability to quickly test and deploy changes using SageMaker notebooks helps in reducing the overall development cycle, which can indirectly impact latency by accelerating the implementation of performance improvements. The use of notebooks enabled the Einstein team to shorten their overall deployment time and get their models hosted in production much faster.

These features collectively help optimize the performance of LLMs by reducing latency and improving throughput, making Amazon SageMaker a robust solution for managing and deploying large-scale machine learning models.
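
As an illustration of the routing capability described above, the following is a minimal Python (boto3) sketch, not Salesforce's actual configuration, that creates an endpoint configuration opting into the least outstanding requests strategy. The model name, instance type, and resource names are placeholders.

import boto3

# Minimal sketch: create an endpoint configuration that uses the least
# outstanding requests (LOR) routing strategy, then deploy an endpoint from it.
sm = boto3.client("sagemaker")

sm.create_endpoint_config(
    EndpointConfigName="codegen-lmi-config",
    ProductionVariants=[
        {
            "VariantName": "AllTraffic",
            "ModelName": "your-llm-model",        # a model already registered in SageMaker
            "InstanceType": "ml.g5.12xlarge",     # example GPU instance type
            "InitialInstanceCount": 1,
            "RoutingConfig": {"RoutingStrategy": "LEAST_OUTSTANDING_REQUESTS"},
        }
    ],
)

sm.create_endpoint(EndpointName="codegen-lmi-endpoint", EndpointConfigName="codegen-lmi-config")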

One of the key capabilities was how using SageMaker LMI provided a blueprint of model performance optimization parameters for NVIDIA’s FasterTransformer library to use with CodeGen. When the team initially deployed CodeGen 2.5, a 7B parameter model on Amazon Elastic Compute Cloud (Amazon EC2), the model wasn’t performing well for inference. Initially, for a code block generation task, it could only handle six requests per minute, with each request taking over 30 seconds to process. This was far from efficient and scalable. However, after using the SageMaker FasterTransformer LMI notebook and referencing the advanced SageMaker-provided guides to understand how to optimize the different endpoint parameters provided, there was a significant improvement in model performance. The system now handles around 400 requests per minute with a reduced latency of approximately seven seconds per request, each containing about 512 tokens. This represents an over 6,500 percent increase in throughput after optimization. This enhancement was a major breakthrough, demonstrating how the capabilities of SageMaker were instrumental in optimizing the throughput of the LLM and reducing cost. (The FasterTransformer backend has been deprecated by NVIDIA; the team is working toward migrating to the TensorRT (TRT-LLM) LMI.)

To assess the performance of LLMs, the Einstein team focuses on two key metrics:

  • Throughput – Measured by the number of tokens an LLM can generate per second
  • Latency – Determined by the time it takes to generate these tokens for individual requests

Extensive performance testing and benchmarking were conducted to track these metrics. Before using SageMaker, CodeGen models had a lower token-per-second rate and higher latencies. With SageMaker optimization, the team observed significant improvements in both throughput and latency, as shown in the following figure.

Codegen Latency Graph

Latency and throughput changes with different techniques for CodeGen1 and CodeGen2.5 models. CodeGen1 is the original version of CodeGen, which is a 16B model. CodeGen2.5 is the optimized version, which is a 7B model. For more information about CodeGen 2.5, refer to CodeGen2.5: Small, but mighty.
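
As a rough illustration of how these two metrics can be measured against a SageMaker endpoint, the following Python sketch times a batch of invocations and derives average latency and tokens generated per second. The endpoint name, request payload shape, and token counting depend on the model container, so they are shown here as assumptions.

import json
import time
import boto3

# Minimal benchmarking sketch for the two metrics above: average latency per
# request and throughput in generated tokens per second.
runtime = boto3.client("sagemaker-runtime")

def measure(endpoint_name, prompt, count_tokens, n=10):
    latencies, tokens = [], 0
    for _ in range(n):
        start = time.perf_counter()
        resp = runtime.invoke_endpoint(
            EndpointName=endpoint_name,
            ContentType="application/json",
            Body=json.dumps({"inputs": prompt}),   # payload shape is container-specific
        )
        body = resp["Body"].read()
        latencies.append(time.perf_counter() - start)
        tokens += count_tokens(body)               # caller supplies a token-counting function
    total_time = sum(latencies)
    return {
        "avg_latency_s": total_time / n,
        "throughput_tokens_per_s": tokens / total_time,
    }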

New challenges and opportunities

The primary challenge that the team faced when integrating SageMaker was enhancing the platform to include specific functionalities that were essential for their projects. For instance, they needed additional features for NVIDIA’s FasterTransformer to optimize their model performance. Through a productive collaboration with the SageMaker team, they successfully integrated this support, which initially was not available.

Additionally, the team identified an opportunity to improve resource efficiency by hosting multiple LLMs on a single GPU instance. Their feedback helped develop the inference component feature, which now allows Salesforce and other SageMaker users to utilize GPU resources more effectively. These enhancements were crucial in tailoring the platform to Salesforce’s specific needs.

Key takeaways

The team took away the following key lessons from optimizing models in SageMaker for future projects:

  • Stay updated – It’s crucial to keep up with the latest inferencing engines and optimization techniques because these advancements significantly influence model optimization.
  • Tailor optimization strategies – Model-specific optimization strategies like batching and quantization require careful handling and coordination, because each model might require a tailored approach.
  • Implement cost-effective model hosting – You can optimize the allocation of limited GPU resources to control expenses. Techniques such as virtualization can be used to host multiple models on a single GPU, reducing costs.
  • Keep pace with innovations – The field of model inferencing is rapidly evolving with technologies like Amazon SageMaker JumpStart and Amazon Bedrock. Developing strategies for adopting and integrating these technologies is imperative for future optimization efforts.

Conclusion

In this post, we shared how the Salesforce Einstein AI Platform team improved the latency and throughput of their code generation LLM using SageMaker, achieving an over 6,500 percent increase in throughput after optimization.

Looking to host your own LLMs on SageMaker? To get started, see this guide.

_______________________________________________________________________

About the Authors

Pawan Agarwal is the Senior Director of Software Engineering at Salesforce. He leads efforts in Generative and Predictive AI, focusing on inferencing, training, fine-tuning, and notebooking technologies that power the Salesforce Einstein suite of applications.

Rielah De Jesus is a Principal Solutions Architect at AWS who has successfully helped various enterprise customers in the DC, Maryland, and Virginia area move to the cloud. In her current role she acts as a customer advocate and technical advisor focused on helping organizations like Salesforce achieve success on the AWS platform. She is also a staunch supporter of Women in IT and is very passionate about finding ways to creatively use technology and data to solve everyday challenges.

Detect and protect sensitive data with Amazon Lex and Amazon CloudWatch Logs

In today’s digital landscape, the protection of personally identifiable information (PII) is not just a regulatory requirement, but a cornerstone of consumer trust and business integrity. Organizations use advanced natural language detection services like Amazon Lex for building conversational interfaces and Amazon CloudWatch for monitoring and analyzing operational data.

One risk many organizations face is the inadvertent exposure of sensitive data through logs, voice chat transcripts, and metrics. This risk is exacerbated by the increasing sophistication of cyber threats and the stringent penalties associated with data protection violations. Dealing with massive datasets is not just about identifying and categorizing PII. The challenge also lies in implementing robust mechanisms to obfuscate and redact this sensitive data. At the same time, it’s crucial to make sure these security measures don’t undermine the functionality and analytics critical to business operations.

This post addresses this pressing pain point, offering prescriptive guidance on safeguarding PII through detection and masking techniques specifically tailored for environments using Amazon Lex and CloudWatch Logs.

Solution overview

To address this critical challenge, our solution uses the slot obfuscation feature in Amazon Lex and the data protection capabilities of CloudWatch Logs, tailored specifically for detecting and protecting PII in logs.

In Amazon Lex, slots are used to capture and store user input during a conversation. Slots are placeholders within an intent that represent an action the user wants to perform. For example, in a flight booking bot, slots might include departure city, destination city, and travel dates. Slot obfuscation makes sure any information collected through Amazon Lex conversational interfaces, such as names, addresses, or any other PII entered by users, is obfuscated at the point of capture. This method reduces the risk of sensitive data exposure in chat logs and playbacks.

In CloudWatch Logs, data protection and custom identifiers add an additional layer of security by enabling the masking of PII within session attributes, input transcripts, and other sensitive log data that is specific to your organization.

This approach minimizes the footprint of sensitive information across these services and helps with compliance with data protection regulations.

In the following sections, we demonstrate how to identify and classify your data, locate your sensitive data, and finally monitor and protect it, both in transit and at rest, especially in areas where it may inadvertently appear. The following are the four ways to do this:

  • Amazon Lex – Monitor and protect data with Amazon Lex using slot obfuscation and selective conversation log capture
  • CloudWatch Logs – Monitor and protect data with CloudWatch Logs using playbacks and log group policies
  • Amazon S3 – Monitor and protect data with Amazon Simple Storage Service (Amazon S3) using bucket security and encryption
  • Service Control Policies – Monitor and protect data with governance controls and risk management policies, using Service Control Policies (SCPs) to prevent changes to Amazon Lex chatbots and CloudWatch Logs log groups, and to restrict unmasked data viewing in CloudWatch Logs Insights

Identify and classify your data

The first step is to identify and classify the data flowing through your systems. This involves understanding the types of information processed and determining their sensitivity level.

To determine all the slots in an intent in Amazon Lex, complete the following steps:

  1. On the Amazon Lex console, choose Bots in the navigation pane.
  2. Choose your preferred bot.
  3. In the navigation pane, choose the locale under All Languages and choose Intents.
  4. Choose the required intent from the list.
  5. In the Slots section, make note of all the slots within the intent.

Lex bot slots

After you identify the slots within the intent, it’s important to classify them according to their sensitivity level and the potential impact of unauthorized access or disclosure. For example, you may have the following data types:

  • Name
  • Address
  • Phone number
  • Email address
  • Account number

Email address and physical mailing address are often considered a medium classification level. Sensitive data, such as name, account number, and phone number, should be tagged with a high classification level, indicating the need for stringent security measures. These guidelines can help with systematically evaluating data.

Locate your data stores

After you classify the data, the next step is to locate where this data resides or is processed in your systems and applications. For services involving Amazon Lex and CloudWatch, it’s crucial to identify all data stores and their roles in handling PII.

CloudWatch captures logs generated by Amazon Lex, including interaction logs that might contain PII. Regular audits and monitoring of these logs are essential to detect any unauthorized access or anomalies in data handling.

Amazon S3 is often used in conjunction with Amazon Lex for storing call recordings or transcripts, which may contain sensitive information. Making sure these storage buckets are properly configured with encryption, access controls, and lifecycle policies is vital to protect the stored data.

Organizations can create a robust framework for protection by identifying and classifying data, along with pinpointing the data stores (like CloudWatch and Amazon S3). This framework should include regular audits, access controls, and data encryption to prevent unauthorized access and comply with data protection laws.

Monitor and protect data with Amazon Lex

In this section, we demonstrate how to protect your data with Amazon Lex using slot obfuscation and selective conversation log capture.

Slot obfuscation in Amazon Lex

Sensitive information can appear in the input transcripts of conversation logs. It’s essential to implement mechanisms that detect and mask or redact PII in these transcripts before they are stored or logged.

In the development of conversational interfaces using Amazon Lex, safeguarding PII is crucial to maintain user privacy and comply with data protection regulations. Slot obfuscation provides a mechanism to automatically obscure PII within conversation logs, making sure sensitive information is not exposed. When configuring an intent within an Amazon Lex bot, developers can mark specific slots—placeholders for user-provided information—as obfuscated. This setting tells Amazon Lex to replace the actual user input for these slots with a placeholder in the logs. For instance, enabling obfuscation for slots designed to capture sensitive information like account numbers or phone numbers makes sure any matching input is masked in the conversation log. Slot obfuscation allows developers to significantly reduce the risk of inadvertently logging sensitive information, thereby enhancing the privacy and security of the conversational application. It’s a best practice to identify and mark all slots that could potentially capture PII during the bot design phase to provide comprehensive protection across the conversation flow.

To enable obfuscation for a slot from the Amazon Lex console, complete the following steps:

  1. On the Amazon Lex console, choose Bots in the navigation pane.
  2. Choose your preferred bot.
  3. In the navigation pane, choose the locale under All Languages and choose Intents.
  4. Choose your preferred intent from the list.
  5. In the Slots section, expand the slot details.
  6. Choose Advanced options to access additional settings.
  7. Select Enable slot obfuscation.
  8. Choose Update slot to save the changes.

Lex enable slot obfuscation
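
If you manage bot definitions programmatically, the same setting can be applied through the Lex V2 model-building API. The following is a minimal Python (boto3) sketch under the assumption that the bot, intent, and slot IDs are known; it reads the slot's current definition and rewrites it with obfuscation enabled against the DRAFT bot version.

import boto3

# Minimal sketch: enable obfuscation on an existing slot via the Lex V2 model API.
# All IDs are placeholders; updates must target the DRAFT bot version.
lex = boto3.client("lexv2-models")

params = dict(
    botId="YOUR_BOT_ID",
    botVersion="DRAFT",
    localeId="en_US",
    intentId="YOUR_INTENT_ID",
    slotId="YOUR_SLOT_ID",
)

# Read the slot's current definition so only the obfuscation setting changes
current = lex.describe_slot(**params)

update_kwargs = dict(
    **params,
    slotName=current["slotName"],
    valueElicitationSetting=current["valueElicitationSetting"],
    obfuscationSetting={"obfuscationSettingType": "DefaultObfuscation"},
)
if "slotTypeId" in current:                 # composite slots may not return a slot type ID
    update_kwargs["slotTypeId"] = current["slotTypeId"]

lex.update_slot(**update_kwargs)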

Selective conversation log capture

Amazon Lex lets you select how conversation logs capture text and audio data from live conversations, by enabling the filtering of certain types of information from the logs. By selectively capturing only the necessary data, businesses can minimize the risk of exposing private or confidential information. Additionally, this feature can help organizations comply with data privacy regulations, because it gives them more control over the data collected and stored. You can choose between text logs, audio logs, or both.

When selective conversation log capture is enabled for text and audio logs, it disables logging for all intents and slots in the conversation. To generate text and audio logs for particular intents and slots, set the text and audio selective conversation log capture session attributes for those intents and slots to “true”. When selective conversation log capture is enabled, any slot values in SessionState, Interpretations, and Transcriptions for which logging is not enabled using session attributes will be obfuscated in the generated text log.

To enable selective conversation log capture, complete the following steps:

  1. On the Amazon Lex console, choose Bots in the navigation pane.
  2. Choose your preferred bot.
  3. Choose Aliases under Deployment and choose the bot’s alias.
  4. Choose Manage conversation logs.
  5. Select Selectively log utterances.
    1. For text logs, choose a CloudWatch log group.
    2. For audio logs, choose an S3 bucket to store the logs and assign an AWS Key Management Service (AWS KMS) key for added security.
  6. Save the changes.

Lex enable selective text logging

Lex enable selective audio logging

Now selective conversation log capture for a slot is activated.

  1. Choose Intents in the navigation pane and choose your intent.
  2. Under Initial responses, choose Advanced options and expand Set values.
  3. For Session attributes, set the following attributes based on the intents and slots for which you want to enable selective conversation log capture. This will capture utterances that contain only a specific slot in the conversation.
    1. x-amz-lex:enable-audio-logging:<intent>:<slot> = "true"
    2. x-amz-lex:enable-text-logging:<intent>:<slot> = "true"
  4. Choose Update options and rebuild the bot.

Replace <intent> and <slot> with the respective intent and slot names. (If you set these session attributes from a Lambda code hook instead of the console, see the sketch that follows.)

Lex selective conversation log capture
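
The following is a minimal Python sketch of setting the same session attributes from a Lambda dialog code hook rather than the console. The intent and slot names (BookFlight, PhoneNumber) are placeholders; the response follows the standard Lex V2 Lambda response shape.

# Minimal sketch: enable selective text and audio logging for one intent/slot
# pair from a Lambda code hook by setting the session attributes shown above.
def lambda_handler(event, context):
    session_attributes = event["sessionState"].get("sessionAttributes") or {}
    # Log text and audio only for the PhoneNumber slot of the BookFlight intent
    session_attributes["x-amz-lex:enable-text-logging:BookFlight:PhoneNumber"] = "true"
    session_attributes["x-amz-lex:enable-audio-logging:BookFlight:PhoneNumber"] = "true"

    return {
        "sessionState": {
            "sessionAttributes": session_attributes,
            "dialogAction": {"type": "Delegate"},
            "intent": event["sessionState"]["intent"],
        }
    }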

Monitor and protect data with CloudWatch Logs

In this section, we demonstrate how to protect your data with CloudWatch using playbacks and log group policies.

Playbacks in CloudWatch Logs

When Amazon Lex engages in interactions, delivering prompts or messages from the bot to the customer, there’s a potential risk for PII to be inadvertently included in these communications. This risk extends to CloudWatch Logs, where these interactions are recorded for monitoring, debugging, and analysis purposes. The playback of prompts or messages designed to confirm or clarify user input can inadvertently expose sensitive information if not properly handled. To mitigate this risk and protect PII within these interactions, a strategic approach is necessary when designing and deploying Amazon Lex bots.

The solution lies in carefully structuring how slot values, which may contain PII, are referenced and used in the bot’s response messages. Adopting a prescribed format for passing slot values, specifically by encapsulating them within curly braces (for example, {slotName}), allows developers to control how this information is presented back to the user and logged in CloudWatch. This method makes sure that when the bot constructs a message, it refers to the slot by its name rather than its value, thereby preventing any sensitive information from being directly included in the message content. For example, instead of the bot saying, “Is your phone number 123-456-7890? ” it would use a generic placeholder, “Is your phone number {PhoneNumber}? ” with {PhoneNumber} being a reference to the slot that captured the user’s phone number. This approach allows the bot to confirm or clarify information without exposing the actual data.

When these interactions are logged in CloudWatch, the logs will only contain the slot name references, not the actual PII. This technique significantly reduces the risk of sensitive information being exposed in logs, enhancing privacy and compliance with data protection regulations. Organizations should make sure all personnel involved in bot design and deployment are trained on these practices to consistently safeguard user information across all interactions.

The following is sample AWS Lambda function code in Python that references the slot value of a phone number provided by the user. SSML tags are used to format the slot value for slow and clear speech output, and the function returns a response to confirm the correctness of the captured phone number:

def lambda_handler(event, context):
    # Extract the intent name from the event
    intent_name = event['sessionState']['intent']['name']
    # Extract the slots from the event
    slots = event['sessionState']['intent']['slots']

    # Check if the intent name is 'INTENT_NAME'
    if intent_name == 'INTENT_NAME':
        # Retrieve the phone number from the 'SLOT_NAME' slot
        phone_number = slots['SLOT_NAME']['value']['interpretedValue']
        
        # Create an SSML-formatted message with the phone number
        msg = f'''<speak>
                Thank you for providing your phone number. Is 
                <prosody rate="slow">
                <say-as interpret-as="telephone">{phone_number}</say-as>
                </prosody> correct?
                </speak>'''
        
        # Create a message array
        message_array = [
            {
                'contentType': 'SSML',
                'content': msg
            }
        ]
        
        # Response with the dialog action, intent state, and the message array
        response = {
            'sessionState': {
                'dialogAction': {
                    'type': 'Close'
                },
                'intent': {
                    'name': intent_name,
                    'state': 'Fulfilled'
                }
            },
            'messages': message_array
        }
    else:
        # Generic response for unhandled intents
        response = {
            'sessionState': {
                'dialogAction': {
                    'type': 'Close'
                },
                'intent': {
                    'name': intent_name,
                    'state': 'Fulfilled'
                }
            },
            'messages': [
                {
                    'contentType': 'PlainText',
                    'content': 'I apologize, but I am unable to assist.'
                }
            ]
        }
    return response

Replace INTENT_NAME and SLOT_NAME with your preferred intent and slot names, respectively.

CloudWatch data protection log group policies for data identifiers

Sensitive data that’s ingested by CloudWatch Logs can be safeguarded by using log group data protection policies. These policies allow you to audit and mask sensitive data that appears in log events ingested by the log groups in your account.

CloudWatch Logs supports both managed and custom data identifiers.

Managed data identifiers offer preconfigured data types to protect financial data, personal health information (PHI), and PII. For some types of managed data identifiers, the detection also depends on finding certain keywords in proximity to the sensitive data.

Each managed data identifier is designed to detect a specific type of sensitive data, such as name, email address, account numbers, AWS secret access keys, or passport numbers for a particular country or region. When creating a data protection policy, you can configure it to use these identifiers to analyze logs ingested by the log group, and take actions when they are detected.

CloudWatch Logs data protection can detect the categories of sensitive data by using managed data identifiers.

To configure managed data identifiers on the CloudWatch console, complete the following steps:

  1. On the CloudWatch console, under Logs in the navigation pane, choose Log groups.
  2. Select your log group and on the Actions menu, choose Create data protection policy.
  3. Under Auditing and masking configuration, for Managed data identifiers, select all the identifiers for which the data protection policy should be applied.
  4. Choose the data store to apply the policy to and save the changes.

Cloudwatch managed data identifiers

Custom data identifiers let you define your own custom regular expressions that can be used in your data protection policy. With custom data identifiers, you can target business-specific PII use cases that managed data identifiers don’t provide. For example, you can use custom data identifiers to look for a company-specific account number format.

To create a custom data identifier on the CloudWatch console, complete the following steps:

  1. On the CloudWatch console, under Logs in the navigation pane, choose Log groups.
  2. Select your log group and on the Actions menu, choose Create data protection policy.
  3. Under Custom Data Identifier configuration, choose Add custom data identifier.
  4. Create your own regex patterns to identify sensitive information that is unique to your organization or specific use case.
  5. After you add your data identifier, choose the data store to apply this policy to.
  6. Choose Activate data protection.

Cloudwatch custom data identifier

For details about the types of data that can be protected, refer to Types of data that you can protect.
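
Data protection policies can also be attached programmatically. The following minimal Python (boto3) sketch applies an audit-and-mask policy to a conversation log group; the log group name is a placeholder, the policy masks only email addresses, and the policy document shape should be verified against the current CloudWatch Logs documentation before use.

import json
import boto3

# Minimal sketch: attach a data protection policy to a log group via the API.
# Add more managed or custom data identifiers to the DataIdentifier lists as needed.
logs = boto3.client("logs")

policy = {
    "Name": "lex-log-data-protection",
    "Version": "2021-06-01",
    "Statement": [
        {
            "Sid": "audit-policy",
            "DataIdentifier": ["arn:aws:dataprotection::aws:data-identifier/EmailAddress"],
            "Operation": {"Audit": {"FindingsDestination": {}}},
        },
        {
            "Sid": "redact-policy",
            "DataIdentifier": ["arn:aws:dataprotection::aws:data-identifier/EmailAddress"],
            "Operation": {"Deidentify": {"MaskConfig": {}}},
        },
    ],
}

logs.put_data_protection_policy(
    logGroupIdentifier="/aws/lex/your-bot-conversation-logs",  # placeholder log group
    policyDocument=json.dumps(policy),
)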

Monitor and protect data with Amazon S3

In this section, we demonstrate how to protect your data in S3 buckets.

Encrypt audio recordings in S3 buckets

PII can often be captured in audio recordings, especially in sectors like customer service, healthcare, and financial services, where sensitive information is frequently exchanged over voice interactions. To comply with domain-specific regulatory requirements, organizations must adopt stringent measures for managing PII in audio files.

One approach is to disable the recording feature entirely if it poses too high a risk of non-compliance or if the value of the recordings doesn’t justify the potential privacy implications. However, if audio recordings are essential, streaming the audio data in real time using Amazon Kinesis provides a scalable and secure method to capture, process, and analyze audio data. This data can then be exported to a secure and compliant storage solution, such as Amazon S3, which can be configured to meet specific compliance needs including encryption at rest. You can use AWS KMS or AWS CloudHSM to manage encryption keys, offering robust mechanisms to encrypt audio files at rest, thereby securing the sensitive information they might contain. Implementing these encryption measures makes sure that even if data breaches occur, the encrypted PII remains inaccessible to unauthorized parties.

Configuring these AWS services allows organizations to balance the need for audio data capture with the imperative to protect sensitive information and comply with regulatory standards.

S3 bucket security configurations

You can use an AWS CloudFormation template to configure various security settings for an S3 bucket that stores Amazon Lex data like audio recordings and logs. For more information, see Creating a stack on the AWS CloudFormation console. See the following example code:

AWSTemplateFormatVersion: '2010-09-09'
Description: Create a secure S3 bucket with KMS encryption to store Lex Data
Resources:
  S3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: YOUR_LEX_DATA_BUCKET
      AccessControl: Private
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: aws:kms
              KMSMasterKeyID: alias/aws/s3 
      VersioningConfiguration:
        Status: Enabled
      ObjectLockEnabled: true
      ObjectLockConfiguration:
        ObjectLockEnabled: Enabled
        Rule:
          DefaultRetention:
            Mode: GOVERNANCE
            Years: 5
      LoggingConfiguration:
        DestinationBucketName: YOUR_SERVER_ACCESS_LOG_BUCKET
        LogFilePrefix: lex-bucket-logs/

The template defines the following properties:

  • BucketName – Specifies the bucket. Replace YOUR_LEX_DATA_BUCKET with your preferred bucket name.
  • AccessControl – Sets the bucket access control to Private, denying public access by default.
  • PublicAccessBlockConfiguration – Explicitly blocks all public access to the bucket and its objects.
  • BucketEncryption – Enables server-side encryption using the default KMS encryption key ID, alias/aws/s3, managed by AWS for Amazon S3. You can also create custom KMS keys. For instructions, refer to Creating symmetric encryption KMS keys.
  • VersioningConfiguration – Enables versioning for the bucket, allowing you to maintain multiple versions of objects.
  • ObjectLockEnabled and ObjectLockConfiguration – Enable Object Lock with a governance mode retention period of 5 years, preventing objects from being deleted or overwritten during that period.
  • LoggingConfiguration – Enables server access logging for the bucket, directing log files to a separate logging bucket for auditing and analysis purposes. Replace YOUR_SERVER_ACCESS_LOG_BUCKET with your preferred bucket name.

This is just an example; you may need to adjust the configurations based on your specific requirements and security best practices.
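
If you prefer to deploy the template programmatically instead of through the console, the following is a minimal sketch using boto3; the stack name and the local template file path are placeholders.

import boto3

cloudformation = boto3.client("cloudformation")

# Read the template shown above from a local file (path is a placeholder).
with open("secure-lex-bucket.yaml") as template_file:
    template_body = template_file.read()

cloudformation.create_stack(
    StackName="secure-lex-data-bucket",
    TemplateBody=template_body,
)

# Wait for stack creation to finish before using the bucket.
waiter = cloudformation.get_waiter("stack_create_complete")
waiter.wait(StackName="secure-lex-data-bucket")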

Monitor and protect with data governance controls and risk management policies

In this section, we demonstrate how to protect your data using service control policies (SCPs). To create an SCP, see Creating an SCP.

Prevent changes to an Amazon Lex chatbot using an SCP

To prevent changes to an Amazon Lex chatbot using an SCP, create one that denies the specific actions related to modifying or deleting the chatbot. For example, you could use the following SCP:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": [
        "lex:DeleteBot",
        "lex:DeleteBotAlias",
        "lex:DeleteBotChannelAssociation",
        "lex:DeleteBotVersion",
        "lex:DeleteIntent",
        "lex:DeleteSlotType",
        "lex:DeleteUtterances",
        "lex:PutBot",
        "lex:PutBotAlias",
        "lex:PutIntent",
        "lex:PutSlotType"
      ],
      "Resource": [
        "arn:aws:lex:*:YOUR_ACCOUNT_ID:bot:YOUR_BOT_NAME",
        "arn:aws:lex:*:YOUR_ACCOUNT_ID:intent:YOUR_BOT_NAME:*",
        "arn:aws:lex:*:YOUR_ACCOUNT_ID:slottype:YOUR_BOT_NAME:*"
      ],
      "Condition": {
        "StringEquals": {
          "aws:PrincipalArn": "arn:aws:iam::YOUR_ACCOUNT_ID:role/YOUR_IAM_ROLE"
        }
      }
    }
  ]
}

The code defines the following:

  • Effect – This is set to Deny, which means that the specified actions will be denied.
  • Action – This contains a list of actions related to modifying or deleting Amazon Lex bots, bot aliases, intents, and slot types.
  • Resource – This lists the Amazon Resource Names (ARNs) for your Amazon Lex bot, intents, and slot types. Replace YOUR_ACCOUNT_ID with your AWS account ID and YOUR_BOT_NAME with the name of your Amazon Lex bot.
  • Condition – This exempts a specific IAM role from the deny, so only that role can perform the listed actions. Replace YOUR_ACCOUNT_ID with your AWS account ID and YOUR_IAM_ROLE with the name of the AWS Identity and Access Management (IAM) provisioned role that should retain access.

When this SCP is attached to an AWS Organizations organizational unit (OU) or an individual AWS account, it will allow only the specified provisioning role while preventing all other IAM entities (users, roles, or groups) within that OU or account from modifying or deleting the specified Amazon Lex bot, intents, and slot types.

This SCP only prevents changes to the Amazon Lex bot and its components. It doesn’t restrict other actions, such as invoking the bot or retrieving its configuration. If more actions need to be restricted, you can add them to the Action list in the SCP.
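
If you manage SCPs with the AWS SDK rather than the console, the following is a minimal sketch that creates this policy and attaches it to an organizational unit using boto3; the policy file name, policy name, and target OU ID are placeholders.

import boto3

organizations = boto3.client("organizations")

# Load the SCP shown above from a local file (path is a placeholder).
with open("deny-lex-bot-changes.json") as policy_file:
    policy_content = policy_file.read()

response = organizations.create_policy(
    Name="deny-lex-bot-changes",
    Description="Prevent changes to the Amazon Lex bot except by the provisioning role",
    Type="SERVICE_CONTROL_POLICY",
    Content=policy_content,
)

# Attach the SCP to an OU or account; replace the target ID with your own.
organizations.attach_policy(
    PolicyId=response["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-xxxx-xxxxxxxx",
)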

Prevent changes to a CloudWatch Logs log group using an SCP

To prevent changes to a CloudWatch Logs log group using an SCP, create one that denies the specific actions related to modifying or deleting the log group. The following is an example SCP that you can use:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": [
        "logs:DeleteLogGroup",
        "logs:PutRetentionPolicy"
      ],
      "Resource": "arn:aws:logs:*:YOUR_ACCOUNT_ID:log-group:/aws/YOUR_LOG_GROUP_NAME*",
      "Condition": {
        "StringEquals": {
          "aws:PrincipalArn": "arn:aws:iam::YOUR_ACCOUNT_ID:role/YOUR_IAM_ROLE"
        }
      }
    }
  ]
}

The code defines the following:

  • Effect – This is set to Deny, which means that the specified actions will be denied.
  • Action – This includes logs:DeleteLogGroup and logs:PutRetentionPolicy actions, which prevent deleting the log group and modifying its retention policy, respectively.
  • Resource – This lists the ARN for your CloudWatch Logs log group. Replace YOUR_ACCOUNT_ID with your AWS account ID and YOUR_LOG_GROUP_NAME with the name of your log group.
  • Condition – This exempts a specific IAM role from the deny, so only that role can perform the listed actions. Replace YOUR_ACCOUNT_ID with your AWS account ID and YOUR_IAM_ROLE with the name of the IAM provisioned role that should retain access.

Similar to the preceding chatbot SCP, when this SCP is attached to an Organizations OU or an individual AWS account, it will allow only the specified provisioning role to delete the specified CloudWatch Logs log group or modify its retention policy, while preventing all other IAM entities (users, roles, or groups) within that OU or account from performing these actions.

This SCP only prevents changes to the log group itself and its retention policy. It doesn’t restrict other actions, such as creating or deleting log streams within the log group or modifying other log group configurations. To restrict additional actions, add them to the Action list in the SCP.

Also, this SCP will apply to all log groups that match the specified resource ARN pattern. To target a specific log group, modify the Resource value accordingly.

Restrict viewing of unmasked sensitive data in CloudWatch Logs Insights using an SCP

When you create a data protection policy, by default, any sensitive data that matches the data identifiers you’ve selected is masked at all egress points, including CloudWatch Logs Insights, metric filters, and subscription filters. Only users who have the logs:Unmask IAM permission can view unmasked data. The following is an SCP you can use:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "RestrictUnmasking",
      "Effect": "Deny",
      "Action": "logs:Unmask",
      "Resource": "arn:aws:logs:*:YOUR_ACCOUNT_ID:log-group:YOUR_LOG_GROUP:*",
      "Condition": {
        "StringEquals": {
          "aws:PrincipalArn": "arn:aws:iam::YOUR_ACCOUNT_ID:role/YOUR_IAM_ROLE"
        }
      }
    }
  ]
}

It defines the following:

  • Effect – This is set to Deny, which means that the specified actions will be denied.
  • Action – This includes logs:Unmask, the permission required to view unmasked sensitive data.
  • Resource – This lists the ARN for your CloudWatch Logs log group. Replace YOUR_ACCOUNT_ID with your AWS account ID and YOUR_LOG_GROUP_NAME with the name of your log group.
  • Condition – This exempts a specific IAM role from the deny, so only that role can unmask and view the original data. Replace YOUR_ACCOUNT_ID with your AWS account ID and YOUR_IAM_ROLE with the name of the IAM provisioned role that should retain this access.

Similar to the previous SCPs, when this SCP is attached to an Organizations OU or an individual AWS account, it will allow only the specified provisioning role while preventing all other IAM entities (users, roles, or groups) within that OU or account from unmasking sensitive data from the CloudWatch Logs log group.

As with the previous SCPs, this policy is narrowly scoped: it only restricts the logs:Unmask action for the specified log group. It doesn’t restrict other actions such as creating or deleting log streams within the log group or modifying other log group configurations. To restrict additional actions, add them to the Action list in the SCP.

Also, this SCP will apply to all log groups that match the specified resource ARN pattern. To target a specific log group, modify the Resource value accordingly.

Clean up

To avoid incurring additional charges, clean up your resources:

  1. Delete the Amazon Lex bot:
    1. On the Amazon Lex console, choose Bots in the navigation pane.
    2. Select the bot to delete and on the Action menu, choose Delete.
  2. Delete the associated Lambda function:
    1. On the Lambda console, choose Functions in the navigation pane.
    2. Select the function associated with the bot and on the Action menu, choose Delete.
  3. Delete the account-level data protection policy. For instructions, see DeleteAccountPolicy.
  4. Delete the CloudWatch Logs log group data protection policy:
    1. On the CloudWatch console, under Logs in the navigation pane, choose Log groups.
    2. Choose your log group.
    3. On the Data protection tab, under Log group policy, choose the Actions menu and choose Delete policy.
  5. Delete the S3 bucket that stores the Amazon Lex data:
    1. On the Amazon S3 console, choose Buckets in the navigation pane.
    2. Select the bucket you want to delete, then choose Delete.
    3. To confirm that you want to delete the bucket, enter the bucket name and choose Delete bucket.
  6. Delete the CloudFormation stack. For instructions, see Deleting a stack on the AWS CloudFormation console.
  7. Delete the SCP. For instructions, see Deleting an SCP.
  8. Delete the KMS key. For instructions, see Deleting AWS KMS keys.

Conclusion

Securing PII within AWS services like Amazon Lex and CloudWatch requires a comprehensive and proactive approach. By following the steps in this post—identifying and classifying data, locating data stores, monitoring and protecting data in transit and at rest, and implementing SCPs for Amazon Lex and Amazon CloudWatch—organizations can create a robust security framework. This framework not only protects sensitive data, but also complies with regulatory standards and mitigates potential risks associated with data breaches and unauthorized access.

Emphasizing the need for regular audits, continuous monitoring, and updating security measures in response to emerging threats and technological advancements is crucial. Adopting these practices allows organizations to safeguard their digital assets, maintain customer trust, and build a reputation for strong data privacy and security in the digital landscape.


About the Authors

Rashmica Gopinath is a software development engineer with Amazon Lex. Rashmica is responsible for developing new features, improving the service’s performance and reliability, and ensuring a seamless experience for customers building conversational applications. Rashmica is dedicated to creating innovative solutions that enhance human-computer interaction. In her free time, she enjoys winding down with the works of Dostoevsky or Kafka.

Dipkumar Mehta is a Principal Consultant with the Amazon ProServe Natural Language AI team. He focuses on helping customers design, deploy, and scale end-to-end Conversational AI solutions in production on AWS. He is also passionate about improving customer experience and driving business outcomes by leveraging data. Additionally, Dipkumar has a deep interest in Generative AI, exploring its potential to revolutionize various industries and enhance AI-driven applications.

David Myers is a Sr. Technical Account Manager with AWS Enterprise Support. With over 20 years of technical experience, observability has been part of his career from the start. David loves improving customers’ observability experiences at Amazon Web Services.

Sam Patel is a Security Consultant specializing in safeguarding Generative AI (GenAI), Artificial Intelligence systems, and Large Language Models (LLM) for Fortune 500 companies. Serving as a trusted advisor, he invents and spearheads the development of cutting-edge best practices for secure AI deployment, empowering organizations to leverage transformative AI capabilities while maintaining stringent security and privacy standards.


AWS AI chips deliver high performance and low cost for Llama 3.1 models on AWS

Today, we are excited to announce AWS Trainium and AWS Inferentia support for fine-tuning and inference of the Llama 3.1 models. The Llama 3.1 family of multilingual large language models (LLMs) is a collection of pre-trained and instruction tuned generative models in 8B, 70B, and 405B sizes. In a previous post, we covered how to deploy Llama 3 models on AWS Trainium and Inferentia based instances in Amazon SageMaker JumpStart. In this post, we outline how to get started with fine-tuning and deploying the Llama 3.1 family of models on AWS AI chips, to realize their price-performance benefits.

Overview of Llama 3.1 models

The Llama 3.1 family of multilingual LLMs are a collection of pre-trained and instruction tuned generative models in 8B, 70B, and 405B sizes (text in/text and code out). All models support long context length (128k) and are optimized for inference with support for grouped query attention (GQA).

The Llama 3.1 instruction tuned models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the publicly available chat models on common industry benchmarks. They have been trained to generate tool calls for a few specific tools for capabilities like search, image generation, code execution, and mathematical reasoning. In addition, they support zero-shot tool use.

Llama 3.1 405B is the world’s largest publicly available LLM according to Meta. The model sets a new standard for artificial intelligence (AI) and is ideal for enterprise-level applications and research and development. It’s ideal for tasks like synthetic data generation, where the outputs of the model can be used to improve smaller Llama models after fine-tuning, and model distillations to transfer knowledge to smaller models from the 405B model. This model excels at general knowledge, long-form text generation, multilingual translation, machine translation, coding, math, tool use, enhanced contextual understanding, and advanced reasoning and decision-making.

Architecturally, the core LLM for Llama 3 and Llama 3.1 has the same dense architecture. They are auto-regressive language models that use an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.

The responsible use guide from Meta can assist you in implementing additional fine-tuning that may be necessary to customize and optimize the models with appropriate safety mitigations.

Trainium powers Llama 3.1 on Amazon Bedrock and Amazon SageMaker

The fastest way to get started with Llama 3.1 on AWS is through Amazon Bedrock, which is powered by our purpose-built AI infrastructure including AWS Trainium. Through its fully managed API, Amazon Bedrock delivers the benefits of our purpose-built AI infrastructure and simplifies access to these powerful models so you can focus on building differentiated AI applications.

If you need greater control over the underlying resources, you can fine-tune and deploy Llama 3.1 models with SageMaker. Trainium support for Llama 3.1 in SageMaker JumpStart is coming soon.

AWS Trainium and AWS Inferentia2 enable high performance and low cost for Llama 3.1 models

If you want to build your own ML pipelines for training and inference for greater flexibility and control, you can get started with Llama 3.1 on AWS AI chips using Amazon Elastic Compute Cloud (Amazon EC2) Trn1 and Inf2 instances. Let’s see how you can get started with the new Llama 3.1 8/70B models on Trainium using the AWS Neuron SDK.

Fine-tune Llama 3.1 on Trainium

To get started with fine-tuning either Llama 3.1 8B or Llama 3.1 70B, you can use the NeuronX Distributed library. NeuronX Distributed provides implementations of some of the more popular distributed training and inference techniques. To start fine-tuning, you can use the Llama 3.1 8B and Llama 3.1 70B fine-tuning samples provided with the library.

Both samples are built on top of AWS ParallelCluster to manage the Trainium cluster infrastructure and Slurm for workload management. The following is an example Slurm command to initiate training for Llama 3.1 70B:

sbatch --exclusive \
--nodes 32 \
--cpus-per-task 128 \
--wrap="srun bash $(pwd)/run_llama3_70B_tp_pp.sh"

Inside the Slurm script, we launch a distributed training process on our cluster. In the runner scripts, we load the pre-trained weights and configuration provided by Meta, and launch the training process: 

torchrun $DISTRIBUTED_ARGS run_llama_nxd.py \
    --train_batch_size $BS \
    --use_meta_device_init 1 \
    --training_dir $DATA_PATH \
    --training_config $SCRIPT_DIR/${MODEL_SIZE}config_llama${LLAMA_VERSION} \
    --max_steps $max_steps \
    --seq_len $SEQ_LEN \
    --pipeline_parallel_size $PP_DEGREE \
    --tensor_parallel_size $TP_DEGREE \
    --num_microbatches $NUM_MICROBATCHES \
    --lr 0.000015 \
    --min_lr 1e-06 \
    --beta1 0.9 \
    --beta2 0.95 \
    --weight_decay 0.1 \
    --warmup_steps 2000 \
    --constant_steps 0 \
    --use_zero1_optimizer 1 \
    --use_selective_checkpoint 1 \
    --use_flash_attention 1 \
    --qkv_linear 1 \
    --kv_replicator 4 \
    --pretrained_weight 1 \
    --save_load_xser 1 \
    --checkpoint_dir "/shared/llama${LLAMA_VERSION}${MODEL_SIZE}/" \
    --checkpoint_freq $checkpoint_freq \
    --num_kept_checkpoint -1 \
    --loading_step -1 \
    --tb_dir $tb_dir |& tee $LOG_PATH/log
exit ${PIPESTATUS[0]}

Deploy Llama 3.1 on Trainium

When your model is ready to deploy, you can do so by updating the model ID in the previous Llama 3 8B Neuron sample code:

# Import from transformers-neuronx (installed with the AWS Neuron SDK);
# neuron_config comes from the previous Llama 3 8B Neuron sample code.
from transformers_neuronx.llama.model import LlamaForSampling

model_id = "meta-llama/Meta-Llama-3.1-8B"
neuron_model = LlamaForSampling.from_pretrained(model_id, neuron_config=neuron_config, batch_size=1, tp_degree=24, amp='bf16', n_positions=4096)
neuron_model.to_neuron()

You can use the same sample inference code:

import time

import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(model_id)
prompt = "Hello, I'm a language model and I like to"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# run inference with top-k sampling
with torch.inference_mode():
    start = time.time()
    generated_sequences = neuron_model.sample(input_ids, sequence_length=2048, top_k=50)
    elapsed = time.time() - start

generated_sequences = [tokenizer.decode(seq) for seq in generated_sequences]
print(f'generated sequences {generated_sequences} in {elapsed} seconds')

For step-by-step details, refer to the new Llama 3.1 examples in the AWS Neuron documentation.

You can also use Hugging Face’s Optimum Neuron library to quickly deploy models directly from SageMaker through the Hugging Face Model Hub. From the Llama 3.1 model card hub, choose Deploy, then SageMaker, and finally AWS Inferentia & Trainium. Copy the example code into a SageMaker notebook, then choose Run.

import json
import sagemaker
import boto3
from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri

try:
    role = sagemaker.get_execution_role()
except ValueError:
    iam = boto3.client("iam")
    role = iam.get_role(RoleName="sagemaker_execution_role")["Role"]["Arn"]

# Hub Model configuration. https://huggingface.co/models
hub = {
    "HF_MODEL_ID": "meta-llama/Meta-Llama-3.1-8B",
    "HF_NUM_CORES": "2",
    "HF_AUTO_CAST_TYPE": "fp16",
    "MAX_BATCH_SIZE": "8",
    "MAX_INPUT_LENGTH": "3686",
    "MAX_TOTAL_TOKENS": "4096",
    "HF_TOKEN": "<REPLACE WITH YOUR TOKEN>",
}

assert hub["HF_TOKEN"] != "<REPLACE WITH YOUR TOKEN>", "Please replace '<REPLACE WITH YOUR TOKEN>' with your Hugging Face Hub API token"


# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
    image_uri=get_huggingface_llm_image_uri("huggingface-neuronx", version="0.0.23"),
    env=hub,
    role=role,
)

# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.inf2.xlarge",
    container_startup_health_check_timeout=1800,
    volume_size=512,
)

# send request
predictor.predict(
    {
        "inputs": "What is is the capital of France?",
        "parameters": {
            "do_sample": True,
            "max_new_tokens": 128,
            "temperature": 0.7,
            "top_k": 50,
            "top_p": 0.95,
        }
    }
)
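
When you finish testing, you can remove the endpoint created by the preceding deployment to avoid ongoing charges. The following snippet reuses the predictor object from the example above.

# Delete the model and endpoint created above to stop incurring charges.
predictor.delete_model()
predictor.delete_endpoint()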

Additionally, if you want to use vLLM to deploy the models, you can refer to the continuous batching guide to create the environment. After you create the environment, you can use vLLM to deploy Llama 3.1 8/70B models on AWS Trainium or Inferentia. The following is an example of deploying Llama 3.1 8B:

from vllm import LLM, SamplingParams
# Sample prompts.
prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]
# Create a sampling params object.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
# Create an LLM.
llm = LLM(
    model="meta-llama/Meta-Llama-3.1-8B",
    max_num_seqs=8,
    # The max_model_len and block_size arguments are required to be same as max sequence length,
    # when targeting neuron device. Currently, this is a known limitation in continuous batching
    # support in transformers-neuronx.
    max_model_len=128,
    block_size=128,
    # The device can be automatically detected when AWS Neuron SDK is installed.
    # The device argument can be either unspecified for automated detection, or explicitly assigned.
    device="neuron",
    tensor_parallel_size=8)
# Generate texts from the prompts. The output is a list of RequestOutput objects
# that contain the prompt, generated text, and other information.
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")

Conclusion

AWS Trainium and Inferentia deliver high performance and low cost for fine-tuning and deploying Llama 3.1 models. We are excited to see how you will use these powerful models and our purpose-built AI infrastructure to build differentiated AI applications. To learn more about how to get started with AWS AI chips, refer to Model Samples and Tutorials in AWS Neuron Documentation.


About the Authors

John Gray is a Sr. Solutions Architect in Annapurna Labs, AWS, based out of Seattle. In this role, John works with customers on their AI and machine learning use cases, architects solutions to cost-effectively solve their business problems, and helps them build a scalable prototype using AWS AI chips.

Pinak Panigrahi works with customers to build ML-driven solutions to solve strategic business problems on AWS. In his current role, he works on optimizing training and inference of generative AI models on AWS AI chips.

Kamran Khan is Head of Business Development for AWS Inferentia and AWS Trainium at AWS. He has over a decade of experience helping customers deploy and optimize deep learning training and inference workloads using AWS Inferentia and AWS Trainium.

Shruti Koparkar is a Senior Product Marketing Manager at AWS. She helps customers explore, evaluate, and adopt Amazon EC2 accelerated computing infrastructure for their machine learning needs.


Use Llama 3.1 405B to generate synthetic data for fine-tuning tasks

Today, we are excited to announce the availability of the Llama 3.1 405B model on Amazon SageMaker JumpStart, and Amazon Bedrock in preview. The Llama 3.1 models are a collection of state-of-the-art pre-trained and instruct fine-tuned generative artificial intelligence (AI) models in 8B, 70B, and 405B sizes. Amazon SageMaker JumpStart is a machine learning (ML) hub that provides access to algorithms, models, and ML solutions so you can quickly get started with ML. Amazon Bedrock offers a straightforward way to build and scale generative AI applications with Meta Llama models, using a single API.

In this post, we show how to use Llama 3.1 405B to generate data (labels for a sample dataset), and how to use the generated data to fine-tune a smaller model like Llama 3 8B to generate better responses compared to the non-fine-tuned model.

Overview of Llama 3.1 405B

The Llama 3.1 collection of multilingual large language models (LLMs) is a collection of pre-trained and instruction tuned generative models in 8B, 70B, and 405B sizes (text in/text out). All models support long context length (128,000) and are optimized for inference with support for grouped query attention (GQA). The Llama 3.1 instruction tuned text-only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the publicly available chat models on common industry benchmarks.

Llama 3.1 405B is the first publicly available model that rivals the top models in AI when it comes to state-of-the-art capabilities in general knowledge, steerability, math, tool use, and multilingual translation. There are some unique ways to use it—in addition to direct inference, you can use the Llama 3.1 405B model to generate synthetic data to improve smaller models, and it can be a powerful domain-specific model by acting as the base model for domain-specific fine-tuning.

Llama 3.1 models are available today for inferencing on SageMaker JumpStart and Amazon Bedrock. On SageMaker JumpStart, they are rolling out to all AWS Regions where SageMaker JumpStart is available and support the required instance types. Llama 3.1 405B will require P5 instances on Amazon SageMaker. The Llama 3.1 models are also available today in the us-west-2 Region on Amazon Bedrock, with planned future expanded Regional availability.

Prerequisites

The following prerequisites are needed to implement the steps outlined in this post:

Responses from the Llama 3 8B Instruct model

First, we perform inference with the Llama 3 8B model, either directly through Amazon Bedrock or through an endpoint deployed using SageMaker JumpStart. With Llama 3 Instruct models, which are optimized for dialogue use cases, the input to the model endpoints is the previous history between the chat assistant and the user. We can ask context-aware questions about the conversation that has happened so far, using specific formatting for the input text (described in our earlier Llama 3 release posts, Meta Llama 3 models are now available in Amazon Bedrock and Meta Llama 3 models are now available in Amazon SageMaker JumpStart).

In the following example, the user has a conversation with the assistant about tourist sites in Paris. The assistant generated four different recommendation options, and then the user inquires about the first option:

Input: <|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\nI am going to Paris, what should I see?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\nParis, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nWhat is so great about #1?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n

Output: The Eiffel Tower! It's an iconic symbol of Paris and one of the most recognizable landmarks in the world. Here are some reasons why it's so great:\n\n1. **Engineering marvel**: When it was built for the 1889 World's Fair, the Eiffel Tower was a groundbreaking feat of engineering. It was the tallest structure in the world at the time, standing at 324 meters (1,063 feet).\n2. **Panoramic views**: The Eiffel Tower offers breathtaking 360-degree views of the city from its observation decks. You can see many of Paris' famous landmarks, such as the Seine River, the Louvre, and Montmartre.\n3. **Romantic atmosphere**: The Eiffel Tower is often associated with romance and is a popular spot for proposals, weddings, and romantic dinners. The tower's iron latticework and twinkling lights at night add to its romantic ambiance.\n4. **Historical significance**: The Eiffel Tower has played a significant role in French history, serving as a communication tower during World War I and a symbol of French culture and engineering.\n5. **Accessibility**: The Eiffel Tower is easily accessible by public transportation or on foot, and there are various ways to experience it, including taking the stairs, elevator, or a guided tour.\n6. **Nighttime spectacle**: The Eiffel Tower is even more magical at night, when it's illuminated with thousands of twinkling lights. The light show, known as the "Eiffel Tower Sparkle," takes place every hour on the hour from dusk to midnight.\n7. **Iconic status**: The Eiffel Tower has been featured in countless films, literature, and art, making it an instantly recognizable symbol of Paris and France.\n\nOverall, the Eiffel Tower is a must-visit attraction in Paris, offering a unique combination of history, engineering, romance, and stunning views.

The Llama 3 8B model is able to generate answers for the questions without issues.

Next, let’s test the ability of Llama 3 8B to answer logical and arithmetic questions (derived from Hugging Face’s AQUA-RAT dataset—instead of multiple choice options, we ask for full answers) as follows:

Input:
<|begin_of_text|><|start_header_id|>user<|end_header_id|>
A board 7ft. 9 inches long is divided into 3 equal parts .
What is the length of each part??<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>

Output:
A nice simple problem! Let's convert the length of the board to inches:
7 ft 9 inches = 95 inches (since 1 foot = 12 inches). Since the board is
divided into 3 equal parts, each part will have the same length.
To find the length of each part, we'll divide the total length (95 inches)
by 3: 95 inches ÷ 3 = 31.67 inches
So each part of the board is approximately 31.67 inches long.

This answer looks almost correct, but not quite: the model converted 7 feet 9 inches to 95 inches instead of 93 inches. The correct answer is 31 inches (7 × 12 + 9 = 93 inches, and 93 ÷ 3 = 31). Similar logical questions are not answered correctly by the Llama 3 8B model.

In order for the Llama 3 8B model to improve its logical question answering capability, we want to fine-tune the model with data from the AQUA-RAT dataset. As we already mentioned, the AQUA-RAT dataset contains multiple choice options for the LLM to choose from. Because we don’t have the full answers for this dataset, we use the Llama 3.1 405B model to generate the verbal answer to the questions, and use that dataset to fine-tune the Llama 3 8B model.

Generate label data using Llama 3.1 405B

Because Llama 3.1 405B is the most capable of the Llama 3.1 collection of models, and because of its state-of-the-art math and general knowledge capabilities, we run direct inference of the questions in the AQUA-RAT dataset on Llama 3.1 405B using either SageMaker JumpStart or Amazon Bedrock. This helps us generate the answers we want to use to fine-tune the smaller Llama 3 8B models. In essence, we’re using Llama 3.1 405B as an alternative to human annotation to generate labels for the dataset. The following are example inference outputs from the 405B model:

Input:
<|begin_of_text|><|start_header_id|>user<|end_header_id|>
A board 7ft. 9 inches long is divided into 3 equal parts .
What is the length of each part??<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>

Output:
A nice simple problem! Let's break it down: The board is 7ft 9in long.
We can convert this to inches only: 7ft = 7 × 12 = 84in (since there are
12 inches in a foot). Add the 9 inches: 84in + 9in = 93in.
Now, the board is divided into 3 equal parts, so we need to divide the
total length (93in) by 3: 93in ÷ 3 = 31in. So, each part is 31 inches long.

We can clearly see that the 405B answer is logically and mathematically correct, and we can use this answer in the place of human annotation for fine-tuning purposes. We concatenate the answers from the 405B model for approximately 2,000 questions from the AQUA-RAT dataset, which becomes our training dataset. Our test dataset comes from extracted answers for approximately 4,000 more questions.

The training data is constructed as a JSON object as follows:

{"instruction": "The entrance fee for a fair is $5 for persons under the age of 18, and 20% more for persons older. Each ride at the fair costs $0.50. If Joe goes with her 6 years old twin brothers, and they each took 3 rides in total. How much money does Joe end up spending at the fair?", "response": "The entrance fee for the 2 brothers is 5 x 2 = 10 dollars. The entrance fee for Joe is 5 x 1.2 = 6 dollars. So the total entrance fee is 10 + 6 = 16 dollars. The cost of the rides is 3 x 0.5 x 3 = 4.5 dollars. So the total cost is 16 + 4.5 = 20.5 dollars. The answer is 20.5."}

{"instruction": "John found that the average of 15 numbers is 40. If 10 is added to each number then the mean of number is?", "response": "The average of 15 numbers is 40. So the sum of the 15 numbers is 15 x 40 = 600. If 10 is added to each number, then the sum becomes 600 + 15 x 10 = 600 + 150 = 750. The average of the new numbers is 750 / 15 = 50.n#### 50"}

{"instruction": "A person is traveling at 20 km/hr and reached his destiny in 2.5 hr then find the distance?", "response": "A simple problem!nnLet's use the formula:nnDistance = Speed u00d7 TimennGiven:nSpeed (v) = 20 km/hrnTime (t) = 2.5 hrnnPlugging in the values, we get:nnDistance = 20 km/hr u00d7 2.5 hrn= 50 kmnnSo, the person traveled a distance of 50 km."}

The instruction and response dataset are then used to fine-tune the Llama 3 8B model in SageMaker JumpStart.
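
To make the label-generation step concrete, the following is a minimal sketch that sends one AQUA-RAT question to Llama 3.1 405B Instruct through the Amazon Bedrock Converse API and appends the result to a JSON Lines training file; the model ID, Region, file name, and inference parameters are assumptions to verify for your account.

import json

import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-west-2")

# Model ID for Llama 3.1 405B Instruct on Amazon Bedrock (verify for your Region).
MODEL_ID = "meta.llama3-1-405b-instruct-v1:0"

question = (
    "A board 7ft. 9 inches long is divided into 3 equal parts. "
    "What is the length of each part?"
)

# Ask the 405B model for a fully worked answer to use as a label.
response = bedrock_runtime.converse(
    modelId=MODEL_ID,
    messages=[{"role": "user", "content": [{"text": question}]}],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)
answer = response["output"]["message"]["content"][0]["text"]

# Append the question/answer pair as one training record (JSON Lines).
with open("train.jsonl", "a") as training_file:
    training_file.write(json.dumps({"instruction": question, "response": answer}) + "\n")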

Fine-tune the Llama 3 8B model with the generated labels

For instructions on fine-tuning this model, refer to Fine-tune Code Llama on Amazon SageMaker JumpStart. We used an ml.g5.12xlarge instance with the instruction fine-tuning option for five epochs, and observed that the loss improved on both the training and test sets.
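
The following is a minimal sketch of that fine-tuning step with the SageMaker Python SDK; the JumpStart model ID, the training data location, and the hyperparameter names are assumptions based on the instruction fine-tuning option described above, so verify them against the current JumpStart documentation.

from sagemaker.jumpstart.estimator import JumpStartEstimator

# S3 prefix that holds the train.jsonl file built from the 405B labels (placeholder).
train_data_location = "s3://YOUR_BUCKET/aqua-rat-labels/"

estimator = JumpStartEstimator(
    model_id="meta-textgeneration-llama-3-8b",  # assumed JumpStart model ID
    environment={"accept_eula": "true"},
    instance_type="ml.g5.12xlarge",
)

# Instruction fine-tuning for five epochs, as described above.
estimator.set_hyperparameters(instruction_tuned="True", epoch="5")
estimator.fit({"training": train_data_location})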

When the fine-tuning was complete, we deployed the fine-tuned model for inference using SageMaker JumpStart on a g5.12xlarge instance (you can deploy on any other supported instance based on your throughput and latency requirements). Please refer to Meta’s End User License Agreement (EULA) for requirements when using synthetic-fine-tuned models for any use case.

Then we performed inference on another 1,000 unseen queries from the AQUA-RAT dataset. The following examples show a few of the queries, the correct answers, and the Llama 3 8B responses before and after fine-tuning.

Query: Tim has 350 pounds of cement in 100, 50, and 25 pound bags. He has an equal number of each size bag. How many bags of cement does Tim have?
Correct answer: 6
Llama 3 8B response before fine-tuning: We know there are 3 types of bags. 100, 50, and 25 pounds. He has an equal number of each. So if x is the number of each type of bag, then you know 100x + 50x + 25x = 350. This is because each type of bag has x number of bags, and there is 100 pounds of cement per bag in the first case, 50 in the second, and 25 in the third. 175x = 350. So if we divide both sides by 175 we get x = 2. The answer is 2 bags of each.
Llama 3 8B response after fine-tuning: Let’s assume Tim has X 100 pound bags, X 50 pound bags, and X 25 pound bags. He has in total 100X + 50X + 25X = 175X pounds of cement. Since he has 350 pounds total, we know 175X = 350, so X = 2. That means he has 2 bags of each type, so in total he has 2 + 2 + 2 = 6 bags. The answer is 6.

Query: The purchase price of an article is $48. In order to include 30% of cost for overhead and to provide $12 of net profit, what should the markup be?
Correct answer: $55
Llama 3 8B response before fine-tuning:
To solve this problem, we can start by identifying the cost of the article, which is the purchase price of $48. We are also given that the markup should be enough to include 30% of the cost for overhead and provide a net profit of $12.
Let’s break this down step by step:
1. Calculate the cost of the article: $48 (purchase price)
2. Calculate the overhead cost: 30% of the cost = 0.3 x $48 = $14.40
3. Calculate the total cost: cost + overhead cost = $48 + $14.40 = $62.40
4. Calculate the selling price: total cost + net profit = $62.40 + $12 = $74.40
Therefore, the markup should be $74.40 – $48 = $26.40. This means that the selling price should be $26.40 more than the purchase price to include 30% of the cost for overhead and provide a net profit of $12.
Llama 3 8B response after fine-tuning:
Step 1: Calculate the cost price of the article
The cost price of the article is $48.
## Step 2: Calculate the overhead cost as a percentage of the cost price
The overhead cost is 30% of the cost price, which is 0.3 * $48 = $14.40.
## Step 3: Calculate the selling price of the article
The selling price of the article is the cost price plus the overhead cost plus the net profit. The net profit is $12.
## Step 4: Calculate the selling price
The selling price is $48 + $14.40 + $12 = $74.40.
## Step 5: Calculate the markup as a percentage of the cost price
The markup is the difference between the selling price and the cost price, divided by the cost price. The markup is ($74.40 – $48) / $48 = $26.40 / $48 = 55%.
The final answer is: $55

Query: The slant height of a cone is 35 cm and radius of the base is 14cm, find the curved surface of the cone.
Correct answer: 1540
Llama 3 8B response before fine-tuning:
A nice problem!
The slant height of a cone is the distance from the center of the base to the vertex of the cone. Let’s call this distance “l”. We are given that the slant height is 35 cm, so: l = 35 cm
The radius of the base is given as 14 cm, so the diameter of the base is 28 cm. The formula to find the curved surface area of a cone is: Curved Surface Area = πr(l + r) where r is the radius of the base and l is the slant height. Plugging in the values, we get:
Curved Surface Area = π(14)(35 + 14) = π(14)(49) = 686π
Now, we can simplify this expression by multiplying the numbers:
Curved Surface Area = 686 × 3.14 (approximately) = 2153.24 cm²
So, the curved surface area of the cone is approximately 2153.24 cm².
Llama 3 8B response after fine-tuning:
The formula for the curved surface area of a cone is:
Curved Surface Area = πrl
Where r is the radius of the base and l is the slant height.
Given: r = 14 cm  l = 35 cm
Plugging in the values, we get:
Curved Surface Area = π(14)(35) = 14π(35) = 490π = 490 × 3.14 = 1537.4 cm²
Therefore, the curved surface area of the cone is approximately 1537.4 cm².

Query: The speed at which a man can row a boat in still water is 25 kmph. If he rows downstream, where the speed of current is 11 kmph, what time will he take to cover 80 metres?
Correct answer: 8 seconds
Llama 3 8B response before fine-tuning: The speed of the man in still water is 25 kmph. The speed of the current is 11 kmph. So the speed downstream is 25 + 11 = 36 kmph. 80 meters is 0.02 km. So the time taken is 0.02 / 36 = 0.0056 hours. The answer is 5.6 minutes.
Llama 3 8B response after fine-tuning:
Speed of the boat in still water = 25 kmph
Speed of the current = 11 kmph
Speed downstream = 25 + 11 = 36 kmph
Speed downstream in m/s = (36 * 5/18) = 10 m/s
Distance = 80 m
Time = Distance / Speed = 80 / 10 = 8 seconds.

We can qualitatively see that the answers for the math problems and logical questions are improved after fine-tuning the model. We will perform additional quantitative metric evaluation in future posts. This exercise demonstrates how you can use the Llama 3.1 405B model to efficiently generate datasets in an accelerated fashion and then use those datasets to significantly improve the task-specific capabilities of smaller models.

Conclusion

In this post, we showed how you can use the new Llama 3.1 405B model to synthesize and generate data labels to improve the performance of a much smaller model (Llama 3 8B in this case). We also showed that the responses generated by the fine-tuned model are much improved compared to the model without fine-tuning, and provided the code notebook that you can use to run and test the solution.

As a next step, we encourage you to use this idea along with the Llama-3.1 405B model in your use case to generate labels or even unlabeled data that can then be used by a smaller model downstream to help solve your use case.


About the Authors

Sebastian Bustillo is an Enterprise Solutions Architect at AWS. He focuses on AI/ML technologies with a profound passion for generative AI and compute accelerators. At AWS, he helps customers unlock business value through cloud technologies and AI/ML. When he’s not at work, he enjoys brewing a perfect cup of specialty coffee and riding his MTB.

Dr. Farooq Sabir is a Senior Artificial Intelligence and Machine Learning Specialist Solutions Architect at AWS. He holds PhD and MS degrees in Electrical Engineering from the University of Texas at Austin and an MS in Computer Science from Georgia Institute of Technology. He has over 15 years of work experience and also likes to teach and mentor college students. At AWS, he helps customers formulate and solve their business problems in data science, machine learning, computer vision, artificial intelligence, numerical optimization, and related domains. Based in Dallas, Texas, he and his family love to travel and go on long road trips.

Dr. Natarajan Chennimalai Kumar is a Principal Solutions Architect in the 3rd Party Model Provider team at AWS, working closely with the Llama partner engineering team at Meta to enable AWS customers to use Meta’s Llama models. He holds a PhD from the University of Illinois at Urbana-Champaign. He is based in the Bay Area in California. Outside of work, he enjoys watching shows with his kids, playing tennis, and traveling with his family.

Madhur Prashant is an AI and ML Solutions Architect at Amazon Web Services. He is passionate about the intersection of human thinking and generative AI. His interests lie in generative AI, specifically building solutions that are helpful and harmless, and most of all optimal for customers. Outside of work, he loves doing yoga, writing blogs, hiking, spending time with his twin, and playing the guitar.

Dr. Nikita Ivkin is a Senior Applied Scientist for Amazon SageMaker. He focuses on inference acceleration for foundation models and scalable ML algorithms in general. His research interests are in the area of inference acceleration, streaming algorithms, and federated learning, and he has published in a variety of machine learning and computer science venues such as NeurIPS, ICML, ICLR, STOC, PODS, and others.

Supriya Puragundla is a Senior Solutions Architect at AWS. She has over 15 years of IT experience in software development, design, and architecture. She helps key customer accounts on their data, generative AI, and AI/ML journeys. She is passionate about data-driven AI and the area of depth in ML and generative AI.

Dr. Xin Huang is a Senior Applied Scientist for Amazon SageMaker JumpStart and Amazon SageMaker built-in algorithms. He focuses on developing scalable machine learning algorithms. His research interests are in the area of natural language processing, explainable deep learning on tabular data, and robust analysis of non-parametric space-time clustering. He has published many papers in ACL, ICDM, and KDD conferences, and Royal Statistical Society: Series A.

Dr. Ashish Khetan is a Senior Applied Scientist with Amazon SageMaker JumpStart and helps develop machine learning algorithms. He got his PhD from University of Illinois Urbana-Champaign. He is an active researcher in machine learning and statistical inference, and has published many papers in NeurIPS, ICML, ICLR, JMLR, ACL, and EMNLP conferences.

Karl Albertsen leads the product management and partnership teams for Amazon SageMaker. He is focused on making AI accessible, cost-effective, and high-performing for business applications.

Christopher Whitten is an SDE with the SageMaker JumpStart team leading model onboarding and deeper integration with SageMaker services. Chris is passionate about accelerating the ubiquity of AI in practical business applications. His technical interests include agentic workflows and MLOps.

Hemant Singh is an Applied Scientist with experience in Amazon SageMaker JumpStart. He got his master’s from Courant Institute of Mathematical Sciences and B.Tech from IIT Delhi. He has experience in working on a diverse range of machine learning problems within the domain of natural language processing, computer vision, and time series analysis.

Evan Kravitz is a software engineer at Amazon Web Services, working on SageMaker JumpStart. He is interested in the confluence of machine learning with cloud computing. Evan received his undergraduate degree from Cornell University and master’s degree from the University of California, Berkeley. In 2021, he presented a paper on adversarial neural networks at the ICLR conference. In his free time, Evan enjoys cooking, traveling, and going on runs in New York City.


Llama 3.1 models are now available in Amazon SageMaker JumpStart

Today, we are excited to announce that the state-of-the-art Llama 3.1 collection of multilingual large language models (LLMs), which includes pre-trained and instruction tuned generative AI models in 8B, 70B, and 405B sizes, is available through Amazon SageMaker JumpStart to deploy for inference. Llama is a publicly accessible LLM designed for developers, researchers, and businesses to build, experiment, and responsibly scale their generative artificial intelligence (AI) ideas. In this post, we walk through how to discover and deploy Llama 3.1 models using SageMaker JumpStart.

Overview of Llama 3.1

The Llama 3.1 multilingual LLMs are a collection of pre-trained and instruction tuned generative models in 8B, 70B, and 405B sizes (text in/text and code out). All models support long context length (128,000) and are optimized for inference with support for grouped query attention (GQA). The Llama 3.1 instruction tuned text-only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the publicly available chat models on common industry benchmarks.

At its core, Llama 3.1 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. Architecturally, the core LLM for Llama 3 and Llama 3.1 is the same dense architecture.

Llama 3.1 also offers instruct variants, and the instruct model is fine-tuned for tool use. The model has been trained to generate calls for a few specific tools for capabilities like search, image generation, code execution, and mathematical reasoning. In addition, the model supports zero-shot tool use.

The responsible use guide from Meta can assist you in performing additional fine-tuning that may be necessary to customize and optimize the models with appropriate safety mitigations.

Overview of SageMaker JumpStart

SageMaker JumpStart offers access to a broad selection of publicly available foundation models (FMs). These pre-trained models serve as powerful starting points that can be deeply customized to address specific use cases. You can now use state-of-the-art model architectures, such as language models, computer vision models, and more, without having to build them from scratch.

With SageMaker JumpStart, you can deploy models in a secure environment. The models are provisioned on dedicated SageMaker Inference instances, including AWS Trainium and AWS Inferentia powered instances, and are isolated within your virtual private cloud (VPC). This enforces data security and compliance, because the models operate under your own VPC controls, rather than in a shared public environment. After deploying an FM, you can further customize and fine-tune it using the extensive capabilities of Amazon SageMaker, including SageMaker Inference for deploying models and container logs for improved observability. With SageMaker, you can streamline the entire model deployment process.

Discover Llama 3.1 models in SageMaker JumpStart

SageMaker JumpStart provides FMs through two primary interfaces: Amazon SageMaker Studio and the SageMaker Python SDK. This provides multiple options to discover and use hundreds of models for your specific use case.

SageMaker Studio is a comprehensive integrated development environment (IDE) that offers a unified, web-based interface for performing all aspects of the machine learning (ML) development lifecycle. From preparing data to building, training, and deploying models, SageMaker Studio provides purpose-built tools to streamline the entire process. In SageMaker Studio, you can access SageMaker JumpStart to discover and explore the extensive catalog of FMs available for deployment to inference capabilities on SageMaker Inference.

Alternatively, you can use the SageMaker Python SDK to programmatically access and utilize SageMaker JumpStart models. This approach allows for greater flexibility and integration with existing AI and ML workflows and pipelines. By providing multiple access points, SageMaker JumpStart helps you seamlessly incorporate pre-trained models into your AI and ML development efforts, regardless of your preferred interface or workflow.

Deploy Llama 3.1 models for inference using SageMaker JumpStart

On the SageMaker JumpStart landing page, you can browse for solutions, models, notebooks, and other resources. You can find the Llama 3.1 models in the Foundation Models: Text Generation carousel.

If you don’t see the Llama 3.1 models, update your SageMaker Studio version by shutting down and restarting. For more information about version updates, refer to Shut down and Update Studio Classic Apps.

The following table lists the Llama 3.1 models you can access in SageMaker JumpStart.

Model Name Description Key Capabilities
Meta-Llama-3.1-8B Llama-3.1-8B is a state-of-the-art publicly accessible model that excels at language nuances, contextual understanding, and complex tasks like translation and dialogue generation in 8 languages. Top capabilities include multilingual support and stronger reasoning capabilities, enabling advanced use cases like long-form text summarization and multilingual conversational agents.
Meta-Llama-3.1-8B-Instruct Llama-3.1-8B-Instruct is an update to Meta-Llama-3-8B-Instruct, an assistant-like chat model, that includes an expanded 128K context length, multilinguality, and improved reasoning capabilities. Top capabilities include the ability to follow instructions and tasks, improved reasoning and understanding of nuances and context, and multilingual translation.
Meta-Llama-3.1-70B Llama-3.1-70B is a state-of-the-art publicly accessible model that excels at language nuances, contextual understanding, and complex tasks like translation and dialogue generation in 8 languages. Top capabilities include multilingual support and stronger reasoning capabilities, enabling advanced use cases like long-form text summarization, and multilingual conversational agents.
Meta-Llama-3.1-70B-Instruct Llama-3.1-70B-Instruct is an update to Llama-3-70B-Instruct, an assistant-like chat model, that includes an expanded 128K context length, multilinguality, and improved reasoning capabilities. Top capabilities include the ability to follow instructions and tasks, improved reasoning and understanding of nuances and context, and multilingual translation.
Meta-Llama-3.1-405B Llama-3.1-405B is the largest, most capable publicly available FM, unlocking new applications and innovations, and paving the way for groundbreaking technologies like synthetic data generation and model distillation. Llama-3.1-405B unlocks innovation with capabilities like general knowledge, steerability, math, tool use, and multilingual translation, enabling new possibilities for innovation and development.
Meta-Llama-3.1-405B-Instruct Llama-3.1-405B-Instruct is the largest and most powerful of the Llama 3.1 Instruct models. It’s a highly advanced model for conversational inference and reasoning, synthetic data generation, and a base to do specialized continual pre-training or fine-tuning on a specific domain. Llama-3.1-405B unlocks innovation with capabilities like general knowledge, steerability, math, tool use, and multilingual translation, enabling new possibilities for innovation and development.
Meta-Llama-3.1-405B-FP8 This is FP8 Quantized Version of Llama-3.1-405B. Llama-3.1-405B unlocks innovation with capabilities like general knowledge, steerability, math, tool use, and multilingual translation, enabling new possibilities for innovation and development.
Meta-Llama-3.1-405B-Instruct-FP8 This is FP8 Quantized Version of Llama-3.1-405B-Instruct. Llama-3.1-405B unlocks innovation with capabilities like general knowledge, steerability, math, tool use, and multilingual translation, enabling new possibilities for innovation and development.

You can choose the model card to view details about the model such as license, data used to train, and how to use. You can also find two buttons, Deploy and Open Notebook, which help you use the model.

When you choose either button, a pop-up window will show the End-User License Agreement (EULA) and acceptable use policy for you to accept.

Upon acceptance, you will proceed to the next step to use the model.

Deploy Llama 3.1 models for inference using the Python SDK

When you choose Deploy and accept the terms, model deployment will start. Alternatively, you can deploy through the example notebook by choosing Open Notebook. The notebook provides end-to-end guidance on how to deploy the model for inference and clean up resources.

To deploy using a notebook, you start by selecting an appropriate model, specified by the model_id. You can deploy any of the selected models on SageMaker.

You can deploy a Llama 3.1 405B model in FP8 using SageMaker JumpStart with the following SageMaker Python SDK code:

from sagemaker.jumpstart.model import JumpStartModel

model = JumpStartModel(model_id = "meta-llama-3-1-405b-fp8")
predictor = model.deploy(accept_eula=accept_eula)

This deploys the model on SageMaker with default configurations, including default instance type and default VPC configurations. You can change these configurations by specifying non-default values in JumpStartModel. To successfully deploy the model, you must manually set accept_eula=True as a deploy method argument. After it’s deployed, you can run inference against the deployed endpoint through the SageMaker predictor:

payload = {
    "inputs": "The color of the sky is blue but sometimes it can also be ",
    "parameters": {"max_new_tokens":256, "top_p":0.9, "temperature":0.6}
}
response = predictor.predict(payload)
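
As noted earlier, you can change the default configurations by passing non-default values to JumpStartModel. The following is a minimal sketch that overrides the default instance type; the model ID and instance type shown are illustrative.

from sagemaker.jumpstart.model import JumpStartModel

# Override the default instance type when creating the model.
model = JumpStartModel(
    model_id="meta-llama-3-1-8b-instruct",
    instance_type="ml.g5.12xlarge",
)
predictor = model.deploy(accept_eula=True)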

The following table lists all the Llama models available in SageMaker JumpStart along with the model_ids, default instance types, and the maximum number of total tokens (sum of number of input tokens and number of generated tokens) supported for each of these models. For increased context length, customers can modify the default instance type in the SageMaker JumpStart UI.

Model Name Model ID Default instance type Supported instance types
Meta-Llama-3.1-8B meta-llama-3-1-8b ml.g5.4xlarge (2,000 context length) ml.g5.4xlarge, ml.g5.8xlarge, ml.g5.12xlarge, ml.g5.24xlarge, ml.g5.48xlarge, ml.g6.12xlarge, ml.p4d.24xlarge, ml.p5.48xlarge
Meta-Llama-3.1-8B-Instruct meta-llama-3-1-8b-instruct ml.g5.4xlarge (2,000 context length) Same as Llama-3.1-8B
Meta-Llama-3.1-70B meta-llama-3-1-70b ml.p4d.24xlarge (12,000 context length on 8 A100s) ml.g5.48xlarge, ml.g6.48xlarge, ml.p4d.24xlarge, ml.p5.48xlarge
Meta-Llama-3.1-70B-Instruct meta-llama-3-1-70b-instruct ml.p4d.24xlarge (12,000 context length on 8 A100s) Same as Llama-3.1-70B
Meta-Llama-3.1-405B meta-llama-3-1-405b ml.p5.48xlarge 2x ml.p5.48xlarge
Meta-Llama-3.1-405B-Instruct meta-llama-3-1-405b-instruct ml.p5.48xlarge 2x ml.p5.48xlarge
Meta-Llama-3.1-405B-FP8 meta-llama-3-1-405b-fp8 ml.p5.48xlarge (8,000 context length on 8 H100s) ml.p5.48xlarge
Meta-Llama-3.1-405B-Instruct-FP8 meta-llama-3-1-405b-instruct-fp8 ml.p5.48xlarge (8,000 context length on 8 H100s) ml.p5.48xlarge

Inference and example prompts for Llama-3.1-405B-Instruct

You can use Llama models for text completion on any piece of text. Through text generation, you can perform a variety of tasks, such as question answering, language translation, and sentiment analysis. The input payload to the endpoint looks like the following code:

<|begin_of_text|><|start_header_id|>[ROLE_1]<|end_header_id|>

[ROLE_1_MESSAGE]<|eot_id|><|start_header_id|>[ROLE_2]<|end_header_id|>

[ROLE_2_MESSAGE]<|eot_id|><|start_header_id|>[ROLE_3]<|end_header_id|>

...

The roles should alternate between user and assistant while optionally starting with a system role.
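
To avoid assembling these special tokens by hand, you can wrap the template in a small helper. The following is a minimal sketch based on the format above; format_llama3_prompt is a hypothetical helper, not part of the SageMaker SDK.

def format_llama3_prompt(messages):
    """Build a Llama 3.1 prompt from a list of {"role": ..., "content": ...} dicts."""
    prompt = "<|begin_of_text|>"
    for message in messages:
        prompt += f"<|start_header_id|>{message['role']}<|end_header_id|>\n\n"
        prompt += f"{message['content']}<|eot_id|>"
    # Cue the model to generate the assistant's next turn
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

payload = {
    "inputs": format_llama3_prompt([
        {"role": "user", "content": "I am going to Paris, what should I see?"},
    ]),
    "parameters": {"max_new_tokens": 256, "top_p": 0.9, "temperature": 0.6},
}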

In the next example, we show how to use Llama Instruct models within a conversational context, where a multi-turn chat is occurring between a user and an assistant. The first few rounds of the conversation are provided as input to the model:

<|begin_of_text|><|start_header_id|>user<|end_header_id|>

I am going to Paris, what should I see?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris: 1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city. 2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa. 3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows. These are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world.<|eot_id|><|start_header_id|>user<|end_header_id|> What is so great about #1?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

This produces the following response:

The Eiffel Tower! It's an iconic symbol of Paris and one of the most recognizable landmarks in the world. Here are some reasons why it's so great:

1. **Engineering marvel**: When it was built for the 1889 World's Fair, the Eiffel Tower was a groundbreaking feat of engineering. It was the tallest structure in the world at the time, standing at 324 meters (1,063 feet).
2. **Breathtaking views**: The Eiffel Tower offers stunning views of the city from its observation decks. On a clear day, you can see for miles in every direction, taking in the beautiful Parisian landscape.
3. **Romantic atmosphere**: The Eiffel Tower is often associated with romance and love. It's a popular spot for proposals, weddings, and romantic getaways.
4. **Historical significance**: The Eiffel Tower has played a significant role in history, serving as a communication tower during World War I and a symbol of French resistance during World War II.
5. **Iconic design**: The Eiffel Tower's lattice-like design is instantly recognizable and has been imitated and parodied countless times in art, architecture, and popular culture.

Llama Guard

You can also use the Llama Guard model to help add guardrails for these models. Llama Guard provides input and output guardrails for LLM deployments. Llama Guard is a publicly available model that performs competitively on common open benchmarks and provides developers with a pre-trained model to help defend against generating potentially risky outputs. This model has been trained on a mix of publicly available datasets to enable detection of common types of potentially risky or violating content that may be relevant to a number of developer use cases.

You can use Llama Guard as a supplemental tool for developers to integrate into their own mitigation strategies, such as for chatbots, content moderation, customer service, social media monitoring, and education. By passing user-generated content through Llama Guard before publishing or responding to it, developers can flag unsafe or inappropriate language and take action to maintain a safe and respectful environment. Llama Guard is available on SageMaker JumpStart.

Conclusion

In this post, we explored how SageMaker JumpStart empowers data scientists and ML engineers to discover, access, and run a wide range of pre-trained FMs for inference, including Meta’s most advanced and capable models to date. Llama 3.1 models are available today in SageMaker JumpStart initially in the US East (N. Virginia), US East (Ohio), and US West (Oregon) AWS Regions. Get started with SageMaker JumpStart and Llama 3.1 models today.

Resources

For additional resources, refer to the following:


About the Authors

Saurabh Trikande is a Senior Product Manager for Amazon SageMaker Inference. He is passionate about working with customers and is motivated by the goal of democratizing machine learning. He focuses on core challenges related to deploying complex ML applications, multi-tenant ML models, cost optimizations, and making deployment of deep learning models more accessible. In his spare time, Saurabh enjoys hiking, learning about innovative technologies, following TechCrunch, and spending time with his family.

James Park is a Solutions Architect at Amazon Web Services. He works with Amazon.com to design, build, and deploy technology solutions on AWS, and has a particular interest in AI and machine learning. In his spare time he enjoys seeking out new cultures, new experiences, and staying up to date with the latest technology trends. You can find him on LinkedIn.

Dr. Kyle Ulrich is an Applied Scientist with the Amazon SageMaker built-in algorithms team. His research interests include scalable machine learning algorithms, computer vision, time series, Bayesian non-parametrics, and Gaussian processes. His PhD is from Duke University and he has published papers in NeurIPS, Cell, and Neuron.

Jonathan Guinegagne is a Senior Software Engineer with Amazon SageMaker JumpStart at AWS. He got his master’s degree from Columbia University. His interests span machine learning, distributed systems, and cloud computing, as well as democratizing the use of AI. Jonathan is originally from France and now lives in Brooklyn, NY.

Christopher Whitten is a software developer on the JumpStart team. He helps scale model selection and integrate models with other SageMaker services. Chris is passionate about accelerating the ubiquity of AI across a variety of business domains.

Read More

Intelligent document processing using Amazon Bedrock and Anthropic Claude


Generative artificial intelligence (AI) not only empowers innovation through ideation, content creation, and enhanced customer service, but also streamlines operations and boosts productivity across various domains. To effectively harness this transformative technology, Amazon Bedrock offers a fully managed service that integrates high-performing foundation models (FMs) from leading AI companies, such as AI21 Labs, Anthropic, Cohere, Meta, Stability AI, Mistral AI, and Amazon. By providing access to these advanced models through a single API and supporting the development of generative AI applications with an emphasis on security, privacy, and responsible AI, Amazon Bedrock enables you to use AI to explore new avenues for innovation and improve overall offerings.

Enterprise customers can unlock significant value by harnessing the power of intelligent document processing (IDP) augmented with generative AI. By infusing IDP solutions with generative AI capabilities, organizations can revolutionize their document processing workflows, achieving exceptional levels of automation and reliability. This combination enables advanced document understanding, highly effective structured data extraction, automated document classification, and seamless information retrieval from unstructured text. With these capabilities, organizations can achieve scalable, efficient, and high-value document processing that drives business transformation and competitiveness, ultimately leading to improved productivity, reduced costs, and enhanced decision-making.

In this post, we show how to develop an IDP solution using Anthropic Claude 3 Sonnet on Amazon Bedrock. We demonstrate how to extract data from a scanned document and insert it into a database.

The Anthropic Claude 3 Sonnet model is optimized for speed and efficiency, making it an excellent choice for intelligent tasks—particularly for enterprise workloads. It also possesses sophisticated vision capabilities, demonstrating a strong aptitude for understanding a wide range of visual formats, including photos, charts, graphs, and technical diagrams. Although we demonstrate this solution using the Anthropic Claude 3 Sonnet model, you can alternatively use the Haiku and Opus models if your use case requires them.

Solution overview

The proposed solution uses Amazon Bedrock and the powerful Anthropic Claude 3 Sonnet model to enable IDP capabilities. The architecture consists of several AWS services seamlessly integrated with Amazon Bedrock, enabling efficient and accurate extraction of data from scanned documents.

The following diagram illustrates our solution architecture.

The solution consists of the following steps:

  1. The process begins with scanned documents being uploaded and stored in an Amazon Simple Storage Service (Amazon S3) bucket, which invokes an S3 Event Notification on object upload.
  2. This event invokes an AWS Lambda function, responsible for invoking the Anthropic Claude 3 Sonnet model on Amazon Bedrock.
  3. The Anthropic Claude 3 Sonnet model, with its advanced multimodal capabilities, processes the scanned documents and extracts relevant data in a structured JSON format.
  4. The extracted data from the Anthropic Claude 3 model is sent to an Amazon Simple Queue Service (Amazon SQS) queue. Amazon SQS acts as a buffer, allowing components to send and receive messages reliably without being directly coupled, providing scalability and fault tolerance in the system.
  5. Another Lambda function consumes the messages from the SQS queue, parses the JSON data, and stores the extracted key-value pairs in an Amazon DynamoDB table for retrieval and further processing.

This serverless architecture takes advantage of the scalability and cost-effectiveness of AWS services while harnessing the cutting-edge intelligence of Anthropic Claude 3 Sonnet. By combining the robust infrastructure of AWS with Anthropic’s FMs, this solution enables organizations to streamline their document processing workflows, extract valuable insights, and enhance overall operational efficiency.

The solution uses the following services and features:

  • Amazon Bedrock is a fully managed service that provides access to large language models (LLMs), allowing developers to build and deploy their own customized AI applications.
  • The Anthropic Claude 3 family offers a versatile range of models tailored to meet diverse needs. With three options—Opus, Sonnet, and Haiku—you can choose the perfect balance of intelligence, speed, and cost. These models excel at understanding complex enterprise content, including charts, graphs, technical diagrams, and reports.
  • Amazon DynamoDB is a fully managed, serverless, NoSQL database service.
  • AWS Lambda is a serverless computing service that allows you to run code without provisioning or managing servers.
  • Amazon SQS is a fully managed message queuing service.
  • Amazon S3 is a highly scalable, durable, and secure object storage service.

In this solution, we use the generative AI capabilities in Amazon Bedrock to efficiently extract data. At the time of writing, Anthropic Claude 3 Sonnet accepts only images as document input. The supported file types are GIF, JPEG, PNG, and WebP. You can choose to save images during the scanning process or convert the PDF to images.

You can also enhance this solution by implementing human-in-the-loop and model evaluation features. The goal of this post is to demonstrate how you can build an IDP solution using Amazon Bedrock. To use it as a production-scale solution, take additional considerations into account, such as testing for edge case scenarios, better exception handling, additional prompting techniques, model fine-tuning, model evaluation, throughput requirements, the number of concurrent requests to be supported, and cost and latency implications.

Prerequisites

Make sure you have the necessary prerequisites in place before you proceed with this solution. For this post, we use the us-east-1 AWS Region. For details on available Regions, see Amazon Bedrock endpoints and quotas.

Use case and dataset

For our example use case, let’s look at a state agency responsible for issuing birth certificates. The agency may receive birth certificate applications through various methods, such as online applications, forms completed at a physical location, and mailed-in completed paper applications. Today, most agencies spend a considerable amount of time and resources to manually extract the application details. The process begins with scanning the application forms, manually extracting the details, and then entering them into an application that eventually stores the data into a database. This process is time-consuming, inefficient, not scalable, and error-prone. Additionally, it adds complexity if the application form is in a different language (such as Spanish).

For this demonstration, we use sample scanned images of birth certificate application forms. These forms don’t contain any real personal data. Two examples are provided: one in English (handwritten) and another in Spanish (printed). Save these images as .jpeg files to your computer. You need them later for testing the solution.

Create an S3 bucket

On the Amazon S3 console, create a new bucket with a unique name (for example, bedrock-claude3-idp-{random characters to make it globally unique}) and leave the other settings as default. Within the bucket, create a folder named images and a sub-folder named birth_certificates.
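
If you prefer to script this step, the following is a minimal boto3 sketch; the bucket name is a placeholder that must be globally unique, and the “folders” are simply key prefixes in Amazon S3.

import boto3

s3 = boto3.client('s3')
bucket_name = 'bedrock-claude3-idp-example123'  # placeholder; make it globally unique

# In us-east-1, create_bucket is called without a LocationConstraint
s3.create_bucket(Bucket=bucket_name)

# S3 has no real folders; a zero-byte object with a trailing slash makes the
# prefix visible as a folder in the console
s3.put_object(Bucket=bucket_name, Key='images/birth_certificates/')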

Create an SQS queue

On the Amazon SQS console, create a queue with the Standard queue type, provide a name (for example, bedrock-idp-extracted-data), and leave the other settings as default.
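
The same step scripted with boto3, as a minimal sketch using the example queue name:

import boto3

sqs = boto3.client('sqs')

# Standard queue with default settings, matching the console steps above
queue_url = sqs.create_queue(QueueName='bedrock-idp-extracted-data')['QueueUrl']
print(queue_url)  # you will need this URL when configuring the Lambda function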

Create a Lambda function to invoke the Amazon Bedrock model

On the Lambda console, create a function (for example, invoke_bedrock_claude3), choose Python 3.12 for the runtime, and leave the remaining settings as default. Later, you configure this function to be invoked every time a new image is uploaded into the S3 bucket. You can download the entire Lambda function code from invoke_bedrock_claude3.py. Replace the contents of the lambda_function.py file with the code from the downloaded file. Make sure to substitute {SQS URL} with the URL of the SQS queue you created earlier, then choose Deploy.

The Lambda function should perform the following actions:

import base64
import json
import boto3

s3 = boto3.client('s3')
sqs = boto3.client('sqs')
bedrock = boto3.client('bedrock-runtime', region_name='us-east-1')
QUEUE_URL = "{SQS URL}"  # replace with the URL of the SQS queue you created earlier
MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"

The following code gets the image from the S3 bucket using the get_object method and converts it to base64 data:

image_data = s3.get_object(Bucket=bucket_name, Key=object_key)['Body'].read()
base64_image = base64.b64encode(image_data).decode('utf-8')

Prompt engineering is a critical factor in unlocking the full potential of generative AI applications like IDP. Crafting well-structured prompts makes sure that the AI system’s outputs are accurate, relevant, and aligned with your objectives, while mitigating potential risks.

With the Anthropic Claude 3 model integrated into the Amazon Bedrock IDP solution, you can use the model’s impressive visual understanding capabilities to effortlessly extract data from documents. Simply provide the image or document as input, and Anthropic Claude 3 will comprehend its contents, seamlessly extracting the desired information and presenting it in a human-readable format. All Anthropic Claude 3 models are capable of understanding non-English languages such as Spanish, Japanese, and French. In this particular use case, we demonstrate how to translate Spanish application forms into English by providing the appropriate prompt instructions.

However, LLMs like Anthropic Claude 3 can exhibit variability in their response formats. To achieve consistent and structured output, you can tailor your prompts to instruct the model to return the extracted data in a specific format, such as JSON with predefined keys. This approach enhances the interoperability of the model’s output with downstream applications and streamlines data processing workflows.

The following is the prompt with the specific JSON output format:

prompt = """
This image shows a birth certificate application form. 
Please precisely copy all the relevant information from the form.
Leave the field blank if there is no information in corresponding field.
If the image is not a birth certificate application form, simply return an empty JSON object. 
If the application form is not filled, leave the fees attributes blank. 
Translate any non-English text to English. 
Organize and return the extracted data in a JSON format with the following keys:
{
    "applicantDetails":{
        "applicantName": "",
        "dayPhoneNumber": "",
        "address": "",
        "city": "",
        "state": "",
        "zipCode": "",
        "email":""
    },
    "mailingAddress":{
        "mailingAddressApplicantName": "",
        "mailingAddress": "",
        "mailingAddressCity": "",
        "mailingAddressState": "",
        "mailingAddressZipCode": ""
    },
    "relationToApplicant":[""],
    "purposeOfRequest": "",
    
    "BirthCertificateDetails":
    {
        "nameOnBirthCertificate": "",
        "dateOfBirth": "",
        "sex": "",
        "cityOfBirth": "",
        "countyOfBirth": "",
        "mothersMaidenName": "",
        "fathersName": "",
        "mothersPlaceOfBirth": "",
        "fathersPlaceOfBirth": "",
        "parentsMarriedAtBirth": "",
        "numberOfChildrenBornInSCToMother": "",
        "diffNameAtBirth":""
    },
    "fees":{
        "searchFee": "",
        "eachAdditionalCopy": "",
        "expediteFee": "",
        "totalFees": ""
    } 
  }
""" 

Invoke the Anthropic Claude 3 Sonnet model using the Amazon Bedrock API. Pass the prompt and the base64 image data as parameters:

def invoke_claude_3_multimodal(prompt, base64_image_data):
    request_body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 2048,
        "messages": [
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": prompt,
                    },
                    {
                        "type": "image",
                        "source": {
                            "type": "base64",
                            "media_type": "image/png",
                            "data": base64_image_data,
                        },
                    },
                ],
            }
        ],
    }

    try:
        response = bedrock.invoke_model(modelId=MODEL_ID, body=json.dumps(request_body))
        return json.loads(response['body'].read())
    except bedrock.exceptions.ClientError as err:
        print(f"Couldn't invoke Claude 3 Sonnet. Here's why: {err.response['Error']['Code']}: {err.response['Error']['Message']}")
        raise

Send the Amazon Bedrock API response to the SQS queue using the send_message method:

def send_message_to_sqs(message_body):
    try:
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(message_body))
    except sqs.exceptions.ClientError as e:
        print(f"Error sending message to SQS: {e.response['Error']['Code']}: {e.response['Error']['Message']}")

Next, modify the IAM role of the Lambda function to grant the required permissions:

  1. On the Lambda console, navigate to the function.
  2. On the Configuration tab, choose Permissions in the left pane.
  3. Choose the IAM role (for example, invoke_bedrock_claude3-role-{random chars}).

This opens the role in a new tab.

  1. In the Permissions policies section, choose Add permissions and Create inline policy.
  2. On the Create policy page, switch to the JSON tab in the policy editor.
  3. Enter the policy from the following code block, replacing {AWS Account ID} with your AWS account ID and {S3 Bucket Name} with your S3 bucket name.
  4. Choose Next.
  5. Enter a name for the policy (for example, invoke_bedrock_claude3-role-policy), and choose Create policy.
{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "bedrock:InvokeModel",
        "Resource": "arn:aws:bedrock:us-east-1::foundation-model/*"
    }, {
        "Effect": "Allow",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::{S3 Bucket Name}/*"
    }, {
        "Effect": "Allow",
        "Action": "sqs:SendMessage",
        "Resource": "arn:aws:sqs:us-east-1:{AWS Account ID}:bedrock-idp-extracted-data"
    }]
}

The policy will grant the following permissions:

  • Invoke Amazon Bedrock FMs
  • Retrieve objects from the bedrock-claude3-idp... S3 bucket
  • Send messages to the bedrock-idp-extracted-data SQS queue for processing the extracted data

Additionally, increase the Lambda function’s timeout to 2 minutes. By default, it’s set to 3 seconds.
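
You can make the same change with the AWS SDK; the following is a minimal sketch using the example function name:

import boto3

lambda_client = boto3.client('lambda')

# Raise the timeout from the 3-second default to 2 minutes
lambda_client.update_function_configuration(
    FunctionName='invoke_bedrock_claude3',
    Timeout=120,
)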

Create an S3 Event Notification

To create an S3 Event Notification, complete the following steps (a scripted alternative is shown after the list):

  1. On the Amazon S3 console, open the bedrock-claude3-idp... S3 bucket.
  2. Navigate to Properties, and in the Event notifications section, create an event notification.
  3. Enter a name for Event name (for example, bedrock-claude3-idp-event-notification).
  4. Enter images/birth_certificates/ for the prefix.
  5. For Event Type, select Put in the Object creation section.
  6. For Destination, select Lambda function and choose invoke_bedrock_claude3.
  7. Choose Save changes.
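
The following minimal boto3 sketch configures the same notification; the function ARN is a placeholder to replace with the ARN of invoke_bedrock_claude3, and it assumes Amazon S3 is already allowed to invoke the function (the console adds that resource-based permission for you).

import boto3

s3 = boto3.client('s3')

s3.put_bucket_notification_configuration(
    Bucket='bedrock-claude3-idp-example123',  # placeholder; use your bucket name
    NotificationConfiguration={
        'LambdaFunctionConfigurations': [{
            'Id': 'bedrock-claude3-idp-event-notification',
            'LambdaFunctionArn': 'arn:aws:lambda:us-east-1:{AWS Account ID}:function:invoke_bedrock_claude3',
            'Events': ['s3:ObjectCreated:Put'],
            'Filter': {'Key': {'FilterRules': [
                {'Name': 'prefix', 'Value': 'images/birth_certificates/'},
            ]}},
        }],
    },
)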

Create a DynamoDB table

To store the extracted data in DynamoDB, you need to create a table. On the DynamoDB console, create a table called birth_certificates with Id as the partition key, and keep the remaining settings as default.
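
The equivalent table definition with boto3 is shown in the following minimal sketch; the on-demand billing mode is an assumption, so adjust it to match your capacity needs.

import boto3

dynamodb = boto3.client('dynamodb')

dynamodb.create_table(
    TableName='birth_certificates',
    KeySchema=[{'AttributeName': 'Id', 'KeyType': 'HASH'}],  # partition key
    AttributeDefinitions=[{'AttributeName': 'Id', 'AttributeType': 'S'}],
    BillingMode='PAY_PER_REQUEST',  # assumption; choose the mode that fits your workload
)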

Create a Lambda function to insert records into the DynamoDB table

On the Lambda console, create a Lambda function (for example, insert_into_dynamodb), choose Python 3.12 for the runtime, and leave the remaining settings as default. You can download the entire Lambda function code from insert_into_dynamodb.py. Replace the contents of the lambda_function.py file with the code from the downloaded file and choose Deploy.

The Lambda function should perform the following actions:

Get the message from the SQS queue that contains the response from the Anthropic Claude 3 Sonnet model:

# The SQS message body is the Bedrock response; the model's JSON output is the
# text of the first content block
data = json.loads(event['Records'][0]['body'])['content'][0]['text']
event_id = event['Records'][0]['messageId']
data = json.loads(data)

Create objects representing DynamoDB and its table:

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('birth_certificates')

Get the key objects from the JSON data:

applicant_details = data.get('applicantDetails', {})
mailing_address = data.get('mailingAddress', {})
relation_to_applicant = data.get('relationToApplicant', [])
birth_certificate_details = data.get('BirthCertificateDetails', {})
fees = data.get('fees', {})

Insert the extracted data into the DynamoDB table using the put_item() method:

table.put_item(Item={
    'Id': event_id,
    'applicantName': applicant_details.get('applicantName', ''),
    'dayPhoneNumber': applicant_details.get('dayPhoneNumber', ''),
    'address': applicant_details.get('address', ''),
    'city': applicant_details.get('city', ''),
    'state': applicant_details.get('state', ''),
    'zipCode': applicant_details.get('zipCode', ''),
    'email': applicant_details.get('email', ''),
    'mailingAddressApplicantName': mailing_address.get('mailingAddressApplicantName', ''),
    'mailingAddress': mailing_address.get('mailingAddress', ''),
    'mailingAddressCity': mailing_address.get('mailingAddressCity', ''),
    'mailingAddressState': mailing_address.get('mailingAddressState', ''),
    'mailingAddressZipCode': mailing_address.get('mailingAddressZipCode', ''),
    'relationToApplicant': ', '.join(relation_to_applicant),
    'purposeOfRequest': data.get('purposeOfRequest', ''),
    'nameOnBirthCertificate': birth_certificate_details.get('nameOnBirthCertificate', ''),
    'dateOfBirth': birth_certificate_details.get('dateOfBirth', ''),
    'sex': birth_certificate_details.get('sex', ''),
    'cityOfBirth': birth_certificate_details.get('cityOfBirth', ''),
    'countyOfBirth': birth_certificate_details.get('countyOfBirth', ''),
    'mothersMaidenName': birth_certificate_details.get('mothersMaidenName', ''),
    'fathersName': birth_certificate_details.get('fathersName', ''),
    'mothersPlaceOfBirth': birth_certificate_details.get('mothersPlaceOfBirth', ''),
    'fathersPlaceOfBirth': birth_certificate_details.get('fathersPlaceOfBirth', ''),
    'parentsMarriedAtBirth': birth_certificate_details.get('parentsMarriedAtBirth', ''),
    'numberOfChildrenBornInSCToMother': birth_certificate_details.get('numberOfChildrenBornInSCToMother', ''),
    'diffNameAtBirth': birth_certificate_details.get('diffNameAtBirth', ''),
    'searchFee': fees.get('searchFee', ''),
    'eachAdditionalCopy': fees.get('eachAdditionalCopy', ''),
    'expediteFee': fees.get('expediteFee', ''),
    'totalFees': fees.get('totalFees', '')
})

Next, modify the IAM role of the Lambda function to grant the required permissions. Follow the same steps you used to modify the permissions for the invoke_bedrock_claude3 Lambda function, but enter the following JSON as the inline policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "dynamodb:PutItem",
            "Resource": "arn:aws:dynamodb:us-east-1::{AWS Account ID}:table/birth_certificates"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "sqs:DeleteMessage",
                "sqs:ReceiveMessage",
                "sqs:GetQueueAttributes"
            ],
            "Resource": "arn:aws:sqs:us-east-1::{AWS Account ID}:bedrock-idp-extracted-data"
        }
    ]
}

Enter a policy name (for example, insert_into_dynamodb-role-policy) and choose Create policy.

The policy will grant the following permissions:

  • Put records into the DynamoDB table
  • Read and delete messages from the SQS queue

Configure the Lambda function trigger for SQS

Complete the following steps to create a trigger for the Lambda function (a scripted alternative is shown after the list):

  1. On the Amazon SQS console, open the bedrock-idp-extracted-data queue.
  2. On the Lambda triggers tab, choose Configure Lambda function trigger.
  3. Select the insert_into_dynamodb Lambda function and choose Save.
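
The same trigger can be created programmatically. The following minimal sketch assumes the queue and function names used earlier; replace {AWS Account ID} with your account ID.

import boto3

lambda_client = boto3.client('lambda')

# Have Lambda poll the SQS queue and invoke insert_into_dynamodb for each batch
lambda_client.create_event_source_mapping(
    EventSourceArn='arn:aws:sqs:us-east-1:{AWS Account ID}:bedrock-idp-extracted-data',
    FunctionName='insert_into_dynamodb',
    BatchSize=1,  # the parsing code above reads a single record per invocation
)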

Test the solution

Now that you have created all the necessary resources, permissions, and code, it’s time to test the solution.

In the S3 folder birth_certificates, upload the two scanned images that you downloaded earlier. Then open the DynamoDB console and explore the items in the birth_certificates table.
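
If you prefer to upload the test files programmatically, the following is a minimal sketch; the local file names are hypothetical placeholders for the two forms you saved earlier.

import boto3

s3 = boto3.client('s3')
bucket_name = 'bedrock-claude3-idp-example123'  # placeholder; use your bucket name

# Hypothetical local file names for the English and Spanish sample forms
for file_name in ['birth_certificate_english.jpeg', 'birth_certificate_spanish.jpeg']:
    s3.upload_file(file_name, bucket_name, f'images/birth_certificates/{file_name}')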

If everything is configured properly, you should see two items in DynamoDB in just a few seconds, as shown in the following screenshots. For the Spanish form, Anthropic Claude 3 automatically translated the keys and labels from Spanish to English based on the prompt.

Troubleshooting

If you don’t see the extracted data in the DynamoDB table, you can investigate the issue:

  • Check CloudWatch logs – Review the Amazon CloudWatch log streams of the Lambda functions involved in the data extraction and ingestion process. Look for any error messages or exceptions that may indicate the root cause of the issue.
  • Identify missing permissions – In many cases, errors can occur due to missing permissions. Confirm that the Lambda functions have the necessary permissions to access the required AWS resources, such as DynamoDB tables, S3 buckets, or other services involved in the solution.
  • Implement a dead-letter queue – In a production-scale solution, it is recommended to implement a dead-letter queue (DLQ) to catch and handle any events or messages that fail to process or encounter errors (see the sketch after this list).
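
To illustrate the DLQ recommendation, the following minimal sketch creates a dead-letter queue and attaches it to the existing queue through a redrive policy; the maxReceiveCount value is an assumption to tune for your workload.

import json
import boto3

sqs = boto3.client('sqs')

# Create the dead-letter queue and look up its ARN
dlq_url = sqs.create_queue(QueueName='bedrock-idp-extracted-data-dlq')['QueueUrl']
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=['QueueArn']
)['Attributes']['QueueArn']

# Route messages to the DLQ after five failed processing attempts (assumed threshold)
main_queue_url = sqs.get_queue_url(QueueName='bedrock-idp-extracted-data')['QueueUrl']
sqs.set_queue_attributes(
    QueueUrl=main_queue_url,
    Attributes={'RedrivePolicy': json.dumps({
        'deadLetterTargetArn': dlq_arn,
        'maxReceiveCount': '5',
    })},
)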

Clean up

Clean up the resources created as part of this post to avoid incurring ongoing charges:

  1. Delete all the objects from the bedrock-claude3-idp... S3 bucket, then delete the bucket.
  2. Delete the two Lambda functions named invoke_bedrock_claude3 and insert_into_dynamodb.
  3. Delete the SQS queue named bedrock-idp-extracted-data.
  4. Delete the DynamoDB table named birth_certificates.

Example use cases and business value

The generative AI-powered IDP solution demonstrated in this post can benefit organizations across various industries, such as:

  • Government and public sector – Process and extract data from citizen applications, immigration documents, legal contracts, and other government-related forms, enabling faster turnaround times and improved service delivery
  • Healthcare – Extract and organize patient information, medical records, insurance claims, and other health-related documents, improving data accuracy and accessibility for better patient care
  • Finance and banking – Automate the extraction and processing of financial documents, loan applications, tax forms, and regulatory filings, reducing manual effort and increasing operational efficiency
  • Logistics and supply chain – Extract and organize data from shipping documents, invoices, purchase orders, and inventory records, streamlining operations and enhancing supply chain visibility
  • Retail and ecommerce – Automate the extraction and processing of customer orders, product catalogs, and marketing materials, enabling personalized experiences and efficient order fulfillment

By using the power of generative AI and Amazon Bedrock, organizations can unlock the true potential of their data, driving operational excellence, enhancing customer experiences, and fostering continuous innovation.

Conclusion

In this post, we demonstrated how to use Amazon Bedrock and the powerful Anthropic Claude 3 Sonnet model to develop an IDP solution. By harnessing the advanced multimodal capabilities of Anthropic Claude 3, we were able to accurately extract data from scanned documents and store it in a structured format in a DynamoDB table.

Although this solution showcases the potential of generative AI in IDP, it may not be suitable for all IDP use cases. The effectiveness of the solution may vary depending on the complexity and quality of the documents, the amount of training data available, and the specific requirements of the organization.

To further enhance the solution, consider implementing a human-in-the-loop workflow to review and validate the extracted data, especially for mission-critical or sensitive applications. This helps provide data accuracy and compliance with regulatory requirements. You can also explore the model evaluation feature in Amazon Bedrock to compare model outputs, and then choose the model best suited for your downstream generative AI applications.

For further exploration and learning, we recommend checking out the following resources:


About the Authors

Govind Palanisamy is a Solutions Architect at AWS, where he helps government agencies migrate and modernize their workloads to increase citizen experience. He is passionate about technology and transformation, and he helps customers transform their businesses using AI/ML and generative AI-based solutions.

Bharath Gunapati is a Sr. Solutions architect at AWS, where he helps clinicians, researchers, and staff at academic medical centers to adopt and use cloud technologies. He is passionate about technology and the impact it can make on healthcare and research.

Read More