Automate document translation and standardization with Amazon Bedrock and Amazon Translate

Multinational organizations face the complex challenge of effectively managing a workforce and operations across different countries, cultures, and languages. Maintaining consistency and alignment across these global operations can be difficult, especially when it comes to updating and sharing business documents and processes. Delays or miscommunications can lead to productivity losses, operational inefficiencies, or potential business disruptions. Accurate and timely sharing of translated documents across the organization is an important step in making sure that employees have access to the latest information in their native language.

In this post, we show how you can automate language localization through translating documents using Amazon Web Services (AWS). The solution combines Amazon Bedrock and AWS Serverless technologies, a suite of fully managed event-driven services for running code, managing data, and integrating applications—all without managing servers. Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, and Stability AI. Amazon Bedrock is accessible through a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI.

Solution overview

The solution uses AWS Step Functions to orchestrate the translation of the source document into the specified language (English, French, or Spanish) using AWS Lambda functions to call Amazon Translate. Note that Amazon Translate currently supports translation of 75 languages and 3 have been chosen for this demo. It then uses Amazon Bedrock to refine the translation and create natural, flowing content.

Building this solution, shown in the following diagram, on AWS fully managed and serverless technologies eliminates the need to operate infrastructure, manage capacity, or invest significant funding upfront to evaluate the business benefit. The compute and AI services used to process documents for translation run only on demand, resulting in a consumption-based billing model where you only pay for your use.

Solution architecture

The document translation and standardization workflow consists of the following steps:

  1. The user uploads their source document requiring translation to the input Amazon Simple Storage Service (Amazon S3) bucket. The bucket has three folders: English, French, and Spanish. The user uploads the source document to the folder that matches the current language of the document. This can be done using the AWS Management Console, the AWS Command Line Interface (AWS CLI), or third-party tools that allow them to navigate an S3 bucket as a file system.
  2. The presence of a new document in the input bucket initiates the Step Functions workflow using Amazon S3 Event Notifications.
  3. The first step of this workflow is an AWS Lambda function that retrieves the source document from the bucket, saves it in temporary storage, and calls the Amazon Translate TranslateDocument API, specifying the source document as the content to translate (see the sketch after this list).
  4. The second step of the workflow is another Lambda function that queries Amazon Bedrock using a pre-generated prompt with the translated source document included as the target. This prompt instructs Amazon Bedrock to perform a transcreation check on the document content. This validates that the intent, style, and tone of the document is maintained. The final version of the document is now saved in the output S3 bucket.
  5. The last step of the workflow uses Amazon Simple Notification Service (Amazon SNS) to notify an SNS topic of the outcome of the workflow (success or failure). This will send an email to the subscribers to the topic.
  6. The user downloads their translated document from the output S3 bucket. This can be done using the console, the AWS CLI, or third-party tools that allow them to navigate an S3 bucket as a file system.
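
The following is a minimal sketch of the translation Lambda function described in step 3, assuming the state machine passes the input bucket, object key, and target language; the event fields and key layout are illustrative and may differ from the code in the GitHub repository.

import boto3

s3 = boto3.client("s3")
translate = boto3.client("translate")

# Map of input folder name to Amazon Translate language code (assumed convention)
LANGUAGE_CODES = {"english": "en", "french": "fr", "spanish": "es"}

def handler(event, context):
    # The state machine input is assumed to carry the source bucket, key, and target language
    bucket = event["bucket"]
    key = event["key"]                      # for example, "english/policy.docx"
    source_lang = LANGUAGE_CODES[key.split("/")[0].lower()]
    target_lang = event["target_language"]  # "en", "fr", or "es"

    # Retrieve the source document from the input bucket
    document_bytes = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

    # Translate the .docx document in a single synchronous call
    response = translate.translate_document(
        Document={
            "Content": document_bytes,
            "ContentType": "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
        },
        SourceLanguageCode=source_lang,
        TargetLanguageCode=target_lang,
    )

    # Write the translated bytes back to Amazon S3 for the next state (key layout is illustrative)
    translated_key = f"{event['target_folder']}/{key.split('/')[-1]}"
    s3.put_object(Bucket=bucket, Key=translated_key, Body=response["TranslatedDocument"]["Content"])
    return {"bucket": bucket, "key": translated_key}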

This solution is available on GitHub and provides the AWS Cloud Development Kit (AWS CDK) code to deploy in your own AWS account. The AWS CDK is an open source software development framework for defining cloud infrastructure as code (IaC) and provisioning it through AWS CloudFormation. This provides an automated deployment process for your AWS account.

Prerequisites

For this walkthrough, you should have the following prerequisites:

Deployment steps

To deploy this solution into your own AWS account:

  1. Open your code editor of choice and authenticate to your AWS account. Instructions for linking to Visual Studio Code can be found in Authentication and access for the AWS Toolkit for Visual Studio Code.
  2. Clone the solution from the GitHub repository:
    git clone https://github.com/aws-samples/sample-document-standardization-with-bedrock-and-translate.git

  3. Follow the deployment instructions in the repository README file.
  4. After the stack is deployed, go to the S3 console. Navigate to the S3 bucket that was created — docstandardizationstack-inputbucket. Upload the word_template.docx file that’s included in the repository. English, French, and Spanish folders will automatically be created.

Folders that are created when word_template.docx is uploaded

  5. Navigate to the Amazon Simple Notification Service (Amazon SNS) console and create a subscription to the topic DocStandardizationStack-ResultTopic created by the stack. After it’s created, make sure that you confirm subscription to the topic before testing the workflow by choosing the confirm subscription link in the automated email you receive from SNS.

SNS subscription creation

  6. After you have subscribed to the topic, you can test the workflow.

Language translation

To test the workflow, upload a .docx file to the folder corresponding to the document’s original language. For example, if you’re uploading a document that was written in English, this document should be uploaded to the English folder. If you don’t have a .docx file available, you can use the tone_test.docx file that’s included in the repository.

The Step Functions state machine will start after your document is uploaded. Translated versions of your source input document will be added to the other folders that were created in step 4. In this example, we uploaded a document in English and the document was translated into both Spanish and French.

document translated into Spanish

document translated into French

Transcreation process

The translated documents are then processed using Amazon Bedrock. Amazon Bedrock reviews the documents’ intent, style, and tone for use in a business setting. You can customize the output tone and style by modifying the Amazon Bedrock prompt to match your specific requirements. The final documents are added to the output S3 bucket with a suffix of _corrected, and each document is added to the folder that corresponds to the document’s language. The output bucket has the same format as the input bucket, with a separate folder created for each language.

Folders in the output bucket

The prompt used to instruct the generative AI model for the transcreation task has been designed to produce consistent and valid adjustments. It includes specific instructions, covering both what type of changes are expected from the model and rules to define boundaries that control adjustments. You can adjust this prompt if required to change the outcome of the document processing workflow.
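
As an illustration, the following minimal sketch performs a transcreation check with the Amazon Bedrock Converse API; the prompt wording and model ID are assumptions for demonstration, not the exact prompt or model shipped in the repository.

import boto3

bedrock = boto3.client("bedrock-runtime")

# Illustrative transcreation instructions; the repository ships its own prompt
TRANSCREATION_PROMPT = (
    "You are reviewing a machine-translated business document. "
    "Keep the original intent, style, and tone, correct awkward phrasing so the text reads "
    "naturally to a native speaker, and do not add or remove information.\n\n"
    "Document:\n{document_text}"
)

def transcreate(document_text: str, model_id: str = "anthropic.claude-3-haiku-20240307-v1:0") -> str:
    response = bedrock.converse(
        modelId=model_id,
        messages=[{
            "role": "user",
            "content": [{"text": TRANSCREATION_PROMPT.format(document_text=document_text)}],
        }],
        inferenceConfig={"maxTokens": 4096, "temperature": 0.2},
    )
    # The refined document text is returned in the first content block of the reply
    return response["output"]["message"]["content"][0]["text"]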

The final documents will have a suffix of _corrected.

English corrected document

French corrected document

Spanish corrected document

When the documents have been processed, you will receive an SNS notification. You will be able to download the processed documents from the S3 bucket DocStandardizationStack-OutputBucket.

Clean up

To delete the deployed resources, run the command cdk destroy in your terminal, or use the CloudFormation console to delete the CloudFormation stack DocStandardizationStack.

Conclusion

In this post, we explored how to automate the translation of business documents using AWS AI and serverless technologies. Through this automated translation process, companies can improve communication, consistency, and alignment across their global operations, making sure that employees can access the information they need when they need it. As organizations continue to expand their global footprint, tools like this will become increasingly important for maintaining a cohesive and informed workforce, no matter where in the world they might be located. By embracing the capabilities of AWS, companies can focus on their core business objectives without creating additional IT infrastructure overhead.

Bonne traduction!

Feliz traducción!

Happy translating!

Further reading

The solution includes a zero-shot prompt with specific instructions directing what the LLM should and should not modify in the source document. If you want to iterate on the provided prompt to adjust your results, you can use the Amazon Bedrock Prompt Management tool to quickly edit and test the impact of changes to the prompt text.

For additional examples using Amazon Bedrock and other services, visit the AWS Workshops page to get started.


About the Authors

Nadhya Polanco is an Associate Solutions Architect at AWS based in Brussels, Belgium. In this role, she supports organizations looking to incorporate AI and Machine Learning into their workloads. In her free time, Nadhya enjoys indulging in her passion for coffee and exploring new destinations.

Steve Bell is a Senior Solutions Architect at AWS based in Amsterdam, Netherlands. He helps enterprise organizations navigate the complexities of migration, modernization and multicloud strategy. Outside of work he loves walking his labrador, Lily, and practicing his amateur BBQ skills.

Read More

Autonomous mortgage processing using Amazon Bedrock Data Automation and Amazon Bedrock Agents

Mortgage processing is a complex, document-heavy workflow that demands accuracy, efficiency, and compliance. Traditional mortgage operations rely on manual review, rule-based automation, and disparate systems, often leading to delays, errors, and a poor customer experience. Recent industry surveys indicate that only about half of borrowers express satisfaction with the mortgage process, with traditional banks trailing non-bank lenders in borrower satisfaction. This gap in satisfaction level is largely attributed to the manual, error-prone nature of traditional mortgage processing, where delays, inconsistencies, and fragmented workflows create frustration for borrowers and impact overall experience.

In this post, we introduce agentic automatic mortgage approval, a next-generation sample solution that uses autonomous AI agents powered by Amazon Bedrock Agents and Amazon Bedrock Data Automation. These agents orchestrate the entire mortgage approval process—intelligently verifying documents, assessing risk, and making data-driven decisions with minimal human intervention. By automating complex workflows, businesses can accelerate approvals, minimize errors, and provide consistency while enhancing scalability and compliance.

The following video shows this agentic automation in action—enabling smarter, faster, and more reliable mortgage processing at scale.

Why agentic IDP?

Agentic intelligent document processing (IDP) revolutionizes document workflows by driving efficiency and autonomy. It automates tasks with precision, enabling systems to extract, classify, and process information while identifying and correcting errors in real time.

Agentic IDP goes beyond simple extraction by grasping context and intent, adding deeper insights to documents that fuel smarter decision-making. Powered by Amazon Bedrock Data Automation, it adapts to changing document formats and data sources, further reducing manual work.

Built for speed and scale, agentic IDP processes high volumes of documents quickly, reducing delays and optimizing critical business operations. Seamlessly integrating with AI agents and enterprise systems, it automates complex workflows, cutting operational costs and freeing teams to focus on high-value strategic initiatives.

IDP in mortgage processing

Mortgage processing involves multiple steps, including loan origination, document verification, underwriting, and closing, with each step requiring significant manual effort. These steps are often disjointed, leading to slow processing times (weeks instead of minutes), high operational costs (manual document reviews), and an increased risk of human errors and fraud. Organizations face numerous technical challenges when manually managing document-intensive workflows, as depicted in the following diagram.

These challenges include:

  • Document overload – Mortgage applications require verification of extensive documentation, including tax records, income statements, property appraisals, and legal agreements. For example, a single mortgage application might require manual review and cross-validation of hundreds of pages of tax returns, pay stubs, bank statements, and legal documents, consuming significant time and resources.
  • Data entry errors – Manual processing introduces inconsistencies, inaccuracies, and missing information during data entry. Incorrect transcription of applicant income from W-2 forms or misinterpreting property appraisal data can lead to miscalculated loan eligibility, requiring costly corrections and rework.
  • Delays in decision-making – Backlogs resulting from manual review processes extend processing times and negatively affect borrower satisfaction. A lender manually reviewing income verification and credit documentation might take several weeks to work through their backlog, causing delays that result in lost opportunities or frustrated applicants who turn to competitors.
  • Regulatory compliance complexity – Evolving mortgage industry regulations introduce complexity into underwriting and verification procedures. Changes in lending regulations, such as new mandatory disclosures or updated income verification guidelines, can require extensive manual updates to processes, leading to increased processing times, higher operational costs, and elevated error rates from manual data entry.

These challenges underscore the need for automation to enhance efficiency, speed, and accuracy for both lenders and mortgage borrowers.

Solution: Agentic workflows in mortgage processing

The following solution is self-contained and the applicant only interacts with the mortgage applicant supervisor agent to upload documents and check or retrieve application status. The following diagram illustrates the workflow.

The workflow consists of the following steps:

  1. Applicant uploads documents to apply for a mortgage.
  2. The supervisor agent confirms receipt of documents. Applicant can view and retrieve application status.
  3. The underwriter updates the status of the application and sends approval documents to applicant.

At the core of the agentic mortgage processing workflow is a supervisor agent that orchestrates the entire workflow, manages sub-agents, and makes final decisions. Amazon Bedrock Agents is a capability within Amazon Bedrock that lets developers create AI-powered assistants capable of understanding user requests and executing complex tasks. These agents can break down requests into logical steps, interact with external tools and data sources, and use AI models to reason and take actions. They maintain conversation context while securely connecting to various APIs and AWS services, making them ideal for tasks like customer service automation, data analysis, and business process automation.

The supervisor agent intelligently delegates tasks to specialized sub-agents while maintaining the right balance between automated processing and human supervision. By aggregating insights and data from various sub-agents, the supervisor agent applies established business rules and risk criteria to either automatically approve qualifying loans or flag complex cases for human review, improving both efficiency and accuracy in the mortgage underwriting process.
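
To ground this, the following minimal sketch shows how a client (for example, the applicant-facing backend) might invoke the supervisor agent through the Amazon Bedrock Agents runtime API; the agent ID, alias ID, and example question are placeholders for values from your own deployment.

import boto3

agent_runtime = boto3.client("bedrock-agent-runtime")

def ask_supervisor(prompt: str, session_id: str) -> str:
    # Agent and alias IDs are placeholders for the supervisor agent created by the solution
    response = agent_runtime.invoke_agent(
        agentId="AGENT_ID",
        agentAliasId="AGENT_ALIAS_ID",
        sessionId=session_id,
        inputText=prompt,
    )
    # The response is a stream of events; concatenate the returned text chunks
    completion = ""
    for event in response["completion"]:
        if "chunk" in event:
            completion += event["chunk"]["bytes"].decode("utf-8")
    return completion

# Example: an applicant checks the status of their mortgage application
print(ask_supervisor("What is the status of my mortgage application?", session_id="demo-session-1"))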

In the following sections, we explore the sub-agents in more detail.

Data extraction agent

The data extraction agent uses Amazon Bedrock Data Automation to extract critical insights from mortgage application packages, including pay stubs, W-2 forms, bank statements, and identity documents. Amazon Bedrock Data Automation is a generative AI-powered capability of Amazon Bedrock that streamlines the development of generative AI applications and automates workflows involving documents, images, audio, and videos. The data extraction agent helps make sure that the validation, compliance, and decision-making agent receives accurate and structured data, enabling efficient validation, regulatory compliance, and informed decision-making. The following diagram illustrates the workflow.

The extraction workflow is designed to automate the process of extracting data from application packages efficiently. The workflow includes the following steps:

  1. The supervisor agent assigns the extraction task to the data extraction agent.
  2. The data extraction agent invokes Amazon Bedrock Data Automation to parse and extract applicant details from the application packages.
  3. The extracted application information is stored in the extracted documents Amazon Simple Storage Service (Amazon S3) bucket.
  4. The Amazon Bedrock Data Automation invocation response is sent back to the extraction agent.

Validation agent

The validation agent cross-checks extracted data with external resources such as IRS tax records and credit reports, flagging discrepancies for review. It flags inconsistencies such as doctored PDFs and low credit scores, and it also calculates the debt-to-income (DTI) ratio and loan-to-value (LTV) limit and performs an employment stability check. The following diagram illustrates the workflow.

The process consists of the following steps:

  1. The supervisor agent assigns the validation task to the validation agent.
  2. The validation agent retrieves the applicant details stored in the extracted documents S3 bucket.
  3. The applicant details are cross-checked against third-party resources, such as tax records and credit reports, to validate the applicant’s information.
  4. The third-party validated details are used by the validation agent to generate a status.
  5. The validation agent sends the validation status to the supervisor agent.
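
To make the kinds of checks described above concrete, here is a minimal sketch of DTI, LTV, and employment-stability calculations as they might run inside the validation agent’s action group; the thresholds and input field names are illustrative assumptions rather than values prescribed by the solution.

def validate_application(applicant: dict) -> dict:
    """Return a validation status from already-extracted applicant fields (illustrative)."""
    # Debt-to-income ratio: total monthly debt payments divided by gross monthly income
    dti = applicant["monthly_debt"] / applicant["gross_monthly_income"]

    # Loan-to-value ratio: requested loan amount divided by the appraised property value
    ltv = applicant["loan_amount"] / applicant["appraised_value"]

    # Simple employment stability check: at least two years with the current employer (assumption)
    stable_employment = applicant["months_with_employer"] >= 24

    flags = []
    if dti > 0.43:
        flags.append("DTI ratio above 43%")
    if ltv > 0.80:
        flags.append("LTV above 80% limit")
    if not stable_employment:
        flags.append("Employment history shorter than 24 months")

    return {
        "dti": round(dti, 3),
        "ltv": round(ltv, 3),
        "status": "PASSED" if not flags else "NEEDS_REVIEW",
        "flags": flags,
    }

# Example
print(validate_application({
    "monthly_debt": 2100, "gross_monthly_income": 7000,
    "loan_amount": 320000, "appraised_value": 400000,
    "months_with_employer": 30,
}))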

Compliance agent

The compliance agent verifies that the extracted and validated data adheres to regulatory requirements, reducing the risk of compliance violations. It validates against lending rules. For example, loans are approved only if the borrower’s DTI ratio is below 43%, making sure they can manage monthly payments, or applications with a credit score below 620 are declined, whereas higher scores qualify for better interest rates. The following diagram illustrates the compliance agent workflow.

The workflow includes the following steps:

  1. The supervisor agent assigns the compliance validation task to the compliance agent.
  2. The compliance agent retrieves the applicant details stored in the extracted documents S3 bucket.
  3. The applicant details are validated against mortgage processing rules.
  4. The compliance agent calculates the applicant’s DTI ratio, applying corporate policy and lending rules to the application.
  5. The compliance agent uses the validated details to generate a status.
  6. The compliance agent sends the compliance status to the supervisor agent.
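
The lending rules mentioned above translate naturally into a small decision function. The following is a minimal sketch under the stated assumptions (DTI below 43 percent, minimum credit score of 620); the rate tiers are illustrative and not part of the solution.

def apply_lending_rules(dti: float, credit_score: int) -> dict:
    """Apply illustrative corporate lending rules and return a compliance status."""
    if credit_score < 620:
        return {"status": "DECLINED", "reason": "Credit score below 620"}
    if dti >= 0.43:
        return {"status": "DECLINED", "reason": "DTI ratio at or above 43%"}

    # Higher scores qualify for better rates (illustrative tiers)
    rate_tier = "preferred" if credit_score >= 740 else "standard"
    return {"status": "APPROVED", "rate_tier": rate_tier}

# Examples
print(apply_lending_rules(dti=0.35, credit_score=760))  # approved, preferred tier
print(apply_lending_rules(dti=0.48, credit_score=700))  # declined, DTI too high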

Underwriting agent

The underwriting agent generates an underwriting document for the underwriter to review. The underwriting agent workflow streamlines the process of reviewing and finalizing underwriting documents, as shown in the following diagram.

The workflow consists of the following steps:

  1. The supervisor agent assigns the underwriting task to the underwriting agent.
  2. The underwriting agent verifies the information and creates a draft of the underwriting document.
  3. The draft document is sent to an underwriter for review.
  4. Updates from the underwriter are sent back to the underwriting agent.

RACI matrix

The collaboration between intelligent agents and human professionals is key to efficiency and accountability. To illustrate this, we’ve crafted a RACI (Responsible, Accountable, Consulted, and Informed) matrix that maps out how responsibilities might be shared between AI-driven agents and human roles, such as compliance officers and the underwriting officer. This mapping serves as a conceptual guide, offering a glimpse into how agentic automation can enhance human expertise, optimize workflows, and provide clear accountability. Real-world implementations will differ based on an organization’s unique structure and operational needs.

The matrix components are as follows:

  • R: Responsible (executes the work)
  • A: Accountable (owns approval authority and outcomes)
  • C: Consulted (provides input)
  • I: Informed (kept informed of progress/status)

End-to-end IDP automation architecture for mortgage processing

The following architecture diagram illustrates the AWS services powering the solution and outlines the end-to-end user journey, showcasing how each component interacts within the workflow.

In Steps 1 and 2, the process begins when a user accesses the web UI in their browser, with Amazon CloudFront maintaining low-latency content delivery worldwide. In Step 3, Amazon Cognito handles user authentication, and AWS WAF provides security against malicious threats. Steps 4 and 5 show authenticated users interacting with the web application to upload required documentation to Amazon S3. The uploaded documents in Amazon S3 trigger Amazon EventBridge, which initiates the Amazon Bedrock Data Automation workflow for document processing and information extraction.

In Step 6, AWS AppSync manages user interactions, enabling real-time communication with AWS Lambda and Amazon DynamoDB for data storage and retrieval. Steps 7, 8, and 9 demonstrate how the Amazon Bedrock multi-agent collaboration framework comes into play, where the supervisor agent orchestrates the workflow between specialized AI agents. The verification agent verifies uploaded documents, manages data collection, and uses action groups to compute DTI ratios and generate an application summary, which is stored in Amazon S3.

Step 10 shows how the validation agent (broker assistant) evaluates the application based on predefined business criteria and automatically generates a pre-approval letter, streamlining loan processing with minimal human intervention. Throughout the workflow in Step 11, Amazon CloudWatch provides comprehensive monitoring, logging, and real-time visibility into all system components, maintaining operational reliability and performance tracking.

This fully agentic and automated architecture enhances mortgage processing by improving efficiency, reducing errors, and accelerating approvals, ultimately delivering a faster, smarter, and more scalable lending experience.

Prerequisites

You need to have an AWS account and an AWS Identity and Access Management (IAM) role and user with permissions to create and manage the necessary resources and components for this solution. If you don’t have an AWS account, see How do I create and activate a new Amazon Web Services account?

Deploy the solution

To get started, clone the GitHub repository and follow the instructions in the README to deploy the solution using AWS CloudFormation. The deployment steps offer clear guidance on how to build and deploy the solution. After the solution is deployed, you can proceed with the following instructions:

  1. After you provision all the stacks, navigate to the stack AutoLoanAPPwebsitewafstackXXXXX on the AWS CloudFormation console.
  2. On the Outputs tab, locate the CloudFront endpoint for the application UI.

You can also get the endpoint using the AWS Command Line Interface (AWS CLI) and the following command:

aws cloudformation describe-stacks \
  --stack-name $(aws cloudformation list-stacks \
    --stack-status-filter CREATE_COMPLETE UPDATE_COMPLETE | jq -r '.StackSummaries[] | select(.StackName | startswith("AutoLoanAPPwebsitewafstack")) | .StackName') \
  --query 'Stacks[0].Outputs[?OutputKey==`configwebsitedistributiondomain`].OutputValue' \
  --output text
  3. Open the CloudFront URL (https://<domain_name>.cloudfront.net) in a new browser tab.

You should see the application login page.

  4. Create an Amazon Cognito user in the user pool to access the application.
  5. Sign in using your Amazon Cognito email and password credentials to access the application.

Monitoring and troubleshooting

Consider the following best practices:

  • Monitor stack creation and update status using the AWS CloudFormation console or AWS CLI
  • Monitor Amazon Bedrock model invocation metrics using CloudWatch (a minimal query sketch follows this list):
    • InvokeModel requests and latency
    • Throttling exceptions
    • 4xx and 5xx errors
  • Check Amazon CloudTrail for API invocations and errors
  • Check CloudWatch for solution-specific errors and logs:

aws cloudformation describe-stacks --stack-name <stack-name>
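
For the Amazon Bedrock metrics above, the following minimal sketch pulls totals from CloudWatch with boto3; the AWS/Bedrock namespace and metric names reflect the standard Bedrock runtime metrics, and the lookback window and period are arbitrary choices for illustration.

from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch")

def bedrock_metric_sum(metric_name: str, hours: int = 24) -> float:
    """Sum a metric from the AWS/Bedrock namespace over the last N hours (illustrative)."""
    end = datetime.now(timezone.utc)
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/Bedrock",
        MetricName=metric_name,
        StartTime=end - timedelta(hours=hours),
        EndTime=end,
        Period=3600,            # one datapoint per hour
        Statistics=["Sum"],
    )
    return sum(point["Sum"] for point in stats["Datapoints"])

for name in ("Invocations", "InvocationThrottles", "InvocationClientErrors", "InvocationServerErrors"):
    print(name, bedrock_metric_sum(name))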

Clean up

To avoid incurring additional costs after testing this solution, complete the following steps:

  1. Delete the relevant stacks from the AWS CloudFormation console.
  2. Verify the S3 buckets are empty before deleting them.

Conclusion

The automated loan application sample solution demonstrates how you can use Amazon Bedrock Agents and Amazon Bedrock Data Automation to transform mortgage loan processing workflows. Beyond mortgage processing, you can adapt this solution to streamline claims processing or address other complex document-processing scenarios. By using intelligent automation, this solution significantly reduces manual effort, shortens processing times, and accelerates decision-making. Automating these intricate workflows helps organizations achieve greater operational efficiency, maintain consistent compliance with evolving regulations, and deliver exceptional customer experiences.

The sample solution is provided as open source—use it as a starting point for your own solution, and help us make it better by contributing back fixes and features using GitHub pull requests. Browse to the GitHub repository to explore the code, choose Watch to be notified of new releases, and check the README for the latest documentation updates.

As next steps, we recommend assessing your current document processing workflows to identify areas suitable for automation using Amazon Bedrock Agents and Amazon Bedrock Data Automation.

For expert assistance, AWS Professional Services and other AWS Partners are here to help.

We’d love to hear from you. Let us know what you think in the comments section, or use the issues forum in the repository.


About the Authors

Wrick Talukdar is a Tech Lead – Generative AI Specialist focused on Intelligent Document Processing. He leads machine learning initiatives and projects across business domains, leveraging multimodal AI, generative models, computer vision, and natural language processing. He speaks at conferences such as AWS re:Invent, IEEE, Consumer Technology Society (CTSoc), YouTube webinars, and other industry conferences like CERAWEEK and ADIPEC. In his free time, he enjoys writing and birding photography.

Jady Liu is a Senior AI/ML Solutions Architect on the AWS GenAI Labs team based in Los Angeles, CA. With over a decade of experience in the technology sector, she has worked across diverse technologies and held multiple roles. Passionate about generative AI, she collaborates with major clients across industries to achieve their business goals by developing scalable, resilient, and cost-effective generative AI solutions on AWS. Outside of work, she enjoys traveling to explore wineries and distilleries.

Farshad Bidanjiri is a Solutions Architect focused on helping startups build scalable, cloud-native solutions. With over a decade of IT experience, he specializes in container orchestration and Kubernetes implementations. As a passionate advocate for generative AI, he helps emerging companies leverage cutting-edge AI technologies to drive innovation and growth.

Keith Mascarenhas leads worldwide GTM strategy for Generative AI at AWS, developing enterprise use cases and adoption frameworks for Amazon Bedrock. Prior to this, he drove AI/ML solutions and product growth at AWS, and held key roles in Business Development, Solution Consulting and Architecture across Analytics, CX and Information Security.

Jessie-Lee Fry is a Product and Go-to Market (GTM) Strategy executive specializing in Generative AI and Machine Learning, with over 15 years of global leadership experience in Strategy, Product, Customer success, Business Development, Business Transformation and Strategic Partnerships. Jessie has defined and delivered a broad range of products and cross-industry go-to-market strategies driving business growth, while maneuvering market complexities and C-Suite customer groups. In her current role, Jessie and her team focus on helping AWS customers adopt Amazon Bedrock at scale, developing enterprise use cases and adoption frameworks and meeting customers where they are in their Generative AI journey.

Raj Jayaraman is a Senior Generative AI Solutions Architect at AWS, bringing over a decade of experience in helping customers extract valuable insights from data. Specializing in AWS AI and generative AI solutions, Raj’s expertise lies in transforming business solutions through the strategic application of AWS’s AI capabilities, ensuring customers can harness the full potential of generative AI in their unique contexts. With a strong background in guiding customers across industries in adopting AWS Analytics and Business Intelligence services, Raj now focuses on assisting organizations in their generative AI journey—from initial demonstrations to proof of concepts and ultimately to production implementations.

Read More

Amazon Bedrock Model Distillation: Boost function calling accuracy while reducing cost and latency

Amazon Bedrock Model Distillation is generally available, and it addresses the fundamental challenge many organizations face when deploying generative AI: how to maintain high performance while reducing costs and latency. This technique transfers knowledge from larger, more capable foundation models (FMs) that act as teachers to smaller, more efficient models (students), creating specialized models that excel at specific tasks. In this post, we highlight the advanced data augmentation techniques and performance improvements in Amazon Bedrock Model Distillation with Meta’s Llama model family.

Agent function calling represents a critical capability for modern AI applications, allowing models to interact with external tools, databases, and APIs by accurately determining when and how to invoke specific functions. Although larger models typically excel at identifying the appropriate functions to call and constructing proper parameters, they come with higher costs and latency. Amazon Bedrock Model Distillation now enables smaller models to achieve comparable function calling accuracy while delivering substantially faster response times and lower operational costs.

The value proposition is compelling: organizations can deploy AI agents that maintain high accuracy in tool selection and parameter construction while benefiting from the reduced footprint and increased throughput of smaller models. This advancement makes sophisticated agent architectures more accessible and economically viable across a broader range of applications and scales of deployment.

Prerequisites

For a successful implementation of Amazon Bedrock Model Distillation, you’ll need to meet several requirements. We recommend referring to Submit a model distillation job in Amazon Bedrock in the official AWS documentation for the most up-to-date and comprehensive information.

Key requirements include:

  • An active AWS account
  • Selected teacher and student models enabled in your account (verify on the Model access page of the Amazon Bedrock console)
  • An S3 bucket for storing input datasets and output artifacts
  • Appropriate IAM permissions:
    • Trust relationship allowing Amazon Bedrock to assume the role
    • Permissions to access S3 for input/output data and invocation logs
    • Permissions for model inference when using inference profiles

If you’re using historical invocation logs, confirm that model invocation logging is enabled in your Amazon Bedrock settings with S3 selected as the logging destination.

Preparing your data

Effective data preparation is crucial for successful distillation of agent function calling capabilities. Amazon Bedrock provides two primary methods for preparing your training data: uploading JSONL files to Amazon S3 or using historical invocation logs. Regardless of which method you choose, you’ll need to prepare proper formatting of tool specifications to enable successful agent function calling distillation.

Tool specification format requirements

For agent function calling distillation, Amazon Bedrock requires that tool specifications be provided as part of your training data. These specifications must be encoded as text within the system or user message of your input data. The following example uses the Llama model family’s function calling format:

system: 'You are an expert in composing functions. You are given a question and a set of possible functions. Based on the question, you will need to make one or more function/tool calls to achieve the purpose.

Here is a list of functions in JSON format that you can invoke.
[
    {
        "name": "lookup_weather",
        "description": "Lookup weather to a specific location",
        "parameters": {
            "type": "dict",
            "required": [
                "city"
            ],
            "properties": {
                "location": {
                    "type": "string",
                },
                "date": {
                    "type": "string",
                }
            }
        }
    }
 ]'
 user: "What's the weather tomorrow?"

This approach lets the model learn how to interpret tool definitions and make appropriate function calls based on user queries. Afterwards, when calling inference on the distilled student model, we suggest keeping the prompt format consistent with the distillation input data. This provides optimal performance by maintaining the same structure the model was trained on.

Preparing data using Amazon S3 JSONL upload

When creating a JSONL file for distillation, each record must follow this structure:

{
    "schemaVersion": "bedrock-conversation-2024",
    "system": [
        {
            "text": 'You are an expert in composing functions. You are given a question and a set of possible functions. Based on the question, you will need to make one or more function/tool calls to achieve the purpose.
                    Here is a list of functions in JSON format that you can invoke.
                    [
                        {
                            "name": "lookup_weather",
                            "description": "Lookup weather to a specific location",
                            "parameters": {
                                "type": "dict",
                                "required": [
                                    "city"
                                ],
                                "properties": {
                                    "location": {
                                        "type": "string",
                                    },
                                    "date": {
                                        "type": "string",
                                    }
                                }
                            }
                        }
                    ]'
        }
    ],
    "messages": [
        {
            "role": "user",
            "content": [
                {
                    "text": "What's the weather tomorrow?"
                }
            ]
        },
        {
            "role": "assistant",
            "content": [
               {
                   "text": "[lookup_weather(location="san francisco", date="tomorrow")]"
               }
            ]
        }
    ]
}

Each record must include the schemaVersion field with the value bedrock-conversation-2024. The system field contains instructions for the model, including available tools. The messages field contains the conversation, with required user input and optional assistant responses.

Using historical invocation logs

Alternatively, you can use your historical model invocation logs on Amazon Bedrock for distillation. This approach uses actual production data from your application, capturing real-world function calling scenarios. To use this method:

  1. Enable invocation logging in your Amazon Bedrock account settings, selecting S3 as your logging destination.
  2. Add metadata to your model invocations using the requestMetadata field to categorize interactions. For example:
    "requestMetadata": { 
       "project": "WeatherAgent", 
       "intent": "LocationQuery", 
       "priority": "High"
    }

  3. When creating your distillation job, specify filters to select relevant logs based on metadata:
    "requestMetadataFilters": { 
        "equals": {"project": "WeatherAgent"} 
    }

Using historical invocation logs means that you can distill knowledge from your production workloads, allowing the model to learn from real user interactions and function calls.
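
For reference, here is a minimal sketch of attaching such metadata when invoking a model through the Converse API so that the resulting invocation logs can be filtered for distillation later; the model ID and metadata values are illustrative.

import boto3

bedrock = boto3.client("bedrock-runtime")

response = bedrock.converse(
    modelId="meta.llama3-1-405b-instruct-v1:0",   # teacher model; ID is illustrative
    messages=[{"role": "user", "content": [{"text": "What's the weather tomorrow?"}]}],
    # Metadata recorded in the invocation log and usable in requestMetadataFilters later
    requestMetadata={
        "project": "WeatherAgent",
        "intent": "LocationQuery",
        "priority": "High",
    },
)
print(response["output"]["message"]["content"][0]["text"])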

Model distillation enhancements

Although the basic process for creating a model distillation job remains similar to what we described in our previous blog post, Amazon Bedrock Model Distillation introduces several enhancements with general availability that improve the experience, capabilities, and transparency of the service.

Expanded model support

With general availability, we have expanded the model options available for distillation. In addition to the models supported during preview, customers can now use:

  • Nova Premier as a teacher model for Nova Pro/Lite/Micro models distillation
  • Anthropic’s Claude 3.5 Sonnet v2 as a teacher model for Claude Haiku distillation
  • Meta’s Llama 3.3 70B as a teacher model and Llama 3.2 1B and 3B as student models for Meta model distillation

This broader selection allows customers to find the balance between performance and efficiency across different use cases. For the most current list of supported models, refer to the Amazon Bedrock documentation.

Advanced data synthesis technology

Amazon Bedrock applies proprietary data synthesis techniques during the distillation process for certain use cases. This science innovation automatically generates additional training examples that improve the student model’s ability to generate better responses.

For agent function calling with Llama models specifically, the data augmentation methods help bridge the performance gap between teacher and student models compared to vanilla distillation (vanilla distillation means directly annotating input data with teacher responses and running student training with supervised fine-tuning). This makes the student models’ performance much more comparable to the teacher after distillation while maintaining the cost and latency benefits of a smaller model.

Enhanced training visibility

Amazon Bedrock model distillation now provides better visibility into the training process through multiple enhancements:

  1. Synthetic data transparency – Model distillation now provides samples of the synthetically generated training data used to enhance model performance. For most model families, up to 50 sample prompts are exported (up to 25 for Anthropic models), giving you insight into how your model was trained, which can help support internal compliance requirements.
  2. Prompt insights reporting – A summarized report of prompts accepted for distillation is provided, along with detailed visibility into prompts that were rejected and the specific reason for rejection. This feedback mechanism helps you identify and fix problematic prompts to improve your distillation success rate.

These insights are stored in the output S3 bucket specified during job creation, giving you a clearer picture of the knowledge transfer process.

Improved job status reporting

Amazon Bedrock Model Distillation also offers enhanced training job status reporting to provide more detailed information about where your model distillation job stands in the process. Rather than brief status indicators such as “In Progress” or “Complete,” the system now provides more granular status updates, helping you better track the progress of the distillation job.

You can track these job status details in both the AWS Management Console and AWS SDK.
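
As an example, a minimal sketch for polling these statuses from the SDK might look like the following; the job ARN is a placeholder, and the statusDetails field is an assumption about the response shape, so check the current API reference before relying on it.

import time
import boto3

bedrock = boto3.client("bedrock")

def wait_for_distillation_job(job_arn: str, poll_seconds: int = 60) -> str:
    """Poll a model customization (distillation) job until it finishes."""
    while True:
        job = bedrock.get_model_customization_job(jobIdentifier=job_arn)
        status = job["status"]
        # statusDetails (field name is an assumption) surfaces the more granular progress information
        print(status, job.get("statusDetails", ""))
        if status in ("Completed", "Failed", "Stopped"):
            return status
        time.sleep(poll_seconds)

# wait_for_distillation_job("arn:aws:bedrock:us-east-1:111122223333:model-customization-job/EXAMPLE")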

Performance improvements and benefits

Now that we’ve explored the feature enhancements in Amazon Bedrock Model Distillation, we examine the benefits these capabilities deliver, particularly for agent function calling use cases.

Evaluation metric

We use abstract syntax tree (AST) to evaluate the function calling performance. AST parses the generated function call and performs fine-grained evaluation on the correctness of the generated function name, parameter values, and data types with the following workflow:

  1. Function matching – Checks if the predicted function name is consistent with one of the possible answers
  2. Required parameter matching – Extracts the arguments from the AST and checks if each parameter can be found and exact matched in possible answers
  3. Parameter type and value matching – Checks if the predicted parameter values and types are correct

The process is illustrated in the following diagram from Gorilla: Large Language Model Connected with Massive APIs.
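
To make this concrete, the following minimal sketch performs a simplified version of these AST checks in Python using the standard ast module; it is an illustration of the approach, not the BFCL evaluator itself.

import ast

def check_function_call(generated: str, expected: dict) -> bool:
    """Compare one generated call against an expected answer using Python's AST."""
    # Generated calls look like: [lookup_weather(location="san francisco", date="tomorrow")]
    tree = ast.parse(generated.strip().strip("[]"), mode="eval")
    call = tree.body
    if not isinstance(call, ast.Call):
        return False

    # 1. Function matching
    name = call.func.id if isinstance(call.func, ast.Name) else ast.unparse(call.func)
    if name != expected["name"]:
        return False

    # 2. Required parameter matching and 3. parameter type and value matching
    args = {kw.arg: ast.literal_eval(kw.value) for kw in call.keywords}
    for param, allowed_values in expected["parameters"].items():
        if param not in args or args[param] not in allowed_values:
            return False
    return True

print(check_function_call(
    '[lookup_weather(location="san francisco", date="tomorrow")]',
    {"name": "lookup_weather",
     "parameters": {"location": ["san francisco", "San Francisco, CA"], "date": ["tomorrow"]}},
))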

Experiment results

To evaluate model distillation in the function call use case, we used the BFCL v2 dataset and filtered it to specific domains (entertainment, in this case) to match a typical use case of model customization. We also split the data into training and test sets and performed distillation on the training data while we ran evaluations on the test set. Both the training set and the test set contained around 200 examples. We assessed the performance of several models, including the teacher model (Llama 405B), the base student model (Llama 3B), a vanilla distillation version where Llama 405B is distilled into Llama 3B without data augmentation, and an advanced distillation version enhanced with proprietary data augmentation techniques.

The evaluation focused on simple and multiple categories defined in the BFCL V2 dataset. As shown in the following chart, there is a performance variance between the teacher and the base student model across both categories. Vanilla distillation significantly improved the base student model’s performance. In the simple category, performance increased from 0.478 to 0.783, representing a 63.8% relative improvement. In the multiple category, the score rose from 0.586 to 0.742, which is a 26.6% relative improvement. On average, vanilla distillation led to a 45.2% improvement across the two categories.

Applying data augmentation techniques provided further gains beyond vanilla distillation. In the simple category, performance improved from 0.783 to 0.826, and in the multiple category, from 0.742 to 0.828. On average, this resulted in a 5.8% relative improvement across both categories, calculated as the mean of the relative gains in each. These results highlight the effectiveness of both distillation and augmentation strategies in enhancing student model performance for function call tasks.

We show the latency and output speed comparison for different models in the following figure. The data is gathered from Artificial Analysis, a website that provides independent analysis of AI models and providers, on April 4, 2025. We find a clear trend in latency and generation speed across the different-sized Llama models evaluated. Notably, the Llama 3.1 8B model offers the highest output speed, making it the most efficient in terms of responsiveness and throughput. Similarly, Llama 3.2 3B performs well with a slightly higher latency but still maintains a solid output speed. On the other hand, Llama 3.1 70B and Llama 3.1 405B exhibit much higher latencies with significantly lower output speeds, indicating a substantial performance cost at higher model sizes. Compared to Llama 3.1 405B, Llama 3.2 3B provides a 72% latency reduction and 140% output speed improvement. These results suggest that smaller models might be more suitable for applications where speed and responsiveness are critical.

In addition, we report the comparison of cost per 1M tokens for different Llama models. As shown in the following figure, it’s evident that smaller models (Llama 3.2 3B and Llama 3.1 8B) are significantly more cost-effective. As the model size increases (Llama 3.1 70B and Llama 3.1 405B), the pricing scales steeply. This dramatic increase underscores the trade-off between model complexity and operational cost.

Real-world agent applications require LLM models that can achieve a good balance between accuracy, speed, and cost. This result shows that using a distilled model for agent applications gives developers the speed and cost of smaller models while achieving accuracy similar to that of a larger teacher model.

Conclusion

Amazon Bedrock Model Distillation is now generally available, offering organizations a practical pathway for deploying capable agent experiences without compromising on performance or cost-efficiency. As our performance evaluation demonstrates, distilled models for function calling can achieve accuracy comparable to models many times their size while delivering significantly faster inference and lower operational costs. This capability enables scalable deployment of AI agents that can accurately interact with external tools and systems across enterprise applications.

Start using Amazon Bedrock Model Distillation today through the AWS Management Console or API to transform your generative AI applications, including agentic use cases, with the balance of accuracy, speed, and cost efficiency. For implementation examples, check out our code samples in the amazon-bedrock-samples GitHub repository.

Appendix

BFCL V2 simple category

Definition: The simple category consists of tasks where the user is provided with a single function documentation (that is, one JSON function definition), and the model is expected to generate exactly one function call that matches the user’s request. This is the most basic and commonly encountered scenario, focusing on whether the model can correctly interpret a straightforward user query and map it to the only available function, filling in the required parameters as needed.

# Example
{
    "id": "live_simple_0-0-0",
    "question": [
        [{
            "role": "user",
            "content": "Can you retrieve the details for the user with the ID 7890, who has black as their special request?"
        }]
    ],
    "function": [{
        "name": "get_user_info",
        "description": "Retrieve details for a specific user by their unique identifier.",
        "parameters": {
            "type": "dict",
            "required": ["user_id"],
            "properties": {
                "user_id": {
                    "type": "integer",
                    "description": "The unique identifier of the user. It is used to fetch the specific user details from the database."
                },
                "special": {
                    "type": "string",
                    "description": "Any special information or parameters that need to be considered while fetching user details.",
                    "default": "none"
                }
            }
        }
    }]
}

BFCL V2 multiple category

Definition: The multiple category presents the model with a user query and several (typically two to four) function documentations. The model must select the most appropriate function to call based on the user’s intent and context and then generate a single function call accordingly. This category evaluates the model’s ability to understand the user’s intent, distinguish between similar functions, and choose the best match from multiple options.

{
    "id": "live_multiple_3-2-0",
    "question": [
        [{
            "role": "user",
            "content": "Get weather of Ha Noi for me"
        }]
    ],
    "function": [{
        "name": "uber.ride",
        "description": "Finds a suitable Uber ride for the customer based on the starting location, the desired ride type, and the maximum wait time the customer is willing to accept.",
        "parameters": {
            "type": "dict",
            "required": ["loc", "type", "time"],
            "properties": {
                "loc": {
                    "type": "string",
                    "description": "The starting location for the Uber ride, in the format of 'Street Address, City, State', such as '123 Main St, Springfield, IL'."
                },
                "type": {
                    "type": "string",
                    "description": "The type of Uber ride the user is ordering.",
                    "enum": ["plus", "comfort", "black"]
                },
                "time": {
                    "type": "integer",
                    "description": "The maximum amount of time the customer is willing to wait for the ride, in minutes."
                }
            }
        }
    }, {
        "name": "api.weather",
        "description": "Retrieve current weather information for a specified location.",
        "parameters": {
            "type": "dict",
            "required": ["loc"],
            "properties": {
                "loc": {
                    "type": "string",
                    "description": "The location for which weather information is to be retrieved, in the format of 'City, Country' (e.g., 'Paris, France')."
                }
            }
        }
    }]
}

About the authors

Yanyan Zhang is a Senior Generative AI Data Scientist at Amazon Web Services, where she has been working on cutting-edge AI/ML technologies as a Generative AI Specialist, helping customers use generative AI to achieve their desired outcomes. Yanyan graduated from Texas A&M University with a PhD in Electrical Engineering. Outside of work, she loves traveling, working out, and exploring new things.

Ishan Singh is a Generative AI Data Scientist at Amazon Web Services, where he helps customers build innovative and responsible generative AI solutions and products. With a strong background in AI/ML, Ishan specializes in building generative AI solutions that drive business value. Outside of work, he enjoys playing volleyball, exploring local bike trails, and spending time with his wife and dog, Beau.

Yijun Tian is an Applied Scientist II at AWS Agentic AI, where he focuses on advancing fundamental research and applications in Large Language Models, Agents, and Generative AI. Prior to joining AWS, he obtained his Ph.D. in Computer Science from the University of Notre Dame.

Yawei Wang is an Applied Scientist at AWS Agentic AI, working at the forefront of generative AI technologies to build next-generation AI products within AWS. He also collaborates with AWS business partners to identify and develop machine learning solutions that address real-world industry challenges.

David Yan is a Senior Research Engineer at AWS Agentic AI, leading efforts in Agent Customization and Optimization. Prior to that, he was in AWS Bedrock, leading the model distillation effort to help customers optimize LLM latency, cost, and accuracy. His research interests include AI agents, planning and prediction, and inference optimization. Before joining AWS, David worked on planning and behavior prediction for autonomous driving at Waymo. Before that, he worked on natural language understanding for knowledge graphs at Google. David received an M.S. in Electrical Engineering from Stanford University and a B.S. in Physics from Peking University.

Panpan Xu is a Principal Applied Scientist at AWS Agentic AI, leading a team working on Agent Customization and Optimization. Prior to that, she led a team in AWS Bedrock working on research and development of inference optimization techniques for foundation models, covering modeling-level techniques such as model distillation and sparsification as well as hardware-aware optimization. Her past research interests cover a broad range of topics including model interpretability, graph neural networks, human-in-the-loop AI, and interactive data visualization. Prior to joining AWS, she was a lead research scientist at Bosch Research and obtained her PhD in computer science from Hong Kong University of Science and Technology.

Shreeya Sharma is a Senior Technical Product Manager at AWS, where she has been working on leveraging the power of generative AI to deliver innovative and customer-centric products. Shreeya holds a master’s degree from Duke University. Outside of work, she loves traveling, dancing, and singing.

Read More

Build public-facing generative AI applications using Amazon Q Business for anonymous users

Amazon Q Business is a generative AI-powered assistant that answers questions, provides summaries, generates content, and securely completes tasks based on enterprise data and information. It connects to company data sources, applications, and internal systems to provide relevant, contextual answers while maintaining organizational security and compliance standards.

Today, we’re excited to announce that Amazon Q Business now supports anonymous user access. With this new feature, you can now create Amazon Q Business applications with anonymous user mode, where user authentication is not required and content is publicly accessible. These anonymous user applications can be used in use cases such as public website Q&A, documentation portals, and customer self-service experiences.

This capability allows guest users to use Amazon Q Business generative AI capabilities to quickly find product information, get technical answers, navigate documentation, and troubleshoot issues. Your public-facing websites, documentation, and support portals can now deliver the same powerful AI-driven assistance that authenticated users receive, creating an experience that enriches the guest user journey across your digital environments.

With this launch, you can seamlessly integrate an anonymous Amazon Q Business application into your websites and web applications through two pathways: either by embedding the ready-to-use web experience into your websites using an iframe for quick deployment, or by using our Chat, ChatSync, and PutFeedback APIs to build completely customized interfaces within your own applications. For anonymous Amazon Q Business applications, we’ve implemented a simple consumption-based pricing model where you’re charged based on the number of Chat or ChatSync API operations your anonymous Amazon Q Business applications make.
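
As an illustration of the API pathway, the following minimal sketch calls the ChatSync API with boto3 from a backend that serves anonymous users; the application ID is a placeholder from your own deployment.

import boto3

qbusiness = boto3.client("qbusiness")

def ask(application_id: str, question: str, conversation_id: str | None = None) -> dict:
    """Send one question to an anonymous Amazon Q Business application via ChatSync."""
    kwargs = {"applicationId": application_id, "userMessage": question}
    if conversation_id:
        # Pass the conversation ID back to keep multi-turn context
        kwargs["conversationId"] = conversation_id
    response = qbusiness.chat_sync(**kwargs)
    return {
        "answer": response["systemMessage"],
        "conversationId": response["conversationId"],
    }

# Application ID is a placeholder from your own deployment
result = ask("YOUR_APPLICATION_ID", "How do I reset my device to factory settings?")
print(result["answer"])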

In this post, we demonstrate how to build a public-facing generative AI application using Amazon Q Business for anonymous users.

Solution overview

In this solution, we walk you through creating an anonymous Amazon Q Business application using both the AWS Management Console and AWS Command Line Interface (AWS CLI). Our example demonstrates a practical scenario: helping website visitors find information on public-facing documentation websites.

We demonstrate how to test the implementation with sample queries through the built-in web experience URL. The resulting application can be customized and embedded directly into your websites (using the API or the iframe method), providing immediate value for your users.

Prerequisites

To follow along with this post, you will need the following:

  • An AWS account.
  • At least one Amazon Q Business Pro user that has admin permissions to set up and configure Amazon Q Business. For pricing information, see Amazon Q Business pricing.
  • AWS Identity and Access Management (IAM) permissions to create and manage IAM roles and policies.
  • Public content to index (documents, FAQs, knowledge base articles) that can be shared with unauthenticated users.
  • A supported data source to connect, such as an Amazon Simple Storage Service (Amazon S3) bucket containing your public documents.
  • The AWS CLI configured with appropriate permissions (if following the AWS CLI method).

Create an anonymous Amazon Q Business application using the console

In this section, we walk through the steps to implement the solution using the console.

Create an IAM role for the web experience

Before creating your Amazon Q Business application, you will need to set up an IAM role with the appropriate permissions:

  1. On the IAM console, choose Roles in the navigation pane and choose Create role.
  2. Choose AWS service as the trusted entity.
  3. Select Amazon Q Business from the service list.
  4. Choose Next: Permissions.
  5. Create a custom policy or attach the necessary read-only policies, and add permissions for anonymous access.

We strongly recommend that you use a restricted policy for the role, like the one shown in the following screenshot, which will be used to create the web experience for anonymous access application environments.

A restricted role policy for calling the Chat API in anonymous access application environments should scope the Resource to your application ARN, for example arn:aws:qbusiness:<your-region>:<your-aws-account-id>:application/<your-application-id>.

  6. Create an IAM role with a trust policy that allows the Amazon Q Business service principal to assume the role using AWS Security Token Service (AWS STS), specifically scoped to your application’s Amazon Resource Name (ARN) in the designated AWS Region.

Create an Amazon Q Business application

Now you’re ready to create your Amazon Q Business application:

  1. On the Amazon Q Business console, choose Create application.
  2. For Application name, enter a name (for example, SupportDocs-Assistant).
  3. For User access, select Anonymous access for this application environment.
  4. Select Web experience to create a managed web experience to access the Amazon Q Business application.

You will see a notice about consumption-based billing for anonymous Amazon Q Business applications. For more details on pricing, refer to Amazon Q Business pricing.

  5. Leave the default service role option unless you have specific requirements.
  6. For Encryption, use the default AWS managed key unless you need custom encryption.
  7. For Web experience settings, you can use an existing IAM role from your account or authorize Amazon Q Business to generate a new role with appropriate permissions. For this post, we select Use an existing service role and choose the IAM role created earlier (QBusinessAnonymousWebRole).
  8. Optionally, customize the web experience title and welcome message.
  9. Review all your configuration options and choose Create to create the application.

You should see a confirmation that your anonymous access application has been created successfully.

You will find the necessary parameters and details of your Amazon Q Business application on the landing page displayed after successful creation, as shown in the following screenshot.

Add data sources

After you create your application, you need to add an index and data sources. To learn more, refer to Index. You will see a pop-up like the following indicating that anonymous access is enabled.

Complete the following steps:

  1. From your application dashboard, choose Add index.
  2. Name your index (for example, Supportdocs-External) and keep the default settings.
  3. Choose Add an index.
  4. After you create the index, you can add data sources to it.

For our example, we use the Amazon Q Business public documentation as our data source by adding the URL https://docs.aws.amazon.com/amazonq/latest/qbusiness-ug/what-is.html. The Web Crawler will automatically index the content from this documentation page, making it searchable through your anonymous Amazon Q Business application.

For more information about Web Crawler configuration options and best practices, refer to Connecting Web Crawler to Amazon Q Business.

  1. From your index dashboard, choose Add data source.
  2. Enter a name for your data source and optional description.
  3. For Source, select Source URLs and enter the URLs of the public websites you want to index.
  4. For Authentication, select No authentication.
  5. Configure the sync run schedule and field mappings.
  6. Choose Add data source.

Alternatively, you can add Amazon S3 as the data source:

  1. From your index dashboard, choose Add data source.
  2. Select Amazon S3 as the source.
  3. Configure your S3 bucket settings (make sure the bucket contains only publicly available content without ACLs).
  4. Complete the data source creation process.

You must only ingest publicly available data sources without access control lists (ACLs).

Generate an anonymous web experience URL

After your data sources are set up, complete the following steps:

  1. From your application dashboard, choose your application.
  2. In the Web experience settings section, choose Share one-time URL.

The anonymous web experience URL can be shared as a single-use link that must be redeemed and accessed within 5 minutes. After it’s activated, the Amazon Q Business session remains active with a configurable timeout ranging from 15–60 minutes. This enables you to experience the web interface and test its functionality before deploying or offering the anonymous application to guest users.

Test your anonymous Amazon Q Business application

To test the application, choose Preview web experience.

The following screenshot shows the welcome page for your anonymous Amazon Q Business application’s web interface. Let’s begin asking Amazon Q Business some questions about the Amazon Q index.

In the first query, we ask “What is Q index? How is it useful for ISVs?” The following screenshot shows the response.

In the following query, we ask “How can Q index enrich generative AI experiences for ISVs?”

In our next query, we ask “How is Q index priced?”

Having successfully tested our anonymous Amazon Q Business application through the console, we will now explore how to create an equivalent application using the AWS CLI.

Create your anonymous application using the AWS CLI

Make sure that your AWS CLI is configured with permissions to create Amazon Q Business resources and IAM roles.

Create an IAM role for Amazon Q Business

First, create an IAM role that Amazon Q Business can assume to access necessary resources:

# Create trust policy document
cat > trust-policy.json << 'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "qbusiness.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Create IAM role
aws iam create-role \
  --role-name QBusinessAnonymousAppRole \
  --assume-role-policy-document file://trust-policy.json

# Attach permissions (the restrictive inline policy for anonymous access is
# added in a later step with put-role-policy; attach a managed policy here
# only if you need one)
aws iam attach-role-policy \
  --role-name QBusinessAnonymousAppRole \
  --policy-arn <POLICY_ARN>

Create an anonymous Amazon Q Business application

Use the following code to create your application:

#bash
aws qbusiness create-application \
--display-name "PublicKnowledgeBase" \
--identity-type ANONYMOUS \
--role-arn "arn:aws:iam::<ACCOUNT_ID>:role/QBusinessAnonymousAppRole" \
--description "This is the QBiz application for anonymous use-case"

Save the applicationId from the response:

#json

{
  "applicationId": "your-application-id",
  "applicationArn": "arn:aws:qbusiness:region:account-id:application/your-application-id"
}

Create a restrictive policy for anonymous access

We strongly recommend using the following restricted policy for the role that will be used to call the chat APIs for anonymous access application environments. This policy limits actions to only the necessary APIs and restricts access to only your specific application.

Create the IAM role with the following policy:

# Create restrictive policy document
cat > anonymous-access-policy.json << 'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "QBusinessConversationPermission",
      "Effect": "Allow",
      "Action": [
        "qbusiness:Chat",
        "qbusiness:ChatSync",
        "qbusiness:PutFeedback"
      ],
      "Resource": "arn:aws:qbusiness:<REGION>:<ACCOUNT_ID>:application/<APPLICATION_ID>"
    }
  ]
}
EOF

# Attach the policy to the role
aws iam put-role-policy \
  --role-name QBusinessAnonymousAppRole \
  --policy-name QBusinessAnonymousAccessPolicy \
  --policy-document file://anonymous-access-policy.json

Create an index

Create an index for your content, then upload documents using the BatchPutDocument API. For step-by-step guidance, see Select Retriever.
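
The following Python (boto3) sketch shows one possible shape of these two calls. It is not a complete ingestion pipeline: the application ID, display name, Region, and PDF file name are placeholders, and it assumes the index must reach ACTIVE status before documents are ingested.

#python
import time
import boto3

qbusiness = boto3.client("qbusiness", region_name="us-east-1")
APPLICATION_ID = "your-application-id"  # placeholder

# 1. Create the index (asynchronous; it must be ACTIVE before ingesting).
index = qbusiness.create_index(
    applicationId=APPLICATION_ID,
    displayName="PublicKnowledgeBase-Index",
)
index_id = index["indexId"]

# Simple polling loop for the sketch; add a timeout in real code.
while qbusiness.get_index(applicationId=APPLICATION_ID, indexId=index_id)["status"] != "ACTIVE":
    time.sleep(30)

# 2. Upload a public PDF directly with BatchPutDocument.
with open("amazonq-user-guide.pdf", "rb") as f:
    result = qbusiness.batch_put_document(
        applicationId=APPLICATION_ID,
        indexId=index_id,
        documents=[
            {
                "id": "amazonq-user-guide",
                "title": "Amazon Q Business User Guide",
                "content": {"blob": f.read()},
                "contentType": "PDF",
            }
        ],
    )

# Any documents that failed validation or ingestion are reported here.
print(result.get("failedDocuments", []))

For larger or recurring ingestion, attaching a data source with a sync schedule (as shown in the console walkthrough) is usually the better fit than direct BatchPutDocument uploads.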

Test your anonymous Amazon Q Business application

To demonstrate the chat functionality using the AWS CLI, we uploaded Amazon Q Business documentation in PDF format to our index and tested the application using the following sample queries.

The following is an example chat interaction using the IAM role credentials. We first ask “What is Amazon Q index?”

# Query 1
#bash
aws qbusiness chat-sync \
  --application-id <APPLICATION_ID> \
  --user-message "What is Amazon Q index?"

The following screenshot shows part of the output from the chat-sync API when executed with our anonymous Amazon Q Business application ID, as shown in the previous command.

Next, we ask “How can Q index enrich generative AI experiences for ISVs?”

# Query 2
#bash
aws qbusiness chat-sync \
  --application-id <APPLICATION_ID> \
  --user-message "How can Q index enrich generative AI experiences for ISVs?"

The following screenshot shows part of the output from the chat-sync API when executed with our anonymous Amazon Q Business application ID.
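
The two CLI queries above are independent turns. If you want the second question to build on the first answer, a follow-up ChatSync call can reuse the conversation identifiers returned by the previous call. The following Python (boto3) sketch assumes this multi-turn pattern behaves the same for anonymous applications; the application ID and Region are placeholders.

#python
import boto3

qbusiness = boto3.client("qbusiness", region_name="us-east-1")
APPLICATION_ID = "your-application-id"  # placeholder

# First turn starts a new conversation.
first = qbusiness.chat_sync(
    applicationId=APPLICATION_ID,
    userMessage="What is Amazon Q index?",
)

# The follow-up turn reuses the conversation and points at the previous answer,
# so the assistant keeps the earlier context.
follow_up = qbusiness.chat_sync(
    applicationId=APPLICATION_ID,
    userMessage="How can Q index enrich generative AI experiences for ISVs?",
    conversationId=first["conversationId"],
    parentMessageId=first["systemMessageId"],
)

print(follow_up["systemMessage"])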

Create a web experience for the anonymous web application

Use the following code to create the web experience:

#bash
aws qbusiness create-web-experience \
  --application-id <APPLICATION_ID> \
  --display-name "PublicKnowledgeBaseExperience" \
  --role-arn "arn:aws:iam::<ACCOUNT_ID>:role/QBusinessAnonymousAppRole" \
  --description "Web interface for my anonymous Q Business application"

To generate an anonymous URL, use the following code:

#bash
aws qbusiness create-anonymous-web-experience-url \
  --application-id <APPLICATION_ID> \
  --web-experience-id <WEB_EXPERIENCE_ID>

You can use the web experience URL generated by the preceding command and embed it into your web applications using an iframe.
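
Because the URL is single-use and short-lived, a server-side page handler could generate it on demand and drop it into an iframe, along the lines of the following Python (boto3) sketch. The application and web experience IDs are placeholders, and the anonymousUrl response field name is an assumption to verify against the current API reference.

#python
import boto3

qbusiness = boto3.client("qbusiness", region_name="us-east-1")

url_response = qbusiness.create_anonymous_web_experience_url(
    applicationId="your-application-id",        # placeholder
    webExperienceId="your-web-experience-id",   # placeholder
)
# Response field name assumed; check the CreateAnonymousWebExperienceUrl reference.
one_time_url = url_response["anonymousUrl"]

# Render the one-time URL inside an iframe on your public page. Generate it
# server-side on page load rather than hard-coding it, because the link must
# be redeemed shortly after creation.
iframe_snippet = (
    f'<iframe src="{one_time_url}" width="100%" height="700" '
    'title="Q Business assistant"></iframe>'
)
print(iframe_snippet)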

Considerations

Consider the following when using anonymous access in Amazon Q Business:

  • The following are the only chat APIs that support anonymous access application environments:
    • Chat
    • ChatSync
    • PutFeedback
  • You should only ingest publicly available data sources without ACLs. Examples of public data sources include:
    • Data from the Amazon Q Business Web Crawler
    • Amazon S3 data without ACLs
  • Amazon Q Business applications with anonymous access are billed on a consumption-based pricing model.
  • Chat history is not available for anonymous application environments.
  • Anonymous users and authenticated users are not supported on the same application environments.
  • Plugins are not supported for anonymous application environments.
  • Amazon QuickSight integration is not supported for anonymous application environments.

  • Amazon Q Apps are not supported for anonymous application environments.
  • Attachments are not supported for anonymous application environments.
  • Admin controls and guardrails are read-only for anonymous application environments, except for blocked words.
  • Topic rules using users and groups are not supported for anonymous application environments.

The remaining Amazon Q Business functionality and features remain unchanged.

Clean up

When you are done with the solution, clean up the resources you created.

Conclusion

In this post, we introduced Amazon Q Business anonymous user access mode and demonstrated how to create, configure, and test an anonymous Amazon Q Business application using both the console and AWS CLI. This exciting feature extends enterprise-grade Amazon Q Business generative AI capabilities to your anonymous audiences without requiring authentication, opening up new possibilities for enhancing customer experiences on public websites, documentation portals, and self-service knowledge bases. This feature is available through a consumption-based pricing model that charges based on actual Chat and ChatSync API usage; index storage costs still apply.

By following the implementation steps outlined in this post, you can quickly set up an Amazon Q Business application tailored for your external users, secured with appropriate IAM policies, and ready to embed in your end-user-facing applications.

To learn more about this anonymous access feature, see the Amazon Q Business User Guide. For detailed guidance on embedding Amazon Q Business in your web applications, see Add a generative AI experience to your website or web application with Amazon Q embedded. If you’re interested in building completely custom UI experiences with the Amazon Q Business API, check out Customizing an Amazon Q Business web experience.


About the authors

Vishnu Elangovan is a Worldwide Generative AI Solution Architect with over seven years of experience in Applied AI/ML. He holds a master’s degree in Data Science and specializes in building scalable artificial intelligence solutions. He loves building and tinkering with scalable AI/ML solutions and considers himself a lifelong learner. Outside his professional pursuits, he enjoys traveling, participating in sports, and exploring new problems to solve.

Jean-Pierre Dodel is a Principal Product Manager for Amazon Q Business, responsible for delivering key strategic product capabilities including structured data support in Q Business, RAG, and overall product accuracy optimizations. He brings extensive AI/ML and enterprise search experience to the team, with over 7 years of product leadership at AWS.

Read More

FloQast builds an AI-powered accounting transformation solution with Anthropic’s Claude 3 on Amazon Bedrock

FloQast builds an AI-powered accounting transformation solution with Anthropic’s Claude 3 on Amazon Bedrock

With the advent of generative AI solutions, a paradigm shift is underway across industries, driven by organizations embracing foundation models (FMs) to unlock unprecedented opportunities. Amazon Bedrock has emerged as the preferred choice for numerous customers seeking to innovate and launch generative AI applications, leading to an exponential surge in demand for model inference capabilities. Amazon Bedrock customers aim to scale their worldwide applications to accommodate a variety of use cases. One such customer is FloQast.

Since its founding in 2013, FloQast has had the privilege of working with over 2,800 organizations across various industries and regions, helping them streamline their accounting operations. From automated reconciliations to tools that manage the entire close process, FloQast has seen firsthand how organizations, big and small, struggle to keep pace with their accounting needs as they scale. FloQast’s software (created by accountants, for accountants) brings AI and automation innovation into everyday accounting workflows. You can reconcile bank statements against internal ledgers, get real-time visibility into financial operations, and much more.

In this post, we share how FloQast built an AI-powered accounting transformation solution using Anthropic’s Claude 3 on Amazon Bedrock.

Accounting operations: Complexity amplified at scale

At the heart of every successful organization—whether small startups or large corporations—lies a well-oiled financial and accounting operation. Accounting is more than just a back-office function; it’s the backbone of every business. From processing payroll to generating financial statements, accounting is a ubiquitous force that touches every facet of business operations.

Consider this: when you sign in to a software system, a log is recorded to make sure there’s an accurate record of activity—essential for accountability and security. Similarly, when an incident occurs in IT, the responding team must provide a precise, documented history for future reference and troubleshooting. The same principle applies to accounting: when a financial event takes place, whether it’s receiving a bill from a vendor or signing a contract with a customer, it must be logged. These logs, known in accounting as journal entries, provide a clear financial record.

Now imagine this process scaled across hundreds, or even thousands, of transactions happening simultaneously in a large organization. The complexity of accounting increases exponentially with growth and diversification. As businesses expand, they encounter a vast array of transactions that require meticulous documentation, categorization, and reconciliation. At scale, upholding the accuracy of each financial event and maintaining compliance becomes a monumental challenge. With advancement in AI technology, the time is right to address such complexities with large language models (LLMs).

Amazon Bedrock has helped democratize access to LLMs, which have been challenging to host and manage. Amazon Bedrock offers a choice of industry-leading FMs along with a broad set of capabilities to build generative AI applications, simplifying development with security, privacy, and responsible AI. Because Amazon Bedrock is serverless, you don’t have to manage infrastructure to securely integrate and deploy generative AI capabilities into your application, handle spiky traffic patterns, and enable new features like cross-Region inference, which helps provide scalability and reliability across AWS Regions.

In this post, we highlight how the AI-powered accounting transformation platform uses Amazon Bedrock. FloQast addresses the most complex and custom aspects of financial processes (the final 20%)—those intricate, bespoke aspects of accounting that are highly specific to each organization and often require manual intervention. FloQast’s AI-powered solution uses advanced machine learning (ML) and natural language commands, enabling accounting teams to automate reconciliation with high accuracy and minimal technical setup.

FloQast AI Transaction Matching

Seamlessly integrated with the existing FloQast suite, the AI Transaction Matching product streamlines and automates your matching and reconciliation processes, delivering unparalleled precision and efficiency.

It offers the following key features:

  • AI-driven matching – You can automatically match transactions across multiple data sources with high accuracy
  • Flexible rule creation – You can use natural language to create custom matching rules tailored to your unique processes
  • Exception handling – You can quickly identify and manage unmatched transactions or discrepancies
  • Audit trail – You can maintain a comprehensive audit trail of matching activities for compliance and transparency
  • High-volume processing – You can efficiently handle large volumes of transactions, suitable for businesses of all sizes
  • Multi-source integration – You can seamlessly integrate and match transactions from various financial systems and data sources

Let’s review how it works:

  1. Transaction data is gathered from bank statements and enterprise resource planning (ERP) systems.
  2. An accountant will select specific transactions in both systems and choose Generate AI Rule.

The following screenshot shows the general ledger system on the left and the bank statement on the right.

general ledger system on the left and the bank statement on the right

  3. Based on the selected transactions, text is generated (see the following screenshot).

generated text from selected transactions

  4. At this point, the accountant has the option to either accept the generated text or edit the text.
  5. The accountant chooses Save and apply to generate a rule in coded format that is further used to find additional matches, helping the accountant automate transaction reconciliation.

FloQast AI Transaction Matching offers the following benefits:

  • Unified environment – It seamlessly integrates with your existing FloQast products for a single source of truth
  • AI-powered automation – It uses advanced ML to handle complex matching scenarios
  • User-friendly interface – It’s designed by accountants for how accountants work, providing ease of use and adoption
  • Real-time insights – You can gain immediate visibility into your transaction data across systems
  • Scalability – It can adapt as your transaction volumes grow and business evolves

FloQast AI Annotations

FloQast’s new AI Annotations feature empowers teams to seamlessly and automatically annotate and review sample documents, streamlining compliance and audit processes through advanced automation and ML.

It offers the following key features:

  • Automated document annotation – You can upload sample documents to automatically annotate key data points with attributes specified in your testing criteria, saving time on manual reviews
  • AI-powered analysis – You can use advanced AI and natural language models to analyze document text, highlighting relevant information according to predefined controls and testing attributes
  • Bulk annotation for efficiency – You can select multiple documents or testing controls for bulk annotation, reducing time spent on repetitive document processing
  • Structured storage and audit trail – You can maintain a structured record of each annotated document, capturing all extracted data, annotation responses, and status updates for streamlined compliance and audit trails
  • Intuitive error handling – Smart checks identify and notify users of processing errors, making sure each annotation is complete and accurate.

The following diagram illustrates the architecture using AWS services.

architecture diagram

The workflow starts with user authentication and authorization (steps 1-3). After those steps are complete, the workflow consists of the following steps:

  1. Users upload supporting documents that provide audit evidence into a secure Amazon Simple Storage Service (Amazon S3) bucket.
  2. The input documents are encrypted by Amazon S3 when consumed by Amazon Textract.
  3. Amazon Textract (encrypts data in transit and at rest) extracts the data from the documents.
  4. When complete, raw data is stored into an encrypted S3 bucket.
  5. A data sanitization workflow kicks off using AWS Step Functions, consisting of AWS Lambda functions.
  6. Sanitized extracted data is written into an encrypted MongoDB.
  7. Amazon Textract is polled to update the job status, which is written into MongoDB.
  8. The user starts the annotation process.
  9. Application logic consumes data from MongoDB and provides it to Anthropic’s Claude 3.5 Sonnet on Amazon Bedrock.
  10. The LLM runs the audit rules (shown in the following screenshot) against the extracted data and generates an annotation for each audit rule, including pass/fail details of the audit rule.
  11. Annotation results are filtered using Amazon Bedrock Guardrails to enhance content safety and privacy in generative AI applications (a minimal code sketch of steps 9-11 follows the annotation results screenshot).

Annotation results
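
Steps 9 through 11 can be pictured as a single guarded model call. The following Python (boto3) sketch is not FloQast's implementation; it only illustrates the pattern of sending extracted data plus an audit rule to Anthropic's Claude 3.5 Sonnet through the Amazon Bedrock Converse API with a guardrail attached. The guardrail ID, prompt, and sample data are assumptions.

#python
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Hypothetical extracted data and audit rule; in the real workflow these come
# from the sanitized Amazon Textract output stored in MongoDB.
extracted_fields = {"invoice_total": "1,250.00", "invoice_date": "2024-03-14"}
audit_rule = "The invoice total must be present and greater than zero."

prompt = (
    "You are assisting with an audit evidence review.\n"
    f"Extracted document data: {json.dumps(extracted_fields)}\n"
    f"Audit rule: {audit_rule}\n"
    "Return PASS or FAIL with a one-sentence justification."
)

response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
    messages=[{"role": "user", "content": [{"text": prompt}]}],
    # Temperature 0 keeps the pass/fail annotation as deterministic as possible.
    inferenceConfig={"maxTokens": 300, "temperature": 0},
    # Guardrail ID and version are placeholders; the guardrail filters the
    # annotation before it is returned to the application (step 11).
    guardrailConfig={"guardrailIdentifier": "your-guardrail-id", "guardrailVersion": "1"},
)

print(response["output"]["message"]["content"][0]["text"])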

FloQast AI Annotations offers the following benefits:

  • Seamless integration with FloQast – This feature is integrated into the FloQast platform, providing access to annotation tools alongside your existing compliance and financial workflows
  • Enhanced efficiency with AI-driven workflows – FloQast’s annotation feature uses AI to reduce manual workload, helping teams focus on high-value tasks rather than repetitive document review
  • Scalable solution for high-volume document processing – Designed to handle substantial document volumes, FloQast AI Annotations adapts to the demands of growing teams and complex audit requirements
  • Real-time document processing insights – You can stay informed with live tracking of each annotation job, with built-in monitoring for smooth and efficient workflows

FloQast’s AI technology choices

FloQast selected Amazon Bedrock because of its unmatched versatility, feature sets, and the robust suite of scalable AI models from top-tier providers like Anthropic. Anthropic’s Claude 3.5 Sonnet provides the advanced reasoning and contextual understanding necessary for handling complex financial workflows. However, a key feature of Amazon Bedrock—Amazon Bedrock Agents—is a game changer for FloQast. Amazon Bedrock Agents enables generative AI applications to run multi-step tasks across company systems and data sources. To learn more, see How Amazon Bedrock Agents works.

Amazon Bedrock Agents provides an intelligent orchestration layer, allowing FloQast to automate accounting workflows efficiently. It has added significant value in the following areas:

  • Instruction handling and task automation – Amazon Bedrock Agents enables FloQast to submit natural language instructions that the AI interprets and executes autonomously.
  • Session and memory management – sessionAttributes and promptSessionAttributes are passed between turns of a session related to a single workflow, but most user requests can be completed within a single session (see the sketch after this list).
  • Code generation that demonstrates business understanding – Amazon Bedrock Agents offers valuable features through its secure code interpretation capabilities and flexible configuration options. Amazon Bedrock agents can be tailored to the correct persona and business context, while operating within a protected test environment. This allows accountants to submit natural language instructions and input data, which is then processed in a controlled manner that aligns with security best practices. When FloQast integrates with Amazon Bedrock Agents, accountants can submit custom requests, and the agent can generate and test code within an isolated secure environment, with appropriate technical oversight and guardrails in place. The combination of Amazon Bedrock Agents’ secure code interpretation features and FloQast’s deep knowledge of accounting practices enables financial teams to operate efficiently while maintaining proper controls.
  • Data integration and output handling – By using Amazon Bedrock Agents, information is passed from upstream integrated financial systems, allowing FloQast to automate data retrieval and transformation tasks.
  • Multi-step task orchestration – Amazon Bedrock agents are designed to handle multi-step tasks by orchestrating complex workflows. For example, after FloQast retrieves data from a financial system, that data is passed to the agent, which runs the necessary calculations, generates the output code, and presents the results for user approval—all in one automated process. This orchestration is especially useful in accounting, where multiple steps must be completed in the correct sequence to maintain compliance and accuracy.

The flexibility of Amazon Bedrock Agents to manage these tasks and integrate them seamlessly into existing workflows enables FloQast to achieve scale, reduce complexity, and implement automation required to cater to the evolving needs of FloQast’s customers.

Anthropic’s Claude 3.5 Sonnet on Amazon Bedrock provides the best results in FloQast’s evaluation of other models for the use case. FloQast doesn’t need to fine-tune the model as a model consumer, so they use Retrieval Augmented Generation (RAG) with few-shot classification on data collected on the user’s behalf, removing the overhead of fine-tuning an LLM. For this use case, this design mechanism produces a higher level of accuracy, a better security model that is understood by FloQast’s customers, and ease of use as a developer.

Conclusion

FloQast’s AI-powered accounting transformation solution has had a substantial impact on its users. By automating routine, time-consuming accounting processes, the solution has saved accounting teams countless hours, enabling them to shift away from manual spreadsheet work and focus on higher-value activities, such as reviewing financial outcomes, assessing business health, and making data-driven decisions. This solution has removed the tedium of data reconciliation, delivering measurable improvements, including a 38% reduction in reconciliation time, a 23% decrease in audit process duration and discrepancies, and a 44% improvement in workload management.

Learn more about the FloQast platform at FloQast.com. Contact evelyn.cantu@floqast.com for more information about the FloQast and AWS partnership.


About the authors

Kartik Bhatnagar is a data security-focused Solutions Architect at AWS, based in San Francisco, CA. He has experience working with startups and enterprises across the tech, fintech, healthcare, and media & entertainment industries, in roles including DevOps Engineer and Systems Architect. In his current role, he partners with AWS customers to design and implement scalable, secure, and cost-effective solutions on the AWS platform. Outside of work, he enjoys playing cricket and tennis, food hopping, and hiking.

Aidan Anderson is a dynamic technology leader with over a decade of experience in software engineering, security, and artificial intelligence. Currently serving as the Director of AI Engineering at FloQast, he is at the forefront of integrating AI and automation into accounting workflows, enhancing operational efficiency and accuracy for finance teams. Aidan’s portfolio spans leadership across security, product development, and platform engineering, where he’s consistently driven innovation, built high-performing teams, and delivered impactful solutions in fast-paced startup environments.

Read More

Insights in implementing production-ready solutions with generative AI

Insights in implementing production-ready solutions with generative AI

As generative AI revolutionizes industries, organizations are eager to harness its potential. However, the journey from production-ready solutions to full-scale implementation can present distinct operational and technical considerations. This post explores key insights and lessons learned from AWS customers in Europe, Middle East, and Africa (EMEA) who have successfully navigated this transition, providing a roadmap for others looking to follow suit.

Building a solid business case: Operational excellence drives customer experience

The foundation of successful generative AI implementations is a business case with a clear value proposition that aligns with organizational goals, such as improving efficiency, reducing costs, or growing revenue. Typical examples include enhancing customer experience, optimizing operations, maintaining compliance with legal standards, improving service levels, or increasing employee productivity.

Companies in EMEA have used AWS services to transform their operations and improve customer experience using generative AI, with their stories illustrating how a strong business case can lead to tangible results across various industry verticals.

Il Sole 24 Ore, Italy’s leading multimedia publishing group, partnered with AWS Professional Services to boost the efficiency of a historic service, L’Esperto Risponde, where users can ask fiscal questions and receive responses from a team of experts. Il Sole 24 Ore leveraged its vast internal knowledge with a Retrieval Augmented Generation (RAG) solution powered by AWS. This solution maintained over 90% accuracy in responses and reduced the time spent by experts in searching and processing information, empowering them to focus on more strategic tasks. Additionally, the company is continuously incorporating end-user feedback to keep the service tailored to customer needs. For more information, you can watch the AWS Summit Milan 2024 presentation.

Booking.com, one of the world’s leading digital travel services, is using AWS to power emerging generative AI technology at scale, creating personalized customer experiences while achieving greater scalability and efficiency in its operations. Booking.com uses Amazon SageMaker AI to provide highly personalized customer accommodation recommendations.

“One of the things we really like about AWS’s approach to generative AI is choice. We love open source, and we feel it will play an important role in the evolution of generative AI,”

– Rob Francis, Chief Technology Officer of Booking.com.

With AWS support, Booking.com is enhancing its generative AI capabilities and positioning itself for future growth in the travel and hospitality industry. For more details, you can watch Booking.com’s keynote at AWS re:Invent 2023, their presentation on generative AI from idea to production on AWS at AWS London Summit 2024, and read the case study on how Booking.com helps customers experience a new world of travel using AWS and generative AI.

ENGIE is a global power and utilities company, with 25 business units operating worldwide. ENGIE’s One Data team partnered with AWS Professional Services to develop an AI-powered chatbot that enables natural language conversation search within ENGIE’s Common Data Hub data lake, over 3 petabytes of data. The solution complements traditional keyword-based search by allowing users to discover datasets through simple conversational queries, making it easier to find relevant data among tens of thousands of datasets. This dual approach to data discovery has accelerated the development of data-driven products and enhanced data assets sharing across the organization.

These examples demonstrate how companies across various sectors have successfully used AWS generative AI capabilities to address specific business challenges.

Getting ahead of implementation challenges

Though essential, a solid business case is only the first step. As organizations move their generative AI initiatives forward, they often encounter new challenges related to making the solution scalable, reliable, and compliant. Let’s explore what it takes to successfully advance generative AI projects from the preproduction phase, making sure that the original value of the business case is then fully realized in real-world application.

Achieving scale, reliability, and compliance

Factors to consider in transitioning to full-scale production include scalability, data governance, privacy, consistent and responsible AI behaviors, security, integration with existing systems, monitoring, end-user feedback collection, and business impact measurement. As organizations in EMEA have discovered, success in this transition requires a holistic approach that goes beyond mere technological considerations. With a multitude of customer learnings, paired with AWS expertise, we can identify key strategies for implementation.

Production-ready infrastructure, applications, and processes in the cloud

With the increase in scope, number, and complexity of generative AI applications, organizations have an increased need to reduce undifferentiated effort and set a high-quality bar for production-ready applications. Standard development best practices and effective cloud operating models, like AWS Well-Architected and the AWS Cloud Adoption Framework for Artificial Intelligence, Machine Learning, and Generative AI, are key to enabling teams to spend most of their time on tasks with high business value, rather than on recurrent, manual operations. Such an approach should include established industry standards such as infrastructure as code (IaC), continuous integration and continuous delivery (CI/CD), monitoring and observability, logging and auditing, and solutions for scalability and high availability.

For instance, Iveco Group, a global automotive leader active in the Commercial and Specialty Vehicles and Powertrain segments, adopted a structured cloud operating model, leveraging IaC via Terraform for consistent and repeatable deployments across environments. A DevOps environment, via CI/CD pipelines, allows for frequent updates and testing of generative AI models and applications, allowing the developers to focus on improving and expanding the solutions rather than spending time on manual operations. This also helps make sure that generative AI solutions are optimized for performance, security, and cost-efficiency. This integrated approach not only accelerates the path from pre-production to full-scale implementation, but also enables them to adapt quickly to new generative AI advancements, manage complex dependencies, and scale resources as needed, ultimately driving innovation and competitive advantage in the rapidly evolving field of generative AI. See the re:Invent 2024 session for more information.

Accor Group, a major hospitality company that developed a generative AI-powered booking application, showcased how, even when working with new technologies like generative AI, fundamental software development principles remain a must-have. They implemented a three-layered comprehensive testing strategy. First, unit tests verify that the prompts consistently generate acceptable responses from the chatbot, even upon prompt modifications. Second, integration tests verify the end-to-end flow of the REST API and the chatbot’s interaction with the large language model (LLM). The final step is functional testing with predefined scenarios for manual testing and validation. They also implemented feedback systems, essential for the improvement flywheel of customer-facing applications, in the form of in-app surveys, instant feedback options (thumbs-up or thumbs-down), and a dedicated feedback portal for detailed user input. Finally, to measure the effectiveness of the solution and its business impact, they established a system to track room bookings made through the generative AI application.
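
A prompt-level unit test of the kind described above can be as simple as a parametrized pytest that calls the model deterministically and applies cheap structural assertions. The following sketch is illustrative rather than Accor's actual suite; the model ID, questions, and expected keywords are assumptions.

#python
# test_chatbot_prompts.py
import boto3
import pytest

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"  # placeholder model choice

def ask(prompt: str) -> str:
    response = bedrock.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"temperature": 0, "maxTokens": 300},
    )
    return response["output"]["message"]["content"][0]["text"]

@pytest.mark.parametrize(
    "question, must_mention",
    [
        ("Can I book a room with a sea view?", "room"),
        ("What time is check-out?", "check-out"),
    ],
)
def test_prompt_returns_acceptable_answer(question, must_mention):
    answer = ask(question)
    # Cheap structural checks that survive prompt refactoring; a stricter
    # setup could add an LLM-as-a-judge scoring step.
    assert answer.strip(), "empty response"
    assert must_mention.lower() in answer.lower()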

Danske Bank, a leading Nordic bank, transitioned from a container-based on-premises setup to Amazon Elastic Container Service (Amazon ECS) with AWS Fargate. This allowed them to quickly move their API-based backend services to a cloud-native environment. This decoupled architecture, designed to be provider-agnostic, set them up for flexibility in leveraging different cloud-based generative AI tools and services as needed. The integration with Amazon Bedrock was seamless and impactful, as it provided faster access to multiple foundational models from more providers. This allowed the customer to rapidly experiment, iterate, and evaluate different models for their specific use cases. This case demonstrates how the combination of generative AI services and a cloud-native, API-driven architecture allowed this customer to iterate faster, and keep the focus on business value rather than integration of technologies.

The Schaeffler Group has been driving forward groundbreaking inventions and developments in the field of motion technology for over 75 years. The company developed a comprehensive generative AI framework, which establishes enterprise-grade governance and security guardrails for generative AI use case roll-out at scale with infrastructure blueprints. A generative AI inference gateway is integrated within the solution, offering centralized access to numerous foundational models while tracking usage and costs. Going forward, Schaeffler envisions further integrating these capabilities into its wider generative AI and data landscape, including more fine-grained access controls to data assets and the adoption of generative AI agents.

These examples highlight a key theme for organizations across industries: Success in generative AI goes beyond developing standalone applications. A thorough cloud-based operating model is crucial for enterprises looking to keep pace with the rapidly evolving technology, with minimal operational overhead.

Security, compliance, and responsible AI

As an organization’s generative AI applications expand to handle increasingly sensitive data, security, compliance, and governance must be prioritized accordingly. This includes implementing authentication and access control, encrypting data at rest and in transit, monitoring and auditing of system access and usage, maintaining compliance with regulations (such as GDPR and the recent EU AI Act), as well as establishing clear policies for data handling and model usage.

Here are some examples of customers who have successfully navigated these critical requirements.

Il Sole 24 Ore implemented a code of self-discipline for ethical AI application. It prescribes retention of high-quality standards and the centrality of trustworthy data. The principles include regulatory compliance, maintaining data provenance and reliability, incorporating human oversight via human-in-the-loop, inclusivity and diversity in data usage and algorithm adoption, responsibility and accountability, and digital education and communicative transparency. By adhering to these principles, Il Sole 24 Ore Group demonstrates its commitment to leveraging innovative technologies like generative AI in a safe and responsible manner, particularly in sensitive areas such as providing expert legal and tax advice. This approach allows them to harness the benefits of AI while mitigating potential risks and maintaining the trust of their users.

For Accor Group, the implementation of their next-generation booking application required direct customer interaction, emphasizing the critical need for responsible AI practices. To make sure the chatbot would deliver effective customer service while operating within strict ethical boundaries, they established specific safeguards to minimize misuse:

  • Blocking responses to discriminatory queries
  • Withholding responses to illegal activities
  • Implementing guardrails to keep conversations within appropriate business context
  • Installing protections against role-switching or tone-changing attempts during conversations
  • Implementing robust technical defenses against prompt injections

Conclusion

The transition from preproduction to full-scale implementation for generative AI applications presents new challenges and opportunities. It requires identifying a solid business case, maintaining high standards for infrastructure and processes, strategic thinking in choosing an efficient cloud operating model, robust data governance, security, compliance, ethical AI practices, and more.

Organizations across EMEA have demonstrated how using AWS services can help overcome hurdles and accelerate the advantages of generative AI by embracing a holistic approach. By learning from these use cases, more enterprises can achieve successful deployments of generative AI solutions, and benefit from this transformative technology in a reliable, productive, and responsible manner.

Explore more generative AI use cases and customer success stories and discover how to accelerate your AI adoption on the cloud with specialized training and the support of AWS Professional Services and the Generative AI Innovation Center.


About the Authors

Dr. Giorgio Pessot is a Machine Learning Engineer at Amazon Web Services Professional Services. With a background in computational physics, he specializes in architecting enterprise-grade AI systems at the confluence of mathematical theory, DevOps, and cloud technologies, where technology and organizational processes converge to achieve business objectives. When he’s not whipping up cloud solutions, you’ll find Giorgio engineering culinary creations in his kitchen.

Daniel Zagyva is a Senior ML Engineer at AWS Professional Services. He specializes in developing scalable, production-grade machine learning solutions for AWS customers. His experience extends across different areas, including natural language processing, generative AI and machine learning operations.

Nicolò Cosimo Albanese is a Data Scientist and Machine Learning Engineer at Amazon Web Services Professional Services. With a Master of Science in Engineering and postgraduate degrees in Machine Learning and Biostatistics, he specializes in developing AI/ML solutions that drive business value for enterprise customers. His expertise lies at the intersection of statistical modeling, cloud technologies, and scalable machine learning systems.

Subhro Bose is a Data Architect in Emergent Technologies and Intelligence Platform in Amazon. He loves working on ways for emergent technologies such as AI/ML, big data, quantum, and more to help businesses across different industry verticals succeed within their innovation journey.

Diar Sabri is a Machine Learning Engineer at AWS who helps organizations transform their business through innovative AI solutions. With experience across multiple industries, he excels at bridging the gap between strategic vision and practical technology implementation, enabling customers to achieve meaningful business outcomes.

Aamna Najmi is a GenAI and Data Specialist at AWS. She assists customers across industries and regions in operationalizing and governing their generative AI systems at scale, ensuring they meet the highest standards of performance, safety, and ethical considerations, bringing a unique perspective of modern data strategies to complement the field of AI. In her spare time, she pursues her passion of experimenting with food and discovering new places.

Anwar Rizal is a Senior Machine Learning consultant for AWS Professional Services based in Paris. He works with AWS customers to develop data and AI solutions to sustainably grow their business.

Amer Elhabbash is a Senior Data & AI Delivery Consultant with AWS Professional Services. He has over 25 years of international IT experience spanning multiple fields and domains: telecommunications, software engineering, databases, data analytics, and AI. He helps AWS customers migrate their legacy data systems and build innovative cloud-native, data-driven solutions.

Hassen Riahi is a Delivery Practice Manager Data & AI at AWS Professional Services. He holds a PhD in Mathematics & Computer Science on large-scale data management. He collaborates with AWS customers to build data-driven solutions.

Dr. Marco Guerriero leads Data and GenAI at AWS Professional Services for France and Europe South, holding a Ph.D. in Electrical and Computer Engineering from the University of Connecticut. His expertise spans machine learning, statistical inference, and mathematical optimization, with experience at organizations like NATO, GE, and ABB across defense, manufacturing, energy, and industrial automation sectors. With over 60 publications and five US patents to his name, Dr. Guerriero focuses on leveraging emerging technologies like GenAI and Quantum computing to drive business innovation across industries.

Sri Elaprolu is Director of the AWS Generative AI Innovation Center, where he leads a global team implementing cutting-edge AI solutions for enterprise and government organizations. During his 12-year tenure at AWS, he has led ML science teams partnering with organizations like the NFL, Cerner, and NASA. Prior to AWS, he spent 14 years at Northrop Grumman in product development and software engineering leadership roles. Sri holds a Master’s in Engineering Science and an MBA.

Dragica Boca is Managing Director of Professional Services EMEA at Amazon Web Services (AWS), leading enterprise cloud migration and generative AI transformation initiatives. With 30 years of technology consulting experience across Microsoft and IBM Global Business Services, she specializes in implementing production-ready AI solutions for Public Sector and Financial Services organizations. Dragica currently oversees large-scale GenAI implementations across EMEA, helping enterprises navigate the complexities of responsible AI deployment, scalable architecture, and sustainable adoption patterns.

Read More

Responsible AI in action: How Data Reply red teaming supports generative AI safety on AWS

Responsible AI in action: How Data Reply red teaming supports generative AI safety on AWS

Generative AI is rapidly reshaping industries worldwide, empowering businesses to deliver exceptional customer experiences, streamline processes, and push innovation at an unprecedented scale. However, amidst the excitement, critical questions around the responsible use and implementation of such powerful technology have started to emerge.

Although responsible AI has been a key focus for the industry over the past decade, the increasing complexity of generative AI models brings unique challenges. Risks such as hallucinations, controllability, intellectual property breaches, and unintended harmful behaviors are real concerns that must be addressed proactively.

To harness the full potential of generative AI while reducing these risks, it’s essential to adopt mitigation techniques and controls as an integral part of the build process. Red teaming, an adversarial exploit simulation of a system used to identify vulnerabilities that might be exploited by a bad actor, is a crucial component of this effort.

At Data Reply and AWS, we are committed to helping organizations embrace the transformative opportunities generative AI presents, while fostering the safe, responsible, and trustworthy development of AI systems.

In this post, we explore how AWS services can be seamlessly integrated with open source tools to help establish a robust red teaming mechanism within your organization. Specifically, we discuss Data Reply’s red teaming solution, a comprehensive blueprint to enhance AI safety and responsible AI practices.

Understanding generative AI’s security challenges

Generative AI systems, though transformative, introduce unique security challenges that require specialized approaches to address them. These challenges manifest in two key ways: through inherent model vulnerabilities and adversarial threats.

The inherent vulnerabilities of these models include their potential to produce hallucinated responses (generating plausible but false information), their risk of generating inappropriate or harmful content, and their potential for unintended disclosure of sensitive training data.

These potential vulnerabilities could be exploited by adversaries through various threat vectors. Bad actors might employ techniques such as prompt injection to trick models into bypassing safety controls, intentionally altering training data to compromise model behavior, or systematically probing models to extract sensitive information embedded in their training data. For both types of vulnerabilities, red teaming is a useful mechanism to mitigate those challenges because it can help identify and measure inherent vulnerabilities through systematic testing, while also simulating real-world adversarial exploits to uncover potential exploitation paths.

What is red teaming?

Red teaming is a methodology used to test and evaluate systems by simulating real-world adversarial conditions. In the context of generative AI, it involves rigorously stress-testing models to identify weaknesses, evaluate resilience, and mitigate risks. This practice helps develop AI systems that are functional, safe, and trustworthy. By adopting red teaming as part of the AI development lifecycle, organizations can anticipate threats, implement robust safeguards, and promote trust in their AI solutions.

Red teaming is critical for uncovering vulnerabilities before they are exploited. Data Reply has partnered with AWS to offer support and best practices to help integrate responsible AI and red teaming into your workflows, helping you build secure AI models. This unlocks the following benefits:

  • Mitigating unexpected risks – Generative AI systems can inadvertently produce harmful outputs, such as biased content or factually inaccurate information. With red teaming, Data Reply helps organizations test models for these weaknesses and identify vulnerabilities to adversarial exploitation, such as prompt injections or data poisoning.
  • Compliance with AI regulation – As global regulations around AI continue to evolve, red teaming can help organizations by setting up mechanisms to systematically test their applications and make them more resilient, or serve as a tool to adhere to transparency and accountability requirements. Additionally, it maintains detailed audit trails and documentation of testing activities, which are critical artifacts that can be used as evidence for demonstrating compliance with standards and responding to regulatory inquiries.
  • Reducing data leakage and malicious use – Although generative AI has the potential to be a force for good, models might also be exploited by adversaries looking to extract sensitive information or perform harmful actions. For instance, adversaries might craft prompts to extract private data from training sets or generate phishing emails and malicious code. Red teaming simulates such adversarial scenarios to identify vulnerabilities, enabling safeguards like prompt filtering, access controls, and output moderation.

The following chart outlines some of the common challenges in generative AI systems where red teaming can serve as a mitigation strategy.

Risk Categories for Generative AI

Before diving into specific threats, it’s important to acknowledge the value of having a systematic approach to AI security risk assessment for organizations deploying AI solutions. As an example, the OWASP Top 10 for LLMs can serve as a comprehensive framework for identifying and addressing critical AI vulnerabilities. This industry-standard framework categorizes key threats, including prompt injection, where malicious inputs manipulate model outputs; training data poisoning, which can compromise model integrity; and unauthorized disclosure of sensitive information embedded in model responses. It also addresses emerging risks such as insecure output handling and denial of service (DOS) that could disrupt AI operations. By using such frameworks alongside practical security testing approaches like red teaming exercises, organizations can implement targeted controls and monitoring to make sure their AI models remain secure, resilient, and align with regulatory requirements and responsible AI principles.

How Data Reply uses AWS services for responsible AI

Fairness is an essential component of responsible AI and, as such, part of the AWS core dimensions of responsible AI. To address potential fairness concerns, it can be helpful to evaluate disparities and imbalances in training data or outcomes. Amazon SageMaker Clarify helps identify potential biases during data preparation without requiring code. For example, you can specify input features such as gender or age, and SageMaker Clarify will run an analysis job to detect imbalances in those features. It generates a detailed visual report with metrics and measurements of potential bias, helping organizations understand and address imbalances.

During red teaming, SageMaker Clarify plays a key role by analyzing whether the model’s predictions and outputs treat all demographic groups equitably. If imbalances are identified, tools like Amazon SageMaker Data Wrangler can rebalance datasets using methods such as random undersampling, random oversampling, or Synthetic Minority Oversampling Technique (SMOTE). This supports the model’s fair and inclusive operation, even under adversarial testing conditions.
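
SageMaker Data Wrangler exposes these rebalancing transforms without code; outside Data Wrangler, the same idea can be sketched with the open source imbalanced-learn library, as in the following illustrative example on synthetic data.

#python
from collections import Counter

from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

# Synthetic stand-in for an imbalanced training set (95% / 5% classes).
X, y = make_classification(
    n_samples=2000, n_features=10, weights=[0.95, 0.05], random_state=42
)
print("before:", Counter(y))

# SMOTE synthesizes new minority-class samples by interpolating between
# existing minority-class neighbors, the same technique Data Wrangler exposes.
X_resampled, y_resampled = SMOTE(random_state=42).fit_resample(X, y)
print("after:", Counter(y_resampled))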

Veracity and robustness represent another critical dimension for responsible AI deployments. Tools like Amazon Bedrock provide comprehensive evaluation capabilities that enable organizations to assess model security and robustness through automated evaluation. These include specialized tasks such as question-answering assessments with adversarial inputs designed to probe model limitations. For instance, Amazon Bedrock can help you test model behavior across edge case scenarios by analyzing responses to carefully crafted inputs—from ambiguous queries to potentially misleading prompts—to evaluate if the models maintain reliability and accuracy even under challenging conditions.

Privacy and security go hand in hand when implementing responsible AI. Security at Amazon is “job zero” for all employees. Our strong security culture is reinforced from the top down with deep executive engagement and commitment, and from the bottom up with training, mentoring, and strong “see something, say something” as well as “when in doubt, escalate” and “no blame” principles. As an example of this commitment, Amazon Bedrock Guardrails provide organizations with a tool to incorporate robust content filtering mechanisms and protective measures against sensitive information disclosure.
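
Beyond attaching a guardrail to a model invocation, the standalone ApplyGuardrail API lets you screen arbitrary text, for example a response produced elsewhere, before it reaches users. The following Python (boto3) sketch assumes a guardrail with content and sensitive-information filters already exists; the ID, version, and sample text are placeholders.

#python
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Guardrail ID and version are placeholders for a guardrail configured with
# content filters and sensitive-information policies.
result = bedrock_runtime.apply_guardrail(
    guardrailIdentifier="your-guardrail-id",
    guardrailVersion="1",
    source="OUTPUT",  # evaluate a model response before returning it to users
    content=[{"text": {"text": "Here is the customer's SSN: 123-45-6789"}}],
)

if result["action"] == "GUARDRAIL_INTERVENED":
    # The guardrail returns a masked or blocked version of the content.
    print(result["outputs"][0]["text"])
else:
    print("content passed the guardrail unchanged")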

Transparency is another best practice prescribed by industry standards, frameworks, and regulations, and is essential for building user trust in making informed decisions. LangFuse, an open source tool, plays a key role in providing transparency by keeping an audit trail of model decisions. This audit trail offers a way to trace model actions, helping organizations demonstrate accountability and adhere to evolving regulations.

Solution overview

To achieve the goals mentioned in the previous section, Data Reply has developed the Red Teaming Playground, a testing environment that combines several open source tools—like Giskard, LangFuse, and AWS FMEval—to assess the vulnerabilities of AI models. This playground allows AI builders to explore scenarios, perform white hat hacking, and evaluate how models react under adversarial conditions. The following diagram illustrates the solution architecture.

Red Teaming Solution Architecture Diagram

This playground is designed to help you responsibly develop and evaluate your generative AI systems, combining a robust multi-layered approach for authentication, user interaction, model management, and evaluation.

At the outset, the Identity Management Layer handles secure authentication, using Amazon Cognito and integration with external identity providers to help secure authorized access. Post-authentication, users access the UI Layer, a gateway to the Red Teaming Playground built on AWS Amplify and React. This UI directs traffic through an Application Load Balancer (ALB), facilitating seamless user interactions and allowing red team members to explore, interact, and stress-test models in real time. For knowledge retrieval, we use Amazon Bedrock Knowledge Bases, which integrates with Amazon Simple Storage Service (Amazon S3) for document storage, and Amazon OpenSearch Serverless for rapid and scalable search capabilities.

Central to this solution is the Foundation Model Management Layer, responsible for defining model policies and managing their deployment, using Amazon Bedrock Guardrails for safety, Amazon SageMaker services for model evaluation, and a vendor model registry comprising a range of foundation model (FM) options, including other vendor models, supporting model flexibility.

After the models are deployed, they go through online and offline evaluations to validate robustness.

Online evaluation uses AWS AppSync for WebSocket streaming to assess models in real time under adversarial conditions. A dedicated red teaming squad (authorized white hat testers) conducts evaluations focused on OWASP Top 10 for LLMs vulnerabilities, such as prompt injection, model theft, and attempts to alter model behavior. Online evaluation provides an interactive environment where human testers can pivot and respond dynamically to model answers, increasing the chances of identifying vulnerabilities or successfully jailbreaking the model.

Offline evaluation conducts a deeper analysis through services like SageMaker Clarify to check for biases and Amazon Comprehend to detect harmful content. The memory database captures interaction data, such as historical user prompts and model responses. LangFuse plays a vital role in maintaining an audit trail of model activities, allowing each model decision to be tracked for observability, accountability, and compliance. The offline evaluation pipeline uses tools like Giskard to detect performance, bias, and security issues in AI systems. It employs LLM-as-a-judge, where a large language model (LLM) evaluates AI responses for correctness, relevance, and adherence to responsible AI guidelines. Models are tested through offline evaluations first; if successful, they progress through online evaluation and ultimately move into the model registry.

The Red Teaming Playground is a dynamic environment designed to simulate scenarios and rigorously test models for vulnerabilities. Through a dedicated UI, the red team interacts with the model using a Q&A AI assistant (for instance, a Streamlit application), enabling real-time stress testing and evaluation. Team members can provide detailed feedback on model performance and log any issues or vulnerabilities encountered. This feedback is systematically integrated into the red teaming process, fostering continuous improvements and enhancing the model’s robustness and security.

Use case example: Mental health triage AI assistant

Imagine deploying a mental health triage AI assistant—an application that demands extra caution around sensitive topics like dosage information, health records, or judgment call questions. By defining a clear use case and establishing quality expectations, you can guide the model on when to answer, deflect, or provide a safe response:

  • Answer – When the bot is confident that the question is within its domain and is able to retrieve a relevant response, it can provide a direct answer. For example, if asked “What are some common symptoms of anxiety?”, the bot can respond: “Common symptoms of anxiety include restlessness, fatigue, difficulty concentrating, and excessive worry. If you’re experiencing these, consider speaking to a healthcare professional.”
  • Deflect – For questions outside the bot’s scope or purpose, the bot should deflect responsibility and guide the user toward appropriate human support. For instance, if asked “Why does life feel meaningless?”, the bot might reply: “It sounds like you’re going through a tough time. Would you like me to connect you to someone who can help?” This makes sure sensitive topics are handled carefully and responsibly.
  • Safe response – When the question requires human validation or advice that the bot can’t provide, it should offer generalized, neutral suggestions to minimize risks. For example, in response to “How can I stop feeling anxious all the time?”, the bot might say: “Some people find practices like meditation, exercise, or journaling helpful, but I recommend consulting a healthcare provider for advice tailored to your needs.”

Red teaming results help refine model outputs by identifying risks and vulnerabilities. For example, consider a medical AI assistant developed by the fictional company AnyComp. By subjecting this assistant to a red teaming exercise, AnyComp can detect potential risks, such as the assistant generating unsolicited medical advice before deployment. With this insight, AnyComp can refine the assistant to either deflect such queries or provide a safe, appropriate response.

This structured approach—answer, deflect, and safe response—provides a comprehensive strategy for managing various types of questions and scenarios effectively. By clearly defining how to handle each category, you can make sure the AI assistant fulfills its purpose while maintaining safety and reliability. Red teaming further validates these strategies by rigorously testing interactions, making sure that the assistant remains useful and trustworthy in different situations.

Conclusion

Implementing responsible AI policies involves continuous improvement. Scaling solutions, like integrating SageMaker for model lifecycle monitoring or AWS CloudFormation for controlled deployments, helps organizations maintain robust AI governance as they grow.

Integrating responsible AI through red teaming is a crucial step in verifying that generative AI systems operate responsibly and securely and remain compliant. Data Reply collaborates with AWS to industrialize these efforts, from fairness checks to security stress tests, helping organizations stay ahead of emerging threats and evolving standards.

Data Reply has extensive expertise in helping customers adopt generative AI, especially with their GenAI Factory framework, which simplifies the transition from proof of concept to production, benefiting industries such as maintenance and customer service FAQs. The GenAI Factory initiative by Data Reply France is designed to overcome integration challenges and scale generative AI applications effectively, using AWS managed services like Amazon Bedrock and OpenSearch Serverless.

To learn more about Data Reply’s work, check out their specialized offerings for red teaming in generative AI and LLMOps.


About the authors

Cassandre Vandeputte is a Solutions Architect for AWS Public Sector based in Brussels. Since her first steps into the digital world, she has been passionate about harnessing technology to drive positive societal change. Beyond her work with intergovernmental organizations, she drives responsible AI practices across AWS EMEA customers.

Davide Gallitelli is a Senior Specialist Solutions Architect for AI/ML in the EMEA region. He is based in Brussels and works closely with customers throughout Benelux. He has been a developer since he was very young, starting to code at the age of 7. He started learning AI/ML at university, and has fallen in love with it since then.

Amine Aitelharraj is a seasoned cloud leader and ex-AWS Senior Consultant with over a decade of experience driving large-scale cloud, data, and AI transformations. Currently a Principal AWS Consultant and AWS Ambassador, he combines deep technical expertise with strategic leadership to deliver scalable, secure, and cost-efficient cloud solutions across sectors. Amine is passionate about GenAI, serverless architectures, and helping organizations unlock business value through modern data platforms.

Read More

InterVision accelerates AI development using AWS LLM League and Amazon SageMaker AI

InterVision accelerates AI development using AWS LLM League and Amazon SageMaker AI

Cities and local governments are continuously seeking ways to enhance their non-emergency services, recognizing that intelligent, scalable contact center solutions play a crucial role in improving citizen experiences. InterVision Systems, LLC (InterVision), an AWS Premier Tier Services Partner and Amazon Connect Service Delivery Partner, has been at the forefront of this transformation, with their contact center solution designed specifically for city and county services called ConnectIV CX for Community Engagement. Though their solution already streamlines municipal service delivery through AI-powered automation and omnichannel engagement, InterVision recognized an opportunity for further enhancement with advanced generative AI capabilities.

InterVision used the AWS LLM League program to accelerate their generative AI development for non-emergency (311) contact centers. As AWS LLM League events began rolling out in North America, this initiative represented a strategic milestone in democratizing machine learning (ML) and enabling partners to build practical generative AI solutions for their customers.

Through this initiative, InterVision’s solutions architects, engineers, and sales teams participated in fine-tuning large language models (LLMs) using Amazon SageMaker AI specifically for municipal service scenarios. InterVision used this experience to enhance their ConnectIV CX solution and demonstrated how AWS Partners can rapidly develop and deploy domain-specific AI solutions.

This post demonstrates how AWS LLM League’s gamified enablement accelerates partners’ practical AI development capabilities, while showcasing how fine-tuning smaller language models can deliver cost-effective, specialized solutions for specific industry needs.

Understanding the AWS LLM League

The AWS LLM League represents an innovative approach to democratizing ML through gamified enablement. The program proves that with the right tools and guidance, almost any role—from solutions architects and developers to sales teams and business analysts—can successfully fine-tune and deploy generative AI models without requiring deep data science expertise. Though initially run as larger multi-organization events such as at AWS re:Invent, the program has evolved to offer focused single-partner engagements that align directly with specific business objectives. This targeted approach allows for customization of the entire experience around real-world use cases that matter most to the participating organization.

The program follows a three-stage format designed to build practical generative AI capabilities. It begins with an immersive hands-on workshop where participants learn the fundamentals of fine-tuning LLMs using Amazon SageMaker JumpStart. SageMaker JumpStart is an ML hub that can help you accelerate your ML journey.

The competition then moves into an intensive model development phase. During this phase, participants iterate through multiple fine-tuning approaches, which can include dataset preparation, data augmentation, and other techniques. Participants submit their models to a dynamic leaderboard, where each submission is evaluated by an AI system that measures the model’s performance against specific benchmarks. This creates a competitive environment that drives rapid experimentation and learning, because participants can observe how their fine-tuned models perform against larger foundation models (FMs), encouraging optimization and innovation.

The program culminates in an interactive finale structured like a live game show as seen in the following figure, where top-performing participants showcase their models’ capabilities through real-time challenges. Model responses are evaluated through a triple-judging system: an expert panel assessing technical merit, an AI benchmark measuring performance metrics, and audience participation providing real-world perspective. This multi-faceted evaluation verifies that models are assessed not just on technical performance, but also on practical applicability.

AWS LLM League finale event where top-performing participants showcase their models' capabilities through real-time challenges

The power of fine-tuning for business solutions

Fine-tuning an LLM is a type of transfer learning, a process that trains a pre-trained model on a new dataset without training from scratch. This process can produce accurate models with smaller datasets and less training time. Although FMs offer impressive general capabilities, fine-tuning smaller models for specific domains often delivers exceptional results at lower cost. For example, a fine-tuned 3B parameter model can outperform larger 70B parameter models in specialized tasks, while requiring significantly less computational resources. A 3B parameter model can run on an ml.g5.4xlarge instance, whereas a 70B parameter model would require the much more powerful and costly ml.g5.48xlarge instance. This approach aligns with recent industry developments, such as DeepSeek’s success in creating more efficient models through knowledge distillation techniques. Distillation is often implemented through a form of fine-tuning, where a smaller student model learns by mimicking the outputs of a larger, more complex teacher model.

In InterVision’s case, the AWS LLM League program was specifically tailored around their ConnectIV CX solution for community engagement services. For this use case, fine-tuning enables precise handling of municipality-specific procedures and responses aligned with local government protocols. Furthermore, the customized model provides reduced operational cost compared to using larger FMs, and faster inference times for better customer experience.

Fine-tuning with SageMaker Studio and SageMaker JumpStart

The solution centers on SageMaker JumpStart in Amazon SageMaker Studio, which is a web-based integrated development environment (IDE) for ML that lets you build, train, debug, deploy, and monitor your ML models. With SageMaker JumpStart in SageMaker Studio, ML practitioners use a low-code/no-code (LCNC) environment to streamline the fine-tuning process and deploy their customized models into production.

Fine-tuning FMs with SageMaker JumpStart involves a few steps in SageMaker Studio (a code sketch of the equivalent programmatic flow follows this list):

  • Select a model – SageMaker JumpStart provides pre-trained, publicly available FMs for a wide range of problem types. You can browse and access FMs from popular model providers for text and image generation models that are fully customizable.
  • Provide a training dataset – You select your training dataset that is saved in Amazon Simple Storage Service (Amazon S3), allowing you to use the virtually limitless storage capacity.
  • Perform fine-tuning – You can customize hyperparameters prior to the fine-tuning job, such as epochs, learning rate, and batch size. After choosing Start, SageMaker JumpStart handles the entire fine-tuning process.
  • Deploy the model – When the fine-tuning job is complete, you can access the model in SageMaker Studio and choose Deploy to start inferencing it. In addition, you can import the customized models to Amazon Bedrock, a managed service that enables you to deploy and scale models for production.
  • Evaluate the model and iterate – You can evaluate a model in SageMaker Studio using Amazon SageMaker Clarify, an LCNC solution to assess the model’s accuracy, explain model predictions, and review other relevant metrics. This allows you to identify areas where the model can be improved and iterate on the process.
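
The same select, train, and deploy flow can also be scripted with the SageMaker Python SDK. The following is a minimal sketch; the JumpStart model ID, S3 training path, and hyperparameter values are placeholders, and hyperparameter names vary by model.

from sagemaker.jumpstart.estimator import JumpStartEstimator

model_id = "meta-textgeneration-llama-3-2-3b"       # placeholder JumpStart model ID
training_data = "s3://my-bucket/llm-league/train/"  # placeholder training dataset location

estimator = JumpStartEstimator(
    model_id=model_id,
    environment={"accept_eula": "true"},            # some models require EULA acceptance
    hyperparameters={"epoch": "3", "learning_rate": "0.00001"},
)
estimator.fit({"training": training_data})          # launches the fine-tuning job

# Deploy the fine-tuned model to a real-time endpoint for evaluation
predictor = estimator.deploy()
print(predictor.predict({"inputs": "How do I report a pothole to the city?"}))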

This streamlined approach significantly reduces the complexity of developing and deploying specialized AI models while maintaining high performance standards and cost-efficiency. For the AWS LLM League model development phase, the workflow is depicted in the following figure.

The AWS LLM League Workflow

During the model development phase, you start with a default base model and initial dataset uploaded into an S3 bucket. You then use SageMaker JumpStart to fine-tune your model. You then submit the customized model to the AWS LLM League leaderboard, where it will be evaluated against a larger pre-trained model. This allows you to benchmark your model’s performance and identify areas for further improvement.

The leaderboard, as shown in the following figure, provides a ranking of how you stack up against your peers. This will motivate you to refine your dataset, adjust the training hyperparameters, and resubmit an updated version of your model. This gamified experience fosters a spirit of friendly competition and continuous learning. The top-ranked models from the leaderboard will ultimately be selected to compete in the AWS LLM League’s finale game show event.

AWS LLM League Leaderboard

Empowering InterVision’s AI capabilities

The AWS LLM League engagement provided InterVision with a practical pathway to enhance their AI capabilities while addressing specific customer needs. InterVision participants could immediately apply their learning to solve real business challenges by aligning the competition with their ConnectIV CX solution use cases.

The program’s intensive format proved highly effective, enabling InterVision to compress their AI development cycle significantly. The team successfully integrated fine-tuned models into their environment, enhancing the intelligence and context-awareness of customer interactions. This hands-on experience with SageMaker JumpStart and model fine-tuning created immediate practical value.

“This experience was a true acceleration point for us. We didn’t just experiment with AI—we compressed months of R&D into real-world impact. Now, our customers aren’t asking ‘what if?’ anymore, they’re asking ‘what’s next?’”

– Brent Lazarenko, Head of Technology and Innovation at InterVision.

Using the knowledge gained through the program, InterVision has been able to enhance their technical discussions with customers about generative AI implementation. Their ability to demonstrate practical applications of fine-tuned models has helped facilitate more detailed conversations about AI adoption in customer service scenarios. Building on this foundation, InterVision developed an internal virtual assistant using Amazon Bedrock, incorporating custom models, multi-agent collaboration, and retrieval architectures connected to their knowledge systems. This implementation serves as a proof of concept for similar customer solutions while demonstrating practical applications of the skills gained through the AWS LLM League.

As InterVision progresses toward AWS Generative AI Competency, these achievements showcase how partners can use AWS services to develop and implement sophisticated AI solutions that address specific business needs.

Conclusion

The AWS LLM League program demonstrates how gamified enablement can accelerate partners’ AI capabilities while driving tangible business outcomes. Through this focused engagement, InterVision not only enhanced their technical capabilities in fine-tuning language models, but also accelerated the development of practical AI solutions for their ConnectIV CX environment. The success of this partner-specific approach highlights the value of combining hands-on learning with real-world business objectives.

As organizations continue to explore generative AI implementations, the ability to efficiently develop and deploy specialized models becomes increasingly critical. The AWS LLM League provides a structured pathway for partners and customers to build these capabilities, whether they’re enhancing existing solutions or developing new AI-powered services.

To learn more about implementing generative AI solutions, visit the AWS Machine Learning Blog for more stories about partners and customers implementing generative AI across various industries.


About the Authors

Vu Le is a Senior Solutions Architect at AWS with more than 20 years of experience. He works closely with AWS Partners to expand their cloud business and increase adoption of AWS services. Vu has deep expertise in storage, data modernization, and building resilient architectures on AWS, and has helped numerous organizations migrate mission-critical systems to the cloud. Vu enjoys photography, his family, and his beloved corgi.

Jaya Padma Mutta is a Manager of Solutions Architects at AWS based out of Seattle. She is focused on helping AWS Partners build their cloud strategy. She enables and mentors a team of technical Solutions Architects aligned to multiple global strategic partners. Prior to joining this team, Jaya spent over 5 years in AWS Premium Support Engineering leading global teams, building processes and tools to improve customer experience. Outside of work, she loves traveling, nature, and is an ardent dog-lover.

Mohan CV is a Principal Solutions Architect at AWS, based in Northern Virginia. He has an extensive background in large-scale enterprise migrations and modernization, with a specialty in data analytics. Mohan is passionate about working with new technologies and enjoys assisting customers in adapting them to meet their business needs.

Rajesh Babu Nuvvula is a Solutions Architect in the Worldwide Public Sector team at AWS. He collaborates with public sector partners and customers to design and scale well-architected solutions. Additionally, he supports their cloud migrations and application modernization initiatives. His areas of expertise include designing distributed enterprise applications and databases.

Brent Lazarenko is the Head of Technology & AI at InterVision Systems, where he’s shaping the future of AI, cloud, and data modernization for over 1,700 clients. A founder, builder, and innovator, he scaled Virtuosity into a global powerhouse before a successful private equity exit. Armed with an MBA, MIT AI & leadership creds, and PMP/PfMP certifications, he thrives at the intersection of tech and business. When he’s not driving digital transformation, he’s pushing the limits of what’s next in AI, Web3, and the cloud.

Read More

Improve Amazon Nova migration performance with data-aware prompt optimization

Improve Amazon Nova migration performance with data-aware prompt optimization

In the era of generative AI, new large language models (LLMs) are continually emerging, each with unique capabilities, architectures, and optimizations. Among these, Amazon Nova foundation models (FMs) deliver frontier intelligence and industry-leading cost-performance, available exclusively on Amazon Bedrock. Since its launch in 2024, generative AI practitioners, including the teams in Amazon, have started transitioning their workloads from existing FMs and adopting Amazon Nova models.

However, when transitioning between foundation models, the prompts created for your original model might not perform as well on Amazon Nova models without prompt engineering and optimization. Amazon Bedrock prompt optimization offers a tool to automatically optimize prompts for your specified target models (in this case, Amazon Nova models), converting your original prompts into Amazon Nova-style prompts. Additionally, a key challenge during the migration to Amazon Nova is making sure that performance after migration is at least as good as before it. Addressing this challenge requires thorough model evaluation, benchmarking, and data-aware optimization: comparing the Amazon Nova model’s performance against the model used before the migration, and optimizing the prompts on Amazon Nova so that performance matches or improves upon the previous workload.

In this post, we present an LLM migration paradigm and architecture, including a continuous process of model evaluation, prompt generation using Amazon Bedrock, and data-aware optimization. The solution evaluates the model performance before migration and iteratively optimizes the Amazon Nova model prompts using user-provided dataset and objective metrics. We demonstrate successful migration to Amazon Nova for three LLM tasks: text summarization, multi-class text classification, and question-answering implemented by Retrieval Augmented Generation (RAG). We also discuss the lessons learned and best practices for you to implement the solution for your real-world use cases.

Migrating your generative AI workloads to Amazon Nova

Migrating the model from your generative AI workload to Amazon Nova requires a structured approach to achieve performance consistency and improvement. It includes evaluating and benchmarking the old and new models, optimizing prompts on the new model, and testing and deploying the new model in production. In this section, we present a four-step workflow and a solution architecture, as shown in the following architecture diagram.

model migration process

The workflow includes the following steps:

  1. Evaluate the source model and collect key performance metrics based on your business use case, such as response accuracy, response format correctness, latency, and cost, to set a performance baseline as the model migration target.
  2. Automatically update the structure, instruction, and language of your prompts to adapt to the Amazon Nova model for accurate, relevant, and faithful outputs. We will discuss this more in the next section.
  3. Evaluate the optimized prompts on the migrated Amazon Nova model to meet the performance target defined in Step 1. You can conduct the optimization in Step 2 as an iterative process until the optimized prompts meet your business criteria.
  4. Conduct A/B testing to validate the Amazon Nova model performance in your testing and production environment. When you’re satisfied, you can deploy the Amazon Nova model, settings, and prompts in production. 

This four-step workflow needs to run continuously, to adapt to variations in both the model and the data, driven by the changes in business use cases. The continuous adaptation provides ongoing optimization and helps maximize overall model performance.

Data-aware prompt optimization on Amazon Nova

In this section, we present a comprehensive optimization methodology that takes two steps. The first step is to use Amazon Bedrock prompt optimization to refine your prompt structure, and the second is to use an innovative data-aware prompt optimization approach to further optimize the prompt and improve the Amazon Nova model performance.

Amazon Bedrock prompt optimization

Amazon Bedrock provides a prompt optimization feature that rewrites prompts to improve performance for your use cases. Prompt optimization streamlines the way that AWS developers interact with FMs on Amazon Bedrock, automatically adapts the prompts to the selected models, and generates prompts tuned for better performance.
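
The feature is available on the Amazon Bedrock console and through the bedrock-agent-runtime API. The following is a minimal sketch of the API call, assuming an Amazon Nova Lite target model; the event-stream parsing shown is simplified, and you should consult the OptimizePrompt API reference for the full response structure.

import boto3

client = boto3.client("bedrock-agent-runtime")

response = client.optimize_prompt(
    input={"textPrompt": {"text": "Summarize the document in <doc></doc> tags briefly."}},
    targetModelId="amazon.nova-lite-v1:0",
)

# The optimized prompt is returned as an event stream (simplified parsing)
for event in response["optimizedPrompt"]:
    if "optimizedPromptEvent" in event:
        print(event["optimizedPromptEvent"])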

As the first step, you can use prompt optimization to adapt your prompt to Amazon Nova. By analyzing the prompt you provide, the feature interprets the task, system prompt, and instruction within the prompt, and automatically crafts the prompt with an Amazon Nova-specific format and appropriate words, phrases, and sentences. The following example shows how prompt optimization converts a typical prompt for a summarization task on Anthropic’s Claude Haiku into a well-structured prompt for an Amazon Nova model, with sections that begin with special markdown tags such as ### Task, ### Summarization Instructions, and ### Document to Summarize.

Anthropic’s Claude 3 Haiku (original prompt):

Human:
Act like you are an intelligent AI assistant. You are required to provide a summarization based on given document. Please use below instructions when generating the response.
The document is provided in <doc></doc> XML tags.
Please be brief and concise in your answer. Do not add any information that is not mentioned in the document.
Do not provide any preamble and directly start with the summarization.
Do not make up the answer, If you don’t know the answer, just say that I don’t know.
<doc>{document}</doc>
Assistant:

Amazon Nova Lite (with Amazon Bedrock prompt optimization):

### Task
Your task is to summarize the given document enclosed in <doc></doc> tags in a brief and concise manner. Follow these guidelines:

### Summarization Instructions
– Read the document carefully to understand its main points and key information.
– Identify the core ideas, arguments, and supporting details presented in the document.
– Synthesize the essential information into a clear and succinct summary.
– Use your own words to paraphrase the key points – do not copy verbatim from the original text.
– Omit any extraneous or redundant information not central to the main ideas.
– Do not introduce new information or make up content not present in the original document.
– If you cannot summarize the document due to lack of understanding, simply respond “I don’t know.”

### Document to Summarize
<doc>{document}</doc>

Please provide your concise summary immediately without any preamble.

We applied the preceding prompts to the Anthropic Claude 3 Haiku and Amazon Nova Lite models, respectively, using the public xsum dataset. To evaluate the model performance, because the summarization task doesn’t have a predefined ground truth, we designed an LLM judge as shown in the following prompt to validate the summarization quality:

You are an AI assistant, your task is to compare the following LLM-generated summary with the original document, rate how well it captures the key points and conveys the most critical information, on a scale of 1-5.
    
    The score should be based on the following performance criteria:
    - Consistency: characterizes the summary’s factual and logical correctness. It should stay true to the original text, not introduce additional information, and use the same terminology.
    - Relevance: captures whether the summary is limited to the most pertinent information in the original text. A relevant summary focuses on the essential facts and key messages, omitting unnecessary details or trivial information.
    - Fluency: describes the readability of the summary. A fluent summary is well-written and uses proper syntax, vocabulary, and grammar.
    - Coherence: measures the logical flow and connectivity of ideas. A coherent summary presents the information in a structured, logical, and easily understandable manner.
    
    Score 5 means the LLM-generated summary is the best summary fully aligned with the original document,
    Score 1 means the LLM-generated summary is the worst summary completely irrelevant to the original document.  

    Please also provide an explanation on why you provide the score. Keep the explanation as concise as possible.

    The LLM-generated summary is provided within the <summary> XML tag,
    The original document is provided within the <document> XML tag,

    In your response, present the score within the <score> XML tag, and the explanation within the <thinking> XML tag.

    DO NOT nest <score> and <thinking> element.
    DO NOT put any extra attribute in the <score> and <thinking> tag.
    
    <document>
    {document}
    </document>

    LLM generated summary:
    <summary>
    {summary}
    </summary>

The experiment, using 80 data samples, shows that the accuracy is improved on the Amazon Nova Lite model from 77.75% to 83.25% using prompt optimization.

Data-aware optimization

Although Amazon Bedrock prompt optimization supports the basic needs of prompt engineering, other prompt optimization techniques are available to maximize LLM performance, such as Multi-Aspect Critique, Self-Reflection, Gradient Descent and Beam Search, and Meta Prompting. Specifically, we observed that users need to tune their prompts against optimization objective metrics they define, such as ROUGE, BERT-F1, or an LLM judge score, using a dataset they provide. To meet these needs, we designed a data-aware optimization architecture as shown in the following diagram.

data-aware-optimization

The data-aware optimization takes two inputs. The first input is the user-defined optimization objective metrics; for the summarization task discussed in the previous section, you can use the BERT-F1 score or create your own LLM judge. The second input is a training dataset (DevSet) provided by the user to validate the response quality, for example, a summarization data sample with the following format.

Source Document | Summarization
Officers searched properties in the Waterfront Park and Colonsay View areas of the city on Wednesday. Detectives said three firearms, ammunition and a five-figure sum of money were recovered. A 26-year-old man who was arrested and charged appeared at Edinburgh Sheriff Court on Thursday. | A man has appeared in court after firearms, ammunition and cash were seized by police in Edinburgh.
<another document ...> | <another summarization ...>

The data-aware optimization uses these two inputs to improve the prompt for better Amazon Nova response quality. In this work, we use the DSPy (Declarative Self-improving Python) optimizer for the data-aware optimization. DSPy is a widely used framework for programming language models. It offers algorithms for optimizing the prompts for multiple LLM tasks, from simple classifiers and summarizers to sophisticated RAG pipelines. The dspy.MIPROv2 optimizer intelligently explores better natural language instructions for every prompt using the DevSet, to maximize the metrics you define.
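
A minimal sketch of the setup is shown below, assuming DSPy’s LiteLLM-backed LM interface for Amazon Bedrock and a user-defined LLM judge; the model identifier, documents, field names, and the llm_judge_score helper are placeholders to be replaced with your own.

import dspy

# Point DSPy at the migrated Amazon Nova model on Amazon Bedrock
lm = dspy.LM("bedrock/amazon.nova-lite-v1:0", max_tokens=1000, temperature=0.0)
dspy.configure(lm=lm)

# DevSet: user-provided documents (inputs) used to score candidate prompts
trainset = [
    dspy.Example(document=doc).with_inputs("document")
    for doc in ["<document text 1>", "<document text 2>"]  # placeholder documents
]

# Objective metric: here a user-defined LLM judge scoring the generated summary
def metric(example, prediction, trace=None):
    return llm_judge_score(example.document, prediction.answer)  # placeholder judge function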

We applied the MIPROv2 optimizer on top of the results optimized by Amazon Bedrock in the previous section for better Amazon Nova performance. In the optimizer, we specify the number of instruction candidates in the generation space, use Bayesian optimization to effectively search over the space, and run it iteratively to generate instructions and few-shot examples for the prompt in each step:

from dspy.teleprompt import MIPROv2

# Initialize optimizer
teleprompter = MIPROv2(
    metric=metric,
    num_candidates=5,
    auto="light", 
    verbose=False,
)

With the setting of num_candidates=5, the optimizer generates five candidate instructions:

0: Given the fields `question`, produce the fields `answer`.

1: Given a complex question that requires a detailed reasoning process, produce a structured response that includes a step-by-step reasoning and a final answer. Ensure the reasoning clearly outlines each logical step taken to arrive at the answer, maintaining clarity and neutrality throughout.

2: Given the fields `question` and `document`, produce the fields `answer`. Read the document carefully to understand its main points and key information. Identify the core ideas, arguments, and supporting details presented in the document. Synthesize the essential information into a clear and succinct summary. Use your own words to paraphrase the key points without copying verbatim from the original text. Omit any extraneous or redundant information not central to the main ideas. Do not introduce new information or make up content not present in the original document. If you cannot summarize the document due to lack of understanding, simply respond "I don't know.

3: In a high-stakes scenario where you must summarize critical documents for an international legal case, use the Chain of Thought approach to process the question. Carefully read and understand the document enclosed in <doc></doc> tags, identify the core ideas and key information, and synthesize this into a clear and concise summary. Ensure that the summary is neutral, precise, and omits any extraneous details. If the document is too complex or unclear, respond with "I don't know.

4: Given the fields `question` and `document`, produce the fields `answer`. The `document` field contains the text to be summarized. The `answer` field should include a concise summary of the document, following the guidelines provided. Ensure the summary is clear, accurate, and captures the core ideas without introducing new information.

We set other parameters for the optimization iteration, including the number of trials, the number of few-shot examples, and the batch size for the optimization process:

# Optimize program
optimized_program = teleprompter.compile(
        program.deepcopy(),
        trainset=trainset,
        num_trials=7,
        minibatch_size=20,
        minibatch_full_eval_steps=7,
        max_bootstrapped_demos=2,
        max_labeled_demos=2,
        requires_permission_to_run=False,
)

When the optimization starts, MIPROv2 uses each instruction candidate along with a mini-batch of the dataset we provided to run inference on the LLM and calculate the metrics we defined. After the loop is complete, the optimizer evaluates the best instruction using the full dataset and calculates the full evaluation score. Based on these iterations, the optimizer provides the improved instruction for the prompt:

Given the fields `question` and `document`, produce the fields `answer`.
The `document` field contains the text to be summarized.
The `answer` field should include a concise summary of the document, following the guidelines provided.
Ensure the summary is clear, accurate, and captures the core ideas without introducing new information.

Applying the optimized prompt, the summarization accuracy generated by the LLM judge on Amazon Nova Lite model is further improved from 83.25% to 87.75%.

We also applied the optimization process to other LLM tasks, including a multi-class text classification task and a question-answering task using RAG. In all the tasks, our approach optimized the migrated Amazon Nova models to outperform the Anthropic Claude Haiku and Meta Llama models used before migration. The following table and chart illustrate the optimization results.

Task | DevSet | Evaluation | Before Migration | After Migration (Amazon Bedrock Prompt Optimization) | After Migration (DSPy with Amazon Bedrock Prompt Optimization)
Summarization (Anthropic Claude 3 Haiku to Amazon Nova Lite) | 80 samples | LLM Judge | 77.75 | 83.25 | 87.75
Classification (Meta Llama 3.2 3B to Amazon Nova Micro) | 80 samples | Accuracy | 81.25 | 81.25 | 87.5
QA-RAG (Anthropic Claude 3 Haiku to Amazon Nova Lite) | 50 samples | Semantic Similarity | 52.71 | 51.6 | 57.15

Migration results

For the text classification use case, we optimized the Amazon Nova Micro model using 80 samples, using the accuracy metrics to evaluate the optimization performance in each step. After seven iterations, the optimized prompt provides 87.5% accuracy, improved from the accuracy of 81.25% running on the Meta Llama 3.2 3B model.

For the question-answering use case, we used 50 samples to optimize the prompt for an Amazon Nova Lite model in the RAG pipeline, and evaluated the performance using a semantic similarity score. The score compares the cosine distance between the model’s answer and the ground truth answer. Compared with the test data running on Anthropic’s Claude 3 Haiku, the score improved from 52.71 to 57.15 after migrating to the Amazon Nova Lite model and applying prompt optimization.
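
As a minimal sketch of how such a semantic similarity score can be computed, the following uses Amazon Titan Text Embeddings as an assumed embedding model (the exact metric used in the experiment may differ) and cosine similarity between the answer and the ground truth.

import json
import boto3
import numpy as np

bedrock_runtime = boto3.client("bedrock-runtime")

def embed(text: str) -> np.ndarray:
    response = bedrock_runtime.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",  # assumed embedding model
        body=json.dumps({"inputText": text}),
    )
    return np.array(json.loads(response["body"].read())["embedding"])

def semantic_similarity(answer: str, ground_truth: str) -> float:
    a, b = embed(answer), embed(ground_truth)
    # Cosine similarity; higher is better (1 - cosine distance)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(semantic_similarity("Paris is the capital of France.", "The capital of France is Paris."))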

You can find more details of these examples in the GitHub repository.

Lessons learned and best practices

Through the solution design, we have identified best practices that can help you properly configure your prompt optimization to maximize the metrics you specify for your use case:

  • Your dataset for the optimizer should be high quality, relevant, and well balanced, covering the data patterns, edge cases, and nuances of your use case to minimize bias.
  • The metrics you define as the optimization target should be use case specific. For example, if your dataset has ground truth, you can use statistical and programmatic machine learning (ML) metrics such as accuracy and semantic similarity. If your dataset doesn’t include ground truth, a well-designed and human-aligned LLM judge can provide a reliable evaluation score for the optimizer.
  • The optimizer runs with a number of prompt candidates (parameter dspy.num_candidates) and uses the evaluation metric you defined to select the optimal prompt as the output. Avoid setting too few candidates, which might miss opportunities for improvement. In the previous summarization example, we set five prompt candidates for optimizing over 80 training samples and received good optimization performance.
  • The prompt candidates include a combination of prompt instructions and few-shot examples. You can specify the number of examples (parameter dspy.max_labeled_demos for examples from labeled samples, and parameter dspy.max_bootstrapped_demos for examples from unlabeled samples); we recommend setting the example number to no less than 2.
  • The optimization runs in iterations (parameter dspy.num_trials); set enough iterations to refine prompts across different scenarios and performance metrics, gradually enhancing clarity, relevance, and adaptability. If you optimize both the instructions and the few-shot examples in the prompt, we recommend setting the iteration number to no less than 2, preferably between 5–10.

In your use case, if your prompt structure is complex, with chain-of-thought or tree-of-thought prompting, long instructions in the system prompt, and multiple inputs in the user prompt, you can use a task-specific class to abstract the DSPy optimizer. The class helps encapsulate the optimization logic, standardize the prompt structure and optimization parameters, and allow straightforward implementation of different optimization strategies. The following is an example of the class created for the text classification task:

class Classification(dspy.Signature):
    """You are a product search expert evaluating the quality of specific search results and deciding whether they will lead to a buying decision or not. You will be given a search query and the resulting product information and will classify the result against a provided classification class. Follow the given instructions to classify the search query using the classification scheme.

    Class Categories:

    Category Label: Positive Search
    The class is chosen when the search query and the product are a full match and hence the customer experience is positive.

    Category Label: Negative Search
    The class is chosen when the search query and the product are fully misaligned, meaning you searched for something but the output is completely different.

    Category Label: Moderate Search
    The class is chosen when the search query and the product may not be fully the same, but still complement each other and may be of a similar category.

    Instructions:
    Begin by creating a scratchpad where you can jot down your initial thoughts, observations, and any pertinent information related to the search query and product. This section is for your personal use and doesn't require a formal structure.
    Proceed to examine and dissect the search query. Pinpoint essential terms, brand names, model numbers, and specifications. Assess the user's probable objective based on the query.
    Subsequently, juxtapose the query with the product. Seek out precise correspondences in brand, model, and specifications. Recognize commonalities in functionality, purpose, or features. Reflect on how the product connects to or augments the item being queried.
    Afterwards, employ a methodical classification approach, contemplating each step carefully.
    Conclude by verifying the classification. Scrutinize the selected category in relation to its description to confirm its precision. Take into account any exceptional circumstances or possible uncertainties.
    """

    search_query = dspy.InputField(desc="Search Query consisting of keywords")
    result_product_title = dspy.InputField(desc="This is part of Product Description and indicates the Title of the product")
    result_product_description = dspy.InputField(desc="This is part of Product Description and indicates the description of the product")
    …
    thinking = dspy.OutputField(desc="justification in the scratchpad, explaining the reasoning behind the classification choice and highlighting key factors that led to the decision")
    answer = dspy.OutputField(desc="final classification label for the product result: positive_search/negative_search/moderate_search")

Conclusion

In this post, we introduced the workflow and architecture for migrating your current generative AI workload to Amazon Nova models, and presented a comprehensive prompt optimization approach using Amazon Bedrock prompt optimization and a data-aware prompt optimization methodology with DSPy. The results on three LLM tasks demonstrated the optimized performance of Amazon Nova in its intelligence classes: model performance improved with Amazon Bedrock prompt optimization after migration, and improved further with the data-aware prompt optimization methodology presented in this post.

The Python library and code examples are publicly available on GitHub. You can use this LLM migration method and the prompt optimization solution to migrate your workloads to Amazon Nova, or apply them to other model migration processes.


About the Authors

Yunfei Bai is a Principal Solutions Architect at AWS. With a background in AI/ML, data science, and analytics, Yunfei helps customers adopt AWS services to deliver business results. He designs AI/ML and data analytics solutions that overcome complex technical challenges and drive strategic objectives. Yunfei has a PhD in Electronic and Electrical Engineering. Outside of work, Yunfei enjoys reading and music.

Anupam Dewan is a Senior Solutions Architect with a passion for generative AI and its applications in real life. He and his team enable Amazon builders who build customer-facing applications using generative AI. He lives in the Seattle area, and outside of work, he loves to go hiking and enjoy nature.

Shuai Wang is a Senior Applied Scientist and Manager at Amazon Bedrock, specializing in natural language processing, machine learning, large language modeling, and other related AI areas. Outside work, he enjoys sports, particularly basketball, and family activities.

Kashif Imran is a seasoned engineering and product leader with deep expertise in AI/ML, cloud architecture, and large-scale distributed systems. Currently a Senior Manager at AWS, Kashif leads teams driving innovation in generative AI and Cloud, partnering with strategic cloud customers to transform their businesses. Kashif holds dual master’s degrees in Computer Science and Telecommunications, and specializes in translating complex technical capabilities into measurable business value for enterprises.

Read More

Customize Amazon Nova models to improve tool usage

Customize Amazon Nova models to improve tool usage

Modern large language models (LLMs) excel in language processing but are limited by their static training data. However, as industries require more adaptive, decision-making AI, integrating tools and external APIs has become essential. This has led to the evolution and rapid rise of agentic workflows, where AI systems autonomously plan, execute, and refine tasks. Accurate tool use is foundational for enhancing the decision-making and operational efficiency of these autonomous agents and building successful and complex agentic workflows.

In this post, we dissect the technical mechanisms of tool calling using Amazon Nova models through Amazon Bedrock, alongside methods for model customization to refine tool calling precision.

Expanding LLM capabilities with tool use

LLMs excel at natural language tasks but become significantly more powerful with tool integration, such as APIs and computational frameworks. Tools enable LLMs to access real-time data, perform domain-specific computations, and retrieve precise information, enhancing their reliability and versatility. For example, integrating a weather API allows for accurate, real-time forecasts, or a Wikipedia API provides up-to-date information for complex queries. In scientific contexts, tools like calculators or symbolic engines address numerical inaccuracies in LLMs. These integrations transform LLMs into robust, domain-aware systems capable of handling dynamic, specialized tasks with real-world utility.

Amazon Nova models and Amazon Bedrock

Amazon Nova models, unveiled at AWS re:Invent in December 2024, are optimized to deliver exceptional price-performance value, offering state-of-the-art performance on key text-understanding benchmarks at low cost. The series comprises three variants: Micro (text-only, ultra-efficient for edge use), Lite (multimodal, balanced for versatility), and Pro (multimodal, high-performance for complex tasks).

Amazon Nova models can be used for a variety of tasks, from content generation to developing agentic workflows. As such, these models can interface with external tools or services and use them through tool calling. This can be achieved through the Amazon Bedrock console (see Getting started with Amazon Nova in the Amazon Bedrock console) and APIs such as Converse and Invoke.

In addition to using the pre-trained models, developers have the option to fine-tune these models with multimodal data (Pro and Lite) or text data (Pro, Lite, and Micro), providing the flexibility to achieve desired accuracy, latency, and cost. Developers can also run self-service custom fine-tuning and distillation of larger models to smaller ones using the Amazon Bedrock console and APIs.

Solution overview

The following diagram illustrates the solution architecture.

The solution consists of data preparation for tool use, fine-tuning with the prepared dataset, hosting the fine-tuned model, and evaluation of the fine-tuned model

For this post, we first prepared a custom dataset for tool usage. We used the test set to evaluate Amazon Nova models through Amazon Bedrock using the Converse and Invoke APIs. We then fine-tuned Amazon Nova Micro and Amazon Nova Lite models through Amazon Bedrock with our fine-tuning dataset. After the fine-tuning process was complete, we evaluated these customized models through provisioned throughput. In the following sections, we go through these steps in more detail.

Tools

Tool usage in LLMs involves two critical operations: tool selection and argument extraction or generation. For instance, consider a tool designed to retrieve weather information for a specific location. When presented with a query such as “What’s the weather in Alexandria, VA?”, the LLM evaluates its repertoire of tools to determine whether an appropriate tool is available. Upon identifying a suitable tool, the model selects it and extracts the required arguments—here, “Alexandria” and “VA” as structured data types (for example, strings)—to construct the tool call.
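
For that query, a correct invocation would look something like the following hypothetical example; the argument names come from the tool config described next, and the values shown are illustrative only.

# Hypothetical tool call for "What's the weather in Alexandria, VA?"
expected_call = {
    "name": "weather_api_call",
    "parameters": {"city": "Alexandria", "country": "US"},  # arguments extracted from the query
}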

Each tool is rigorously defined with a formal specification that outlines its intended functionality, the mandatory or optional arguments, and the associated data types. Such precise definitions, known as tool config, make sure that tool calls are executed correctly and that argument parsing aligns with the tool’s operational requirements. Following this requirement, the dataset used for this example defines eight tools with their arguments and configures them in a structured JSON format. We define the following eight tools (we use seven of them for fine-tuning and hold out the weather_api_call tool during testing in order to evaluate the accuracy on unseen tool use):

  • weather_api_call – Custom tool for getting weather information
  • stat_pull – Custom tool for identifying stats
  • text_to_sql – Custom text-to-SQL tool
  • terminal – Tool for executing scripts in a terminal
  • wikipedia – Wikipedia API tool to search through Wikipedia pages
  • duckduckgo_results_json – Internet search tool that executes a DuckDuckGo search
  • youtube_search – YouTube API search tool that searches video listings
  • pubmed_search – PubMed search tool that searches PubMed abstracts

The following code is an example of what a tool configuration for terminal might look like:

{'toolSpec': {'name': 'terminal',
              'description': 'Run shell commands on this MacOS machine',
              'inputSchema': {'json': {'type': 'object',
                                       'properties': {'commands': {'type': 'string',
                                                                   'description': 'List of shell commands to run. Deserialized using json.loads'}},
                                       'required': ['commands']}}}}

Dataset

The dataset is a synthetic tool calling dataset created with assistance from a foundation model (FM) from Amazon Bedrock and manually validated and adjusted. This dataset was created for our set of eight tools as discussed in the previous section, with the goal of creating a diverse set of questions and tool invocations that allow another model to learn from these examples and generalize to unseen tool invocations.

Each entry in the dataset is structured as a JSON object with key-value pairs that define the question (a natural language user query for the model), the ground truth tool required to answer the user query, its arguments (dictionary containing the parameters required to execute the tool), and additional constraints like order_matters: boolean, indicating if argument order is critical, and arg_pattern: optional, a regular expression (regex) for argument validation or formatting. Later in this post, we use these ground truth labels to supervise the training of pre-trained Amazon Nova models, adapting them for tool use. This process, known as supervised fine-tuning, will be explored in detail in the following sections.

The training set contains 560 questions and the test set contains 120 questions (15 questions per tool category). The following are some examples from the dataset:

{
    "question": "Explain the process of photosynthesis",
    "answer": "wikipedia",
    "args": {'query': 'process of photosynthesis'},
    "order_matters": False,
    "arg_pattern": None
}
{
    "question": "Display system date and time",
    "answer": "terminal",
    "args": {'commands': ['date']},
    "order_matters": True,
    "arg_pattern": None
}
{
    "question": "Upgrade the requests library using pip",
    "answer": "terminal",
    "args": {'commands': ['pip install --upgrade requests']},
    "order_matters": True,
    "arg_pattern": [r'pip(3?) install --upgrade requests']
}

Prepare the dataset for Amazon Nova

To use this dataset with Amazon Nova models, we need to additionally format the data based on a particular chat template. Native tool calling has a translation layer that formats the inputs into the appropriate format before passing them to the model. Here, we employ a DIY tool use approach with a custom prompt template. Specifically, we need to add the system prompt, the user message embedded with the tool config, and the ground truth labels as the assistant message. The following is a training example formatted for Amazon Nova. Due to space constraints, we only show the toolspec for one tool.

{"system": [{"text": "You are a bot that can handle different requests
with tools."}],
"messages": [{"role": "user",
"content": [{"text": "Given the following functions within <tools>,
please respond with a JSON for a function call with its proper arguments
that best answers the given prompt.

Respond in the format
{"name": function name,"parameters": dictionary of argument name and
its value}.
Do not use variables. Donot give any explanations.

ONLY output the resulting
JSON structure and nothing else.

Donot use the word 'json' anywhere in the
result.
<tools>
    {"tools": [{"toolSpec":{"name":"youtube_search",
    "description": " search for youtube videos associated with a person.
    the input to this tool should be a comma separated list, the first part
    contains a person name and the second a number that is the maximum number
    of video results to return aka num_results. the second part is optional", 
    "inputSchema":
    {"json":{"type":"object","properties": {"query":
    {"type": "string",
     "description": "youtube search query to look up"}},
    "required": ["query"]}}}},]}
</tools>
Generate answer for the following question.
<question>
List any products that have received consistently negative reviews
</question>"}]},
{"role": "assistant", "content": [{"text": "{'name':text_to_sql,'parameters':
{'table': 'product_reviews','condition':
'GROUP BY product_id HAVING AVG(rating) < 2'}}"}]}],
"schemaVersion": "tooluse-dataset-2024"}

Upload dataset to Amazon S3

This step is needed so that Amazon Bedrock can access the training data during fine-tuning. You can upload your dataset either through the Amazon Simple Storage Service (Amazon S3) console or through code.
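For the code path, a minimal sketch using boto3 could look like the following; the bucket name and key prefix are placeholders.

import boto3

s3 = boto3.client("s3")

# Placeholder bucket and key names; replace with your own.
s3.upload_file(
    Filename="train.jsonl",
    Bucket="my-tool-use-finetuning-bucket",
    Key="datasets/train.jsonl",
)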

Tool calling with base models through the Amazon Bedrock API

Now that we have created the tool use dataset and formatted it as required, let’s use it to test out the Amazon Nova models. As mentioned previously, we can use both the Converse and Invoke APIs for tool use in Amazon Bedrock. The Converse API enables dynamic, context-aware conversations, allowing models to engage in multi-turn dialogues, and the Invoke API allows the user to call and interact with the underlying models within Amazon Bedrock.

To use the Converse API, you simply send the messages, system prompt (if any), and the tool config directly in the Converse API. See the following example code:

import boto3

# Create an Amazon Bedrock Runtime client
bedrock_runtime = boto3.client("bedrock-runtime")

# Send the messages, system prompt, and tool configuration to the model
response = bedrock_runtime.converse(
    modelId=model_id,
    messages=messages,
    system=system_prompt,
    toolConfig=tool_config,
)

To parse the tool and arguments from the LLM response, you can use the following example code:

# Iterate over the content blocks in the response and extract the
# predicted tool name and its input arguments
for content_block in response['output']['message']['content']:
    if "toolUse" in content_block:
        out_tool_name = content_block['toolUse']['name']
        out_tool_inputs_dict = content_block['toolUse']['input']
        print(out_tool_name, out_tool_inputs_dict.keys())

For the question: “Hey, what's the temperature in Paris right now?”, you get the following output:

weather_api_call dict_keys(['country', 'city'])

To execute tool use through the Invoke API, first you need to prepare the request body with the user question as well as the tool config that was prepared before. The following code snippet shows how to convert the tool config JSON to string format, which can be used in the message body:

import json

# Convert the tool configuration to a JSON string and embed it in the prompt
formatted_tool_config = json.dumps(tool_config, indent=2)
prompt = prompt_template.replace("{question}", question)
prompt = prompt.replace("{tool_config}", formatted_tool_config)

# Message template
messages = [{"role": "user", "content": [{"text": prompt}]}]

# Prepare the request body
model_kwargs = {
    "system": system_prompt,
    "messages": messages,
    "inferenceConfig": inferenceConfig,
}
body = json.dumps(model_kwargs)

# Invoke the model
response = bedrock_runtime.invoke_model(
    body=body,
    modelId=model_id,
    accept=accept,
    contentType=contentType,
)
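
After the call returns, the response body can be read and parsed to recover the generated tool call. The following is a small sketch; the output key layout shown assumes the message-style response returned by Amazon Nova models.

import json

# Read and decode the streaming response body
response_body = json.loads(response["body"].read())

# Assumed message-style output structure for Amazon Nova models;
# the generated text should contain the JSON tool call.
output_text = response_body["output"]["message"]["content"][0]["text"]
print(output_text)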

Using either of the two APIs, you can test and benchmark the base Amazon Nova models with the tool use dataset. In the next sections, we show how you can customize these base models specifically for the tool use domain.

Supervised fine-tuning using the Amazon Bedrock console

Amazon Bedrock offers three different customization techniques: supervised fine-tuning, model distillation, and continued pre-training. At the time of writing, the first two methods are available for customizing Amazon Nova models. Supervised fine-tuning is a popular method in transfer learning, where a pre-trained model is adapted to a specific task or domain by training it further on a smaller, task-specific dataset. The process uses the representations learned during pre-training on large datasets to improve performance in the new domain. During fine-tuning, the model’s parameters (either all or selected layers) are updated using backpropagation to minimize the loss.

In this post, we use the labeled datasets that we created and formatted previously to run supervised fine-tuning to adapt Amazon Nova models for the tool use domain.

Create a fine-tuning job

Complete the following steps to create a fine-tuning job:

  1. Open the Amazon Bedrock console.
  2. Choose us-east-1 as the AWS Region.
  3. Under Foundation models in the navigation pane, choose Custom models.
  4. Choose Create Fine-tuning job under Customization methods. 

At the time of writing, Amazon Nova model fine-tuning is exclusively available in the us-east-1 Region.

create finetuning job from console

  5. Choose Select model and choose Amazon as the model provider.
  6. Choose your model (for this post, Amazon Nova Micro) and choose Apply.

choose model for finetuning

  7. For Fine-tuned model name, enter a unique name.
  8. For Job name, enter a name for the fine-tuning job.
  9. In the Input data section, enter the following details:
    • For S3 location, enter the source S3 bucket containing the training data.
    • For Validation dataset location, optionally enter the S3 bucket containing a validation dataset.

Choosing data location in console for finetuning

  10. In the Hyperparameters section, you can customize the following hyperparameters:
    • For Epochs, enter a value between 1–5.
    • For Batch size, the value is fixed at 1.
    • For Learning rate multiplier, enter a value between 0.000001–0.0001.
    • For Learning rate warmup steps, enter a value between 0–100.

We recommend starting with the default parameter values and then changing the settings iteratively. It’s a good practice to change only one or a couple of parameters at a time, in order to isolate the parameter effects. Remember, hyperparameter tuning is model and use case specific.

  11. In the Output data section, enter the target S3 bucket for model outputs and training metrics.
  12. Choose Create fine-tuning job.

Run the fine-tuning job

After you start the fine-tuning job, it appears under Jobs with a status of Training. When it finishes, the status changes to Complete.

tracking finetuning job progress

You can now go to the training job and optionally access the training-related artifacts that are saved in the output folder.

Check training artifacts

You can find both training and validation artifacts here (we highly recommend using a validation set).

training and validation artifacts

You can use the training and validation artifacts to assess your fine-tuning job through loss curves (as shown in the following figure), which track training loss (orange) and validation loss (blue) over time. A steady decline in both indicates effective learning and good generalization. A small gap between them suggests minimal overfitting, whereas a rising validation loss with decreasing training loss signals overfitting. If both losses remain high, it indicates underfitting. Monitoring these curves helps you quickly diagnose model performance and adjust training strategies for optimal results.

Training and validation loss curves give insight into how the training progresses
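
If you want to re-create these plots from the exported metrics, a short sketch like the following can help; the file and column names here are assumptions and may differ from the artifacts in your output bucket.

import pandas as pd
import matplotlib.pyplot as plt

# Assumed file and column names; adjust to match the artifacts downloaded
# from the output S3 location of your fine-tuning job.
train_metrics = pd.read_csv("step_wise_training_metrics.csv")
val_metrics = pd.read_csv("validation_metrics.csv")

plt.plot(train_metrics["step_number"], train_metrics["training_loss"], label="training loss")
plt.plot(val_metrics["step_number"], val_metrics["validation_loss"], label="validation loss")
plt.xlabel("Step")
plt.ylabel("Loss")
plt.legend()
plt.show()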

Host the fine-tuned model and run inference

Now that you have completed the fine-tuning, you can host the model and use it for inference. Follow these steps:

  1. On the Amazon Bedrock console, under Foundation models in the navigation pane, choose Custom models.
  2. On the Models tab, choose the model you fine-tuned.

starting provisioned throughput through the console to host the fine-tuned model

  3. Choose Purchase provisioned throughput.

start provisioned throughput

  4. Specify a commitment term (no commitment, 1 month, 6 months) and review the associated cost for hosting the fine-tuned models.

After the customized model is hosted through provisioned throughput, a model ID will be assigned, which will be used for inference. For inference with models hosted with provisioned throughput, we have to use the Invoke API in the same way we described previously in this post—simply replace the model ID with the customized model ID.
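
As a quick sketch, inference against the hosted model is identical to the earlier Invoke API example, with the provisioned model ARN used as the model ID (the ARN below is a placeholder).

# Placeholder ARN; copy the actual value from the provisioned throughput details page.
provisioned_model_id = "arn:aws:bedrock:us-east-1:111122223333:provisioned-model/abc123example"

response = bedrock_runtime.invoke_model(
    body=body,                    # same request body as in the earlier example
    modelId=provisioned_model_id, # fine-tuned model hosted with provisioned throughput
    accept=accept,
    contentType=contentType,
)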

The aforementioned fine-tuning and inference steps can also be done programmatically. Refer to the following GitHub repo for more detail.
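
For orientation, a minimal sketch of starting an equivalent fine-tuning job with boto3 might look like the following; the bucket names, role ARN, base model identifier, and hyperparameter keys are placeholders, so check the Amazon Bedrock User Guide for the exact values supported by your chosen Amazon Nova model.

import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Placeholder names, ARNs, model identifier, and hyperparameter keys.
bedrock.create_model_customization_job(
    jobName="nova-micro-tool-use-ft",
    customModelName="nova-micro-tool-use",
    roleArn="arn:aws:iam::111122223333:role/BedrockFineTuningRole",
    baseModelIdentifier="amazon.nova-micro-v1:0",
    customizationType="FINE_TUNING",
    trainingDataConfig={"s3Uri": "s3://my-tool-use-finetuning-bucket/datasets/train.jsonl"},
    validationDataConfig={"validators": [{"s3Uri": "s3://my-tool-use-finetuning-bucket/datasets/validation.jsonl"}]},
    outputDataConfig={"s3Uri": "s3://my-tool-use-finetuning-bucket/output/"},
    hyperParameters={"epochCount": "2", "learningRate": "0.00001"},
)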

Evaluation framework

Evaluating fine-tuned tool calling LLMs requires a comprehensive approach to assess their performance across various dimensions. The primary metric to evaluate tool calling is accuracy, including both tool selection and argument generation accuracy. This measures how effectively the model selects the correct tool and generates valid arguments. Latency and token usage (input and output tokens) are two other important metrics.

Tool call accuracy evaluates whether the tool predicted by the LLM matches the ground truth tool for each question; a score of 1 is given if they match and 0 when they don’t. After processing all questions, we can use the following equation: Tool Call Accuracy = ∑(Correct Tool Calls) / (Total Number of Test Questions).

Argument call accuracy assesses whether the arguments provided to the tools are correct, based on either exact matches or regex pattern matching. For each tool call, the model’s predicted arguments are extracted and compared using the following argument matching methods:

  • Regex matching – If the ground truth includes regex patterns, the predicted arguments are matched against these patterns. A successful match increases the score.
  • Inclusive string matching – If no regex pattern is provided, the predicted argument is compared to the ground truth argument. Credit is given if the predicted argument contains the ground truth argument. This allows arguments such as search terms to add specificity without being penalized.

The score for each argument is normalized based on the number of arguments, allowing partial credit when multiple arguments are required. The cumulative correct argument scores are averaged across all questions: Argument Call Accuracy = ∑Correct Arguments/(Total Number of Questions).
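
To make the scoring concrete, the following is a simplified sketch of the matching logic described above. It follows the dataset fields shown earlier, ignores order_matters, and is not the exact evaluation code used for the results below.

import re

def score_prediction(example, pred_tool, pred_args):
    """Score one prediction against a ground truth example from the test set."""
    # Tool call accuracy: 1 if the predicted tool matches the ground truth tool.
    tool_score = 1.0 if pred_tool == example["answer"] else 0.0

    gt_args = list(example["args"].values())
    patterns = example.get("arg_pattern") or [None] * len(gt_args)

    correct = 0.0
    for gt_value, pattern, pred_value in zip(gt_args, patterns, pred_args):
        if pattern is not None:
            # Regex matching when a ground truth pattern is provided.
            if re.search(pattern, str(pred_value)):
                correct += 1.0
        else:
            # Inclusive string matching: credit if the prediction contains the ground truth.
            if str(gt_value).lower() in str(pred_value).lower():
                correct += 1.0

    # Normalize by the number of required arguments, allowing partial credit.
    arg_score = correct / len(gt_args) if gt_args else 1.0
    return tool_score, arg_score

Averaging the per-question tool and argument scores over the 120 test questions yields the two accuracy metrics reported in the results.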

Below we show some example questions and accuracy scores:

Example 1:

User question: Execute this run.py script with an argparse arg adding two gpus
GT tool: terminal   LLM output tool: terminal
Pred args:  ['python run.py --gpus 2']
Ground truth pattern: python(3?) run.py --gpus 2
Arg matching method: regex match
Arg matching score: 1.0

Example 2:

User question: Who had the most rushing touchdowns for the bengals in 2017 season?
GT tool: stat_pull   LLM output tool: stat_pull
Pred args:  ['NFL']
Straight match
Arg score 0.3333333333333333
Pred args:  ['2017']
Straight match
Arg score 0.6666666666666666
Pred args:  ['Cincinnati Bengals']
Straight match
Arg score 1.0

Results

We are now ready to visualize the results and compare the performance of base Amazon Nova models to their fine-tuned counterparts.

Base models

The following figures illustrate the performance comparison of the base Amazon Nova models.

Performance comparison of base Amazon Nova models in tool use

The comparison reveals a clear trade-off between accuracy and latency, shaped by model size. Amazon Nova Pro, the largest model, delivers the highest accuracy in both tool call and argument call tasks, reflecting its advanced computational capabilities. However, this comes with increased latency.

In contrast, Amazon Nova Micro, the smallest model, achieves the lowest latency, which is ideal for fast, resource-constrained environments, though it sacrifices some accuracy compared to its larger counterparts.

Fine-tuned models vs. base models

The following figure visualizes accuracy improvement after fine-tuning.

Improvement of finetuned models over base models in tool use

The comparative analysis of the Amazon Nova model variants reveals substantial performance improvements through fine-tuning, with the most significant gains observed in the smaller Amazon Nova Micro model. The fine-tuned Amazon Nova Micro model showed remarkable growth in tool call accuracy, increasing from 75.8% to 95%, which is a 25.38% improvement. Similarly, its argument call accuracy rose from 77.8% to 87.7%, reflecting a 12.74% increase.

In contrast, the fine-tuned Amazon Nova Lite model exhibited more modest gains, with tool call accuracy improving from 90.8% to 96.66%—a 6.46% increase—and argument call accuracy rising from 85% to 89.9%, marking a 5.76% improvement. Both fine-tuned models surpassed the accuracy achieved by the Amazon Nova Pro base model.

These results highlight that fine-tuning can significantly enhance the performance of lightweight models, making them strong contenders for applications where both accuracy and latency are critical.

Conclusion

In this post, we demonstrated model customization (fine-tuning) for tool use with Amazon Nova. We first introduced a tool usage use case, and gave details about the dataset. We walked through the details of Amazon Nova specific data formatting and showed how to do tool calling through the Converse and Invoke APIs in Amazon Bedrock. After getting the baseline results from Amazon Nova models, we explained in detail the fine-tuning process, hosting fine-tuned models with provisioned throughput, and using the fine-tuned Amazon Nova models for inference. In addition, we touched upon getting insights from training and validation artifacts from a fine-tuning job in Amazon Bedrock.

Check out the detailed notebook for tool usage to learn more. For more information on Amazon Bedrock and the latest Amazon Nova models, refer to the Amazon Bedrock User Guide and Amazon Nova User Guide. The Generative AI Innovation Center has a group of AWS science and strategy experts with comprehensive expertise spanning the generative AI journey, helping customers prioritize use cases, build roadmaps, and move solutions into production. See Generative AI Innovation Center for our latest work and customer success stories.


About the Authors

Baishali Chaudhury is an Applied Scientist at the Generative AI Innovation Center at AWS, where she focuses on advancing generative AI solutions for real-world applications. She has a strong background in computer vision, machine learning, and AI for healthcare. Baishali holds a PhD in Computer Science from the University of South Florida and completed a postdoctoral fellowship at Moffitt Cancer Center.

Isaac Privitera is a Principal Data Scientist with the AWS Generative AI Innovation Center, where he develops bespoke generative AI-based solutions to address customers’ business problems. His primary focus lies in building responsible AI systems, using techniques such as RAG, multi-agent systems, and model fine-tuning. When not immersed in the world of AI, Isaac can be found on the golf course, enjoying a football game, or hiking trails with his loyal canine companion, Barry.

Mengdie (Flora) Wang is a Data Scientist at the AWS Generative AI Innovation Center, where she works with customers to architect and implement scalable generative AI solutions that address their unique business challenges. She specializes in model customization techniques and agent-based AI systems, helping organizations harness the full potential of generative AI technology. Prior to AWS, Flora earned her Master’s degree in Computer Science from the University of Minnesota, where she developed her expertise in machine learning and artificial intelligence.
