6x faster Async Checkpointing in PyTorch, using Cached Plans, no GIL contention

Meta: Less Wright, Meet Vadakkanchery, Saurabh Mishra, Ela Krepska, Hamid Shojanazeri, Pradeep Fernando
Crusoe: Ethan Petersen, Martin Cala, Chip Smith

PyTorch DCP (Distributed Checkpointing) has recently enabled new optimizations in asynchronous checkpointing to reduce GPU utilization drop by minimizing collective overhead and improving overall checkpointing efficiency.

Using Crusoe’s 2K H200 cluster with TorchTitan to train Llama3-70B, we verified that these new features deliver substantial speedups at 1856-GPU scale, reducing the background processing time for async DCP checkpoints from ~436 seconds to ~67 seconds.

This is roughly a 6.5x reduction in background checkpoint processing time, allowing more of the total training time to proceed at full training throughput.

Fig 1: 1856-GPU training run with high-frequency checkpointing. The first checkpoint (the drop in TPS) does not have a cached save plan, and its background processing takes far longer than the rest, where the cached plan is used.

Background: What is Asynchronous Checkpointing?

In a standard checkpointing workflow, GPUs are blocked while the checkpointing data is offloaded from GPU to CPU and then written to storage. After the save to physical media is complete, training can resume.

Asynchronous checkpointing greatly reduces this downtime by moving the actual save to storage onto CPU threads, allowing GPU-based training to continue while the checkpoint data is persisted in parallel. It is used primarily for intermediate/fault-tolerant checkpoints, because it unblocks the GPUs much faster than synchronous checkpoints.
For example, in our large-scale experiment, GPU training was blocked for less than a second (0.78 seconds at 1856-GPU scale) while checkpoint data was moved from GPU to CPU (staging). GPU training then resumes immediately, which is a substantial training time improvement over traditional checkpointing. For reference, Async Checkpointing is covered in more detail here.
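As a rough illustration of this staging-then-background flow (not the actual TorchTitan integration; the wrapper function, checkpoint path, and plain state_dicts below are simplifications), dcp.async_save blocks only for the CPU copy and returns a future that completes when the background write finishes:

import torch.distributed.checkpoint as dcp

def maybe_async_checkpoint(model, optimizer, step, prev_future=None):
    """Kick off an async checkpoint; only the CPU staging copy blocks training."""
    if prev_future is not None:
        prev_future.result()  # wait for the previous background save to finish
    state_dict = {"model": model.state_dict(), "optim": optimizer.state_dict()}
    # async_save stages tensors to CPU (the brief blocking part), then returns a
    # future that completes once the background write to storage is done.
    return dcp.async_save(state_dict, checkpoint_id=f"checkpoints/step-{step}")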

Challenges with Asynchronous Checkpointing

However, the background processing inherent in asynchronous checkpointing introduces additional challenges that result in a temporary reduction of training throughput while the storage phase completes. These are highlighted below.

GPU utilization drop from GIL contention:

The Global Interpreter Lock (GIL) in Python is a mechanism that prevents multiple native threads from executing Python bytecode at the same time. This lock is necessary mainly because CPython’s memory management is not thread-safe.

DCP currently uses background threads for metadata collectives and for uploading to storage. Although these expensive steps run asynchronously, they contend for the GIL with the trainer threads. This causes GPU utilization (training QPS) to suffer significantly and also increases the end-to-end upload latency. For large-scale checkpoints, this CPU-side processing suppresses net GPU training speed, since the CPUs also drive training via GPU kernel launches.

Please refer to the following figure from our experiments:

Fig 2: A sustained drop in training QPS is visible even after staging (the only operation that blocks the trainer) is complete.

The first dip in Figure 2 (marked by the purple line) indicates that staging is complete and training can continue. However, a second drop is evident (marked by the area between the purple and yellow lines), caused by the trainer and checkpointing threads contending for the Python GIL, which degrades training QPS until the checkpoint thread completes execution.

Collective communications cost:

DCP today performs multiple collectives for various reasons: dedupe, global metadata for the checkpoint, resharding, and distributed exception handling. Collectives are costly because they require network I/O and pickling/unpickling of the large metadata sent across the GPU network. These collectives become extremely expensive as the job scale grows, leading to significantly higher end-to-end latency and the potential for collective timeouts.

Solutions

Process-based async checkpointing

DCP now supports async checkpoint save via a background process. This helps avoid the training QPS drop by eliminating Python GIL contention with the trainer threads. Please see Fig 2 for checkpointing via threads and Fig 3 for checkpointing via a background process.

Caching of the save plans

DCP has a clear boundary between the planning and storage I/O steps. SavePlanner in DCP is a stateful component that acts as an access proxy to the state_dict. The planner manages save plans prepared by individual ranks, which carry the metadata needed to perform the write I/O. The planning step involves a collective operation to gather a comprehensive view of the checkpoint on the coordinator rank. The coordinator rank is responsible for de-duplicating parameters/weights to eliminate redundancies, validating the global plan to ensure accuracy and consistency, and creating the global metadata structs. This is followed by a scatter collective in which the coordinator rank assigns I/O tasks to each rank. Any transformations done on the plans affect how the storage components finally write the data.

During the course of a training job, multiple checkpoints are saved. In the majority of these cases, only the checkpoint data changes between different save instances, and thus, the plan remains the same. This presented an opportunity for us to cache the plans, pay the planning cost only on the first save, and then amortize that cost across all the subsequent attempts. Only the updated plans (plans which changed in the next attempt) are sent via collective, thus reducing the collective overhead significantly.

Experiment Results

Setup: 1856 H200 GPUs, Llama3-70B, HSDP2 with TorchTitan

After deploying both the solutions above, the following are the key results:

  • The TPS drop has narrowed significantly, with a peak dip to 372 TPS vs. 315 TPS previously, and over a greatly reduced time window (~67 seconds vs. ~437 seconds). This remaining window is now mostly attributable to blocking for CPU processing.
  • Subsequent checkpoint save attempts also continue to be much faster due to very low overhead at the planning stage. End-to-end latency is thus improved by over 6.5x. This will allow our partners to increase checkpointing frequency and reduce lost training progress (i.e. wasted training time).

Looking at the very first downspike in Figure 1, this drawdown takes training throughput from 700 down to 320 TPS and suppresses it for roughly 7 minutes (467 seconds). Once the CPUs have finished processing, training continues again at full speed.

Previously, this ~7 minute suppression would be repeated at every checkpoint. However, with the new process-based checkpointing feature, only the first checkpoint has the full drawdown time (mainly due to overhead from daemon process initialization), as all future checkpoints are executed via the background process, mitigating GIL contention with the trainer threads.

This is visible in all subsequent checkpoints, where the average MFU suppression time drops to just over a minute, reflected by the sharp downward spikes that almost immediately revert to full MFU throughput.

Fig 3: The red box shows the non-cached plan checkpoint, which also includes Checkpoint Background Init process overhead, while the purple box highlights the first checkpoint to run with the cached plan.

This means that even large-scale checkpointing, such as shown in Fig 2 at 1856-GPU scale, can be done with ~6x reduced training throughput impact. This enables asynchronous DCP checkpointing to be run more frequently (and thus provide better rollback protection) while improving total training throughput relative to the previous async checkpointing overhead.

Using DCP’s cached checkpointing:

These features are already available in the PyTorch nightly builds, and you can test out PyTorch’s asynchronous DCP checkpointing directly in TorchTitan. The instructions to enable them are below, followed by a minimal code sketch:

  • Process-based asynchronous checkpointing:
    • Set the async_checkpointer_type to AsyncCheckpointerType.PROCESS in the async_save API. (file: pytorch/torch/distributed/checkpoint/state_dict_saver.py)
  • Save plan caching:
    • Set the enable_plan_caching flag to true in the DefaultSavePlanner. (file: pytorch/torch/distributed/checkpoint/default_planner.py)
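Putting both together, a minimal sketch (the helper function and checkpoint path are illustrative, and exact keyword/enum import locations may vary across nightlies; they live in the files noted above):

import torch.distributed.checkpoint as dcp
from torch.distributed.checkpoint.default_planner import DefaultSavePlanner
from torch.distributed.checkpoint.state_dict_saver import AsyncCheckpointerType

def async_save_with_new_features(state_dict, step):
    # Plan caching: the full planning collective runs only on the first save;
    # later saves reuse the cached plan and only send plan deltas.
    planner = DefaultSavePlanner(enable_plan_caching=True)

    # PROCESS runs staging/upload in a background daemon process instead of a
    # thread, so checkpointing no longer contends with the trainer for the GIL.
    return dcp.async_save(
        state_dict,
        checkpoint_id=f"checkpoints/step-{step}",
        planner=planner,
        async_checkpointer_type=AsyncCheckpointerType.PROCESS,
    )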

Future work

DCP will be rolling out additional optimizations to further reduce checkpointing cost. Currently, even though the save plans are cached, the coordinator rank still prepares the global metadata. For larger jobs and models with many tensors, this overhead is non-trivial. In the next iteration, DCP will eliminate this metadata overhead and further improve end-to-end latency. DCP will also introduce additional optimizations, such as zero-overhead checkpointing, to enable efficient checkpointing in large-scale jobs.

Stay tuned!

Read More

Amazon Bedrock Model Distillation: Boost function calling accuracy while reducing cost and latency

Amazon Bedrock Model Distillation is generally available, and it addresses the fundamental challenge many organizations face when deploying generative AI: how to maintain high performance while reducing costs and latency. This technique transfers knowledge from larger, more capable foundation models (FMs) that act as teachers to smaller, more efficient models (students), creating specialized models that excel at specific tasks. In this post, we highlight the advanced data augmentation techniques and performance improvements in Amazon Bedrock Model Distillation with Meta’s Llama model family.

Agent function calling represents a critical capability for modern AI applications, allowing models to interact with external tools, databases, and APIs by accurately determining when and how to invoke specific functions. Although larger models typically excel at identifying the appropriate functions to call and constructing proper parameters, they come with higher costs and latency. Amazon Bedrock Model Distillation now enables smaller models to achieve comparable function calling accuracy while delivering substantially faster response times and lower operational costs.

The value proposition is compelling: organizations can deploy AI agents that maintain high accuracy in tool selection and parameter construction while benefiting from the reduced footprint and increased throughput of smaller models. This advancement makes sophisticated agent architectures more accessible and economically viable across a broader range of applications and scales of deployment.

Prerequisites

For a successful implementation of Amazon Bedrock Model Distillation, you’ll need to meet several requirements. We recommend referring to Submit a model distillation job in Amazon Bedrock in the official AWS documentation for the most up-to-date and comprehensive information.

Key requirements include:

  • An active AWS account
  • Selected teacher and student models enabled in your account (verify on the Model access page of the Amazon Bedrock console)
  • An S3 bucket for storing input datasets and output artifacts
  • Appropriate IAM permissions:
    • Trust relationship allowing Amazon Bedrock to assume the role
    • Permissions to access S3 for input/output data and invocation logs
    • Permissions for model inference when using inference profiles

If you’re using historical invocation logs, confirm that model invocation logging is enabled in your Amazon Bedrock settings, with S3 selected as the logging destination.

Preparing your data

Effective data preparation is crucial for successful distillation of agent function calling capabilities. Amazon Bedrock provides two primary methods for preparing your training data: uploading JSONL files to Amazon S3 or using historical invocation logs. Regardless of which method you choose, you’ll need to properly format the tool specifications to enable successful agent function calling distillation.

Tool specification format requirements

For agent function calling distillation, Amazon Bedrock requires that tool specifications be provided as part of your training data. These specifications must be encoded as text within the system or user message of your input data. The following example uses the Llama model family’s function calling format:

system: 'You are an expert in composing functions. You are given a question and a set of possible functions. Based on the question, you will need to make one or more function/tool calls to achieve the purpose.

Here is a list of functions in JSON format that you can invoke.
[
    {
        "name": "lookup_weather",
        "description": "Lookup weather to a specific location",
        "parameters": {
            "type": "dict",
            "required": [
                "city"
            ],
            "properties": {
                "location": {
                    "type": "string",
                },
                "date": {
                    "type": "string",
                }
            }
        }
    }
 ]'
 user: "What's the weather tomorrow?"

This approach lets the model learn how to interpret tool definitions and make appropriate function calls based on user queries. Afterwards, when calling inference on the distilled student model, we suggest keeping the prompt format consistent with the distillation input data. This provides optimal performance by maintaining the same structure the model was trained on.
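For instance, a minimal inference sketch along those lines is shown below; the Converse call shape is standard Bedrock Runtime usage, but the model ID is a placeholder for your distilled model, and reading the system prompt from a file is an illustrative convention (it is assumed to be the same prompt used during distillation):

import boto3

bedrock_runtime = boto3.client("bedrock-runtime")

# Placeholder: replace with the ARN/ID under which your distilled model is deployed.
DISTILLED_MODEL_ID = "arn:aws:bedrock:us-east-1:123456789012:provisioned-model/EXAMPLE"

# Reuse the same system prompt (with the embedded tool list) that appeared in the
# distillation training data, so the inference-time format matches training.
with open("tool_system_prompt.txt") as f:
    system_prompt = f.read()

response = bedrock_runtime.converse(
    modelId=DISTILLED_MODEL_ID,
    system=[{"text": system_prompt}],
    messages=[{"role": "user", "content": [{"text": "What's the weather tomorrow?"}]}],
)
print(response["output"]["message"]["content"][0]["text"])
# Expected output is a function call string such as:
# [lookup_weather(location="...", date="tomorrow")]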

Preparing data using Amazon S3 JSONL upload

When creating a JSONL file for distillation, each record must follow this structure:

{
    "schemaVersion": "bedrock-conversation-2024",
    "system": [
        {
            "text": 'You are an expert in composing functions. You are given a question and a set of possible functions. Based on the question, you will need to make one or more function/tool calls to achieve the purpose.
                    Here is a list of functions in JSON format that you can invoke.
                    [
                        {
                            "name": "lookup_weather",
                            "description": "Lookup weather to a specific location",
                            "parameters": {
                                "type": "dict",
                                "required": [
                                    "city"
                                ],
                                "properties": {
                                    "location": {
                                        "type": "string",
                                    },
                                    "date": {
                                        "type": "string",
                                    }
                                }
                            }
                        }
                    ]'
        }
    ],
    "messages": [
        {
            "role": "user",
            "content": [
                {
                    "text": "What's the weather tomorrow?"
                }
            ]
        },
        {
            "role": "assistant",
            "content": [
               {
                   "text": "[lookup_weather(location="san francisco", date="tomorrow")]"
               }
            ]
        }
    ]
}

Each record must include the schemaVersion field with the value bedrock-conversation-2024. The system field contains instructions for the model, including available tools. The messages field contains the conversation, with required user input and optional assistant responses.
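As a small helper sketch (the bucket, key, and file names are placeholders), you can assemble such records in Python, write them one JSON object per line, and upload the file to S3 for use as training data:

import json
import boto3

def write_and_upload_jsonl(records, local_path, bucket, key):
    """Write bedrock-conversation-2024 records as JSONL and upload to S3."""
    with open(local_path, "w") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")  # one JSON object per line
    boto3.client("s3").upload_file(local_path, bucket, key)

record = {
    "schemaVersion": "bedrock-conversation-2024",
    "system": [{"text": "...system prompt containing the tool list..."}],
    "messages": [
        {"role": "user", "content": [{"text": "What's the weather tomorrow?"}]},
        {
            "role": "assistant",
            "content": [{"text": '[lookup_weather(location="san francisco", date="tomorrow")]'}],
        },
    ],
}

write_and_upload_jsonl([record], "train.jsonl", "my-distillation-bucket", "input/train.jsonl")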

Using historical invocation logs

Alternatively, you can use your historical model invocation logs on Amazon Bedrock for distillation. This approach uses actual production data from your application, capturing real-world function calling scenarios. To use this method:

  1. Enable invocation logging in your Amazon Bedrock account settings, selecting S3 as your logging destination.
  2. Add metadata to your model invocations using the requestMetadata field to categorize interactions. For example:
    "requestMetadata": { 
       "project": "WeatherAgent", 
       "intent": "LocationQuery", 
       "priority": "High"
    }

  3. When creating your distillation job, specify filters to select relevant logs based on metadata:
    "requestMetadataFilters": { 
        "equals": {"project": "WeatherAgent"} 
    }

Using historical invocation logs lets you distill knowledge from your production workloads, allowing the model to learn from real user interactions and function calls.
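A rough boto3 sketch of wiring those metadata filters into a distillation job is shown below. create_model_customization_job is the standard Bedrock customization API, but the distillation- and invocation-log-specific field names here are assumptions to verify against the current API reference, and every ARN, model ID, and name is a placeholder:

import boto3

bedrock = boto3.client("bedrock")

# All identifiers are placeholders; the customizationConfig / trainingDataConfig
# shapes below are assumptions to check against the Bedrock API reference.
bedrock.create_model_customization_job(
    jobName="weather-agent-distillation",
    customModelName="weather-agent-distilled",
    roleArn="arn:aws:iam::123456789012:role/BedrockDistillationRole",
    baseModelIdentifier="meta.llama3-2-3b-instruct-v1:0",   # student model
    customizationType="DISTILLATION",
    customizationConfig={
        "distillationConfig": {
            "teacherModelConfig": {
                "teacherModelIdentifier": "meta.llama3-3-70b-instruct-v1:0",
                "maxResponseLengthForInference": 1000,
            }
        }
    },
    trainingDataConfig={
        "invocationLogsConfig": {
            "invocationLogSource": {"s3Uri": "s3://my-invocation-logs/"},
            "requestMetadataFilters": {"equals": {"project": "WeatherAgent"}},
        }
    },
    outputDataConfig={"s3Uri": "s3://my-distillation-bucket/output/"},
)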

Model distillation enhancements

Although the basic process for creating a model distillation job remains similar to what we described in our previous blog post, Amazon Bedrock Model Distillation introduces several enhancements with general availability that improve the experience, capabilities, and transparency of the service.

Expanded model support

With general availability, we have expanded the model options available for distillation. In addition to the models supported during preview, customers can now use:

  • Nova Premier as a teacher model for Nova Pro/Lite/Micro models distillation
  • Anthropic Claude Sonnet 3.5 v2 as a teacher model for Claude Haiku distillation
  • Meta’s Llama 3.3 70B as teacher and 3.2 1B and 3B as student models for Meta model distillation

This broader selection allows customers to find the balance between performance and efficiency across different use cases. For the most current list of supported models, refer to the Amazon Bedrock documentation.

Advanced data synthesis technology

Amazon Bedrock applies proprietary data synthesis techniques during the distillation process for certain use cases. This automatically generates additional training examples that improve the student model’s ability to produce better responses.

For agent function calling with Llama models specifically, the data augmentation methods help bridge the performance gap between teacher and student models compared to vanilla distillation (vanilla distillation means directly annotating input data with teacher responses and running student training with supervised fine-tuning). This makes the student models’ performance much more comparable to the teacher after distillation, while maintaining the cost and latency benefits of a smaller model.

Enhanced training visibility

Amazon Bedrock model distillation now provides better visibility into the training process through multiple enhancements:

  1. Synthetic data transparency – Model distillation now provides samples of the synthetically generated training data used to enhance model performance. For most model families, up to 50 sample prompts are exported (up to 25 for Anthropic models), giving you insight into how your model was trained, which can help support internal compliance requirements.
  2. Prompt insights reporting – A summarized report of prompts accepted for distillation is provided, along with detailed visibility into prompts that were rejected and the specific reason for rejection. This feedback mechanism helps you identify and fix problematic prompts to improve your distillation success rate.

These insights are stored in the output S3 bucket specified during job creation, giving you a clearer picture of the knowledge transfer process.

Improved job status reporting

Amazon Bedrock Model Distillation also offers enhanced training job status reporting to provide more detailed information about where your model distillation job stands in the process. Rather than brief status indicators such as “In Progress” or “Complete,” the system now provides more granular status updates, helping you better track the progress of the distillation job.

You can track these job status details in both the AWS Management Console and AWS SDK.
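For example, a quick SDK status check (reusing the placeholder job name from the earlier sketch) might look like this:

import boto3

job = boto3.client("bedrock").get_model_customization_job(
    jobIdentifier="weather-agent-distillation"  # placeholder job name
)
print(job["status"])  # e.g. "InProgress"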

Performance improvements and benefits

Now that we’ve explored the feature enhancements in Amazon Bedrock Model Distillation, we examine the benefits these capabilities deliver, particularly for agent function calling use cases.

Evaluation metric

We use abstract syntax tree (AST) evaluation to assess function calling performance. AST evaluation parses the generated function call and performs fine-grained checks on the correctness of the generated function name, parameter values, and data types with the following workflow:

  1. Function matching – Checks if the predicted function name is consistent with one of the possible answers
  2. Required parameter matching – Extracts the arguments from the AST and checks if each parameter can be found and exact matched in possible answers
  3. Parameter type and value matching – Checks if the predicted parameter values and types are correct

The process is illustrated in the following diagram from Gorilla: Large Language Model Connected with Massive APIs.
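To make the workflow concrete, here is a small sketch in the same spirit (not the benchmark’s actual implementation); it parses a predicted call with Python’s ast module and applies the three checks, with the expected-answer format being an assumption:

import ast

def parse_call(call_str):
    """Parse a call string like 'lookup_weather(location="Hanoi", date="tomorrow")'."""
    node = ast.parse(call_str.strip(), mode="eval").body
    if isinstance(node, ast.List) and len(node.elts) == 1:  # handle the "[call(...)]" form
        node = node.elts[0]
    if not isinstance(node, ast.Call):
        raise ValueError("not a function call")
    name = node.func.id if isinstance(node.func, ast.Name) else ast.unparse(node.func)
    kwargs = {kw.arg: ast.literal_eval(kw.value) for kw in node.keywords}
    return name, kwargs

def call_matches(predicted, expected_name, expected_kwargs):
    name, kwargs = parse_call(predicted)
    if name != expected_name:                                    # 1. function matching
        return False
    for param, value in expected_kwargs.items():                 # 2. required parameters present
        if param not in kwargs:
            return False
        if type(kwargs[param]) is not type(value) or kwargs[param] != value:  # 3. type and value
            return False
    return True

print(call_matches('[lookup_weather(location="Hanoi", date="tomorrow")]',
                   "lookup_weather", {"location": "Hanoi", "date": "tomorrow"}))  # True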

Experiment results

To evaluate model distillation in the function call use case, we used the BFCL v2 dataset and filtered it to specific domains (entertainment, in this case) to match a typical use case of model customization. We also split the data into training and test sets and performed distillation on the training data while we ran evaluations on the test set. Both the training set and the test set contained around 200 examples. We assessed the performance of several models, including the teacher model (Llama 405B), the base student model (Llama 3B), a vanilla distillation version where Llama 405B is distilled into Llama 3B without data augmentation, and an advanced distillation version enhanced with proprietary data augmentation techniques.

The evaluation focused on the simple and multiple categories defined in the BFCL V2 dataset. As shown in the following chart, there is a performance gap between the teacher and the base student model across both categories. Vanilla distillation significantly improved the base student model’s performance. In the simple category, performance increased from 0.478 to 0.783, representing a 63.8% relative improvement. In the multiple category, the score rose from 0.586 to 0.742, a 26.6% relative improvement. On average, vanilla distillation led to a 45.2% improvement across the two categories.

Applying data augmentation techniques provided further gains beyond vanilla distillation. In the simple category, performance improved from 0.783 to 0.826, and in the multiple category, from 0.742 to 0.828. On average, this resulted in a 5.8% relative improvement across both categories, calculated as the mean of the relative gains in each. These results highlight the effectiveness of both distillation and augmentation strategies in enhancing student model performance for function call tasks.

We show the latency and output speed comparison for different models in the following figure. The data was gathered from Artificial Analysis, a website that provides independent analysis of AI models and providers, on April 4, 2025. There is a clear trend in latency and generation speed across Llama models of different sizes. Notably, the Llama 3.1 8B model offers the highest output speed, making it the most efficient in terms of responsiveness and throughput. Similarly, Llama 3.2 3B performs well, with slightly higher latency but still a solid output speed. On the other hand, Llama 3.1 70B and Llama 3.1 405B exhibit much higher latencies with significantly lower output speeds, indicating a substantial performance cost at larger model sizes. Compared to Llama 3.1 405B, Llama 3.2 3B provides a 72% latency reduction and a 140% output speed improvement. These results suggest that smaller models may be more suitable for applications where speed and responsiveness are critical.

In addition, we report the comparison of cost per 1M tokens for different Llama models. As shown in the following figure, it’s evident that smaller models (Llama 3.2 3B and Llama 3.1 8B) are significantly more cost-effective. As the model size increases (Llama 3.1 70B and Llama 3.1 405B), the pricing scales steeply. This dramatic increase underscores the trade-off between model complexity and operational cost.

Real-world agent applications require LLMs that balance accuracy, speed, and cost. This result shows that using a distilled model for agent applications gives developers the speed and cost of smaller models while achieving accuracy similar to the larger teacher model.

Conclusion

Amazon Bedrock Model Distillation is now generally available, offering organizations a practical pathway for deploying capable agent experiences without compromising on performance or cost-efficiency. As our performance evaluation demonstrates, distilled models for function calling can achieve accuracy comparable to models many times their size while delivering significantly faster inference and lower operational costs. This capability enables scalable deployment of AI agents that can accurately interact with external tools and systems across enterprise applications.

Start using Amazon Bedrock Model Distillation today through the AWS Management Console or API to transform your generative AI applications, including agentic use cases, with the balance of accuracy, speed, and cost efficiency. For implementation examples, check out our code samples in the amazon-bedrock-samples GitHub repository.

Appendix

BFCL V2 simple category

Definition: The simple category consists of tasks where the user is provided with a single function documentation (that is, one JSON function definition), and the model is expected to generate exactly one function call that matches the user’s request. This is the most basic and commonly encountered scenario, focusing on whether the model can correctly interpret a straightforward user query and map it to the only available function, filling in the required parameters as needed.

# Example
{
    "id": "live_simple_0-0-0",
    "question": [
        [{
            "role": "user",
            "content": "Can you retrieve the details for the user with the ID 7890, who has black as their special request?"
        }]
    ],
    "function": [{
        "name": "get_user_info",
        "description": "Retrieve details for a specific user by their unique identifier.",
        "parameters": {
            "type": "dict",
            "required": ["user_id"],
            "properties": {
                "user_id": {
                    "type": "integer",
                    "description": "The unique identifier of the user. It is used to fetch the specific user details from the database."
                },
                "special": {
                    "type": "string",
                    "description": "Any special information or parameters that need to be considered while fetching user details.",
                    "default": "none"
                }
            }
        }
    }]
}

BFCL V2 multiple category

Definition: The multiple category presents the model with a user query and several (typically two to four) function documentations. The model must select the most appropriate function to call based on the user’s intent and context and then generate a single function call accordingly. This category evaluates the model’s ability to understand the user’s intent, distinguish between similar functions, and choose the best match from multiple options.

{
    "id": "live_multiple_3-2-0",
    "question": [
        [{
            "role": "user",
            "content": "Get weather of Ha Noi for me"
        }]
    ],
    "function": [{
        "name": "uber.ride",
        "description": "Finds a suitable Uber ride for the customer based on the starting location, the desired ride type, and the maximum wait time the customer is willing to accept.",
        "parameters": {
            "type": "dict",
            "required": ["loc", "type", "time"],
            "properties": {
                "loc": {
                    "type": "string",
                    "description": "The starting location for the Uber ride, in the format of 'Street Address, City, State', such as '123 Main St, Springfield, IL'."
                },
                "type": {
                    "type": "string",
                    "description": "The type of Uber ride the user is ordering.",
                    "enum": ["plus", "comfort", "black"]
                },
                "time": {
                    "type": "integer",
                    "description": "The maximum amount of time the customer is willing to wait for the ride, in minutes."
                }
            }
        }
    }, {
        "name": "api.weather",
        "description": "Retrieve current weather information for a specified location.",
        "parameters": {
            "type": "dict",
            "required": ["loc"],
            "properties": {
                "loc": {
                    "type": "string",
                    "description": "The location for which weather information is to be retrieved, in the format of 'City, Country' (e.g., 'Paris, France')."
                }
            }
        }
    }]
}

About the authors

Yanyan Zhang is a Senior Generative AI Data Scientist at Amazon Web Services, where she has been working on cutting-edge AI/ML technologies as a Generative AI Specialist, helping customers use generative AI to achieve their desired outcomes. Yanyan graduated from Texas A&M University with a PhD in Electrical Engineering. Outside of work, she loves traveling, working out, and exploring new things.

Ishan Singh is a Generative AI Data Scientist at Amazon Web Services, where he helps customers build innovative and responsible generative AI solutions and products. With a strong background in AI/ML, Ishan specializes in building generative AI solutions that drive business value. Outside of work, he enjoys playing volleyball, exploring local bike trails, and spending time with his wife and dog, Beau.

Yijun Tian is an Applied Scientist II at AWS Agentic AI, where he focuses on advancing fundamental research and applications in Large Language Models, Agents, and Generative AI. Prior to joining AWS, he obtained his Ph.D. in Computer Science from the University of Notre Dame.

Yawei Wang is an Applied Scientist at AWS Agentic AI, working at the forefront of generative AI technologies to build next-generation AI products within AWS. He also collaborates with AWS business partners to identify and develop machine learning solutions that address real-world industry challenges.

David Yan is a Senior Research Engineer at AWS Agentic AI, leading efforts in Agent Customization and Optimization. Prior to that, he was in AWS Bedrock, leading the model distillation effort to help customers optimize LLM latency, cost, and accuracy. His research interests include AI agents, planning and prediction, and inference optimization. Before joining AWS, David worked on planning and behavior prediction for autonomous driving at Waymo. Before that, he worked on natural language understanding for knowledge graphs at Google. David received an M.S. in Electrical Engineering from Stanford University and a B.S. in Physics from Peking University.

Panpan Xu is a Principal Applied Scientist at AWS Agentic AI, leading a team working on Agent Customization and Optimization. Prior to that, she led a team in AWS Bedrock working on research and development of inference optimization techniques for foundation models, ranging from modeling-level techniques such as model distillation and sparsification to hardware-aware optimization. Her past research interests cover a broad range of topics including model interpretability, graph neural networks, human-in-the-loop AI, and interactive data visualization. Prior to joining AWS, she was a lead research scientist at Bosch Research and obtained her PhD in computer science from Hong Kong University of Science and Technology.

Shreeya Sharma is a Senior Technical Product Manager at AWS, where she has been working on leveraging the power of generative AI to deliver innovative and customer-centric products. Shreeya holds a master’s degree from Duke University. Outside of work, she loves traveling, dancing, and singing.

Read More

Classifier-Free Guidance is a Predictor-Corrector

We investigate the theoretical foundations of classifier-free guidance (CFG). CFG is the dominant method of conditional sampling for text-to-image diffusion models, yet unlike other aspects of diffusion, it remains on shaky theoretical footing. In this paper, we disprove common misconceptions, by showing that CFG interacts differently with DDPM (Ho et al., 2020) and DDIM (Song et al., 2021), and neither sampler with CFG generates the gamma-powered distribution p(x|c)^γp(x)^{1−γ}. Then, we clarify the behavior of CFG by showing that it is a kind of predictor-corrector method (Song et al., 2020)…Apple Machine Learning Research

Improved Sample Complexity for Private Nonsmooth Nonconvex Optimization

We study differentially private (DP) optimization algorithms for stochastic and empirical objectives which are neither smooth nor convex, and propose methods that return a Goldstein-stationary point with sample complexity bounds that improve on existing works.
We start by providing a single-pass (ε,δ)-DP algorithm that returns an (α,β)-stationary point as long as the dataset is of size…Apple Machine Learning Research

Mechanisms of Projective Composition of Diffusion Models

We study the theoretical foundations of composition in diffusion models, with a particular focus on out-of-distribution extrapolation and length-generalization. Prior work has shown that composing distributions via linear score combination can achieve promising results, including length-generalization in some cases (Du et al., 2023; Liu et al., 2022). However, our theoretical understanding of how and why such compositions work remains incomplete. In fact, it is not even entirely clear what it means for composition to “work”. This paper starts to address these fundamental gaps. We begin by…Apple Machine Learning Research

Local Pan-Privacy for Federated Analytics

Pan-privacy was proposed by Dwork et al. (2010) as an approach to designing a private analytics system that retains its privacy properties in the face of intrusions that expose the system’s internal state. Motivated by federated telemetry applications, we study local pan-privacy, where privacy should be retained under repeated unannounced intrusions on the local state. We consider the problem of monitoring the count of an event in a federated system, where event occurrences on a local device should be hidden even from an intruder on that device. We show that under reasonable constraints, the…Apple Machine Learning Research