Implement semantic video search using open source large vision models on Amazon SageMaker and Amazon OpenSearch Serverless

As companies and individual users deal with constantly growing amounts of video content, the ability to perform low-effort searches that retrieve videos or video segments using natural language becomes increasingly valuable. Semantic video search offers a powerful solution to this problem, enabling users to find relevant video content based on textual queries or descriptions. This approach can be used in a wide range of applications, from personal photo and video libraries to professional video editing and enterprise-level content discovery and moderation, where it can significantly improve the way we interact with and manage video content.

Large-scale pre-training of computer vision models with self-supervision directly from natural language descriptions of images has made it possible to capture a wide set of visual concepts, while also bypassing the need for labor-intensive manual annotation of training data. After pre-training, natural language can be used to either reference the learned visual concepts or describe new ones, effectively enabling zero-shot transfer to a diverse set of computer vision tasks, such as image classification, retrieval, and semantic analysis.

In this post, we demonstrate how to use large vision models (LVMs) for semantic video search using natural language and image queries. We introduce some use case-specific methods, such as temporal frame smoothing and clustering, to enhance the video search performance. Furthermore, we demonstrate the end-to-end functionality of this approach by using both asynchronous and real-time hosting options on Amazon SageMaker AI to perform video, image, and text processing using publicly available LVMs on the Hugging Face Model Hub. Finally, we use Amazon OpenSearch Serverless with its vector engine for low-latency semantic video search.

About large vision models

In this post, we implement video search capabilities using multimodal LVMs, which integrate textual and visual modalities during the pre-training phase, using techniques such as contrastive multimodal representation learning, Transformer-based multimodal fusion, or multimodal prefix language modeling (for more details, see Review of Large Vision Models and Visual Prompt Engineering by J. Wang et al.). Such LVMs have recently emerged as foundational building blocks for various computer vision tasks. Owing to their capability to learn a wide variety of visual concepts from massive datasets, these models can effectively solve diverse downstream computer vision tasks across different image distributions without the need for fine-tuning. In this section, we briefly introduce some of the most popular publicly available LVMs (which we also use in the accompanying code sample).

The CLIP (Contrastive Language-Image Pre-training) model, introduced in 2021, represents a significant milestone in the field of computer vision. Trained on a collection of 400 million image-text pairs harvested from the internet, CLIP showcased the remarkable potential of using large-scale natural language supervision for learning rich visual representations. Through extensive evaluations across over 30 computer vision benchmarks, CLIP demonstrated impressive zero-shot transfer capabilities, often matching or even surpassing the performance of fully supervised, task-specific models. For instance, a notable achievement of CLIP is its ability to match the top accuracy of a ResNet-50 model trained on the 1.28 million images of the ImageNet dataset, despite operating in a true zero-shot setting, without fine-tuning or any other access to the labeled examples.

Following the success of CLIP, the open-source initiative OpenCLIP further advanced the state-of-the-art by releasing an open implementation pre-trained on the massive LAION-2B dataset, comprising 2.3 billion English image-text pairs. This substantial increase in the scale of training data enabled OpenCLIP to achieve even better zero-shot performance across a wide range of computer vision benchmarks, demonstrating further potential of scaling up natural language supervision for learning more expressive and generalizable visual representations.

Finally, the set of SigLIP (Sigmoid Loss for Language-Image Pre-training) models, including one trained on a 10 billion multilingual image-text dataset spanning over 100 languages, further pushed the boundaries of large-scale multimodal learning. These models employ an alternative loss function for the contrastive pre-training scheme used in CLIP and have shown superior performance in language-image pre-training, outperforming both CLIP and OpenCLIP baselines on a variety of computer vision tasks.

Solution overview

Our approach uses a multimodal LVM to enable efficient video search and retrieval based on both textual and visual queries. The approach can be logically split into an indexing pipeline, which can be carried out offline, and an online video search logic. The following diagram illustrates the pipeline workflows.

The indexing pipeline is responsible for ingesting video files and preprocessing them to construct a searchable index. The process begins by extracting individual frames from the video files. These extracted frames are then passed through an embedding module, which uses the LVM to map each frame into a high-dimensional vector representation containing its semantic information. To account for temporal dynamics and motion information present in the video, a temporal smoothing technique is applied to the frame embeddings. This step makes sure the resulting representations capture the semantic continuity across multiple subsequent video frames, rather than treating each frame independently (also see the results discussed later in this post, or consult the following paper for more details). The temporally smoothed frame embeddings are then ingested into a vector index data structure, which is designed for efficient storage, retrieval, and similarity search operations. This indexed representation of the video frames serves as the foundation for the subsequent search pipeline.
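One way to implement such smoothing is a simple moving average over the time axis. The following is a minimal sketch, assuming the frame embeddings are stacked into a (num_frames, dim) NumPy array; the kernel size is a tunable parameter (a value of 11 is used in the results later in this post):

import numpy as np

def smooth_embeddings(frame_embeddings: np.ndarray, kernel_size: int = 11) -> np.ndarray:
    # Average each frame embedding with its temporal neighbors
    kernel = np.ones(kernel_size) / kernel_size
    smoothed = np.apply_along_axis(
        lambda series: np.convolve(series, kernel, mode="same"),
        axis=0,
        arr=frame_embeddings,
    )
    # Re-normalize so that cosine similarity remains well behaved
    return smoothed / np.linalg.norm(smoothed, axis=1, keepdims=True)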

The search pipeline facilitates content-based video retrieval by accepting textual queries or visual queries (images) from users. Textual queries are first embedded into the shared multimodal representation space using the LVM’s text encoding capabilities. Similarly, visual queries (images) are processed through the LVM’s visual encoding branch to obtain their corresponding embeddings.
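As an illustration, the following sketch embeds text and image queries with a CLIP checkpoint from the Hugging Face Model Hub using the transformers library (the model choice is illustrative; the accompanying code sample also supports OpenCLIP and SigLIP variants):

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_text(query: str) -> torch.Tensor:
    # Map a textual query into the shared multimodal embedding space
    inputs = processor(text=[query], return_tensors="pt", padding=True)
    with torch.no_grad():
        features = model.get_text_features(**inputs)
    return features / features.norm(dim=-1, keepdim=True)

def embed_image(path: str) -> torch.Tensor:
    # Map a visual query (image) into the same embedding space
    inputs = processor(images=Image.open(path), return_tensors="pt")
    with torch.no_grad():
        features = model.get_image_features(**inputs)
    return features / features.norm(dim=-1, keepdim=True)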

After the textual or visual queries are embedded, we can build a hybrid query to account for keywords or filter constraints provided by the user (for example, to search only across certain video categories, or to search within a particular video). This hybrid query is then used to retrieve the most relevant frame embeddings based on their conceptual similarity to the query, while adhering to any supplementary keyword constraints.
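A hybrid query of this kind might look like the following sketch, which combines a k-NN clause with a keyword filter in the OpenSearch query DSL. It assumes an authenticated opensearch-py client and a query_embedding produced by the LVM; the index and field names are illustrative, and filtered k-NN search requires an engine that supports it, such as Faiss:

query_body = {
    "size": 20,  # search size, tunable as discussed below
    "query": {
        "knn": {
            "embedding": {
                "vector": query_embedding,  # embedding of the text or image query
                "k": 20,
                # Optional keyword constraint, for example a specific video
                "filter": {"term": {"video_name": "formulaone"}},
            }
        }
    },
}
response = opensearch_client.search(index="video-frames", body=query_body)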

The retrieved frame embeddings are then subjected to temporal clustering (also see the results later in this post for more details), which aims to group contiguous frames into semantically coherent video segments, thereby returning an entire video sequence (rather than disjointed individual frames).
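A minimal sketch of such clustering, assuming the retrieved frame timestamps are in seconds and grouping neighbors that are at most max_gap seconds apart (roughly what the timestamp filter in the results section controls), might look like the following:

def cluster_timestamps(timestamps, max_gap=1.0):
    # Group sorted timestamps into contiguous (start, end) segments
    ts = sorted(timestamps)
    clips, start, prev = [], ts[0], ts[0]
    for t in ts[1:]:
        if t - prev > max_gap:
            clips.append((start, prev))  # close the current segment
            start = t
        prev = t
    clips.append((start, prev))
    return clips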

Furthermore, maintaining search diversity and quality is crucial when retrieving content from videos. As mentioned previously, our approach incorporates various methods to enhance search results. For example, during the video indexing phase, the following techniques are employed to control the search results (the parameters of which might need to be tuned to get the best results):

  • Adjusting the sampling rate, which determines the number of frames embedded from each second of video. Less frequent frame sampling might make sense when working with longer videos, whereas more frequent frame sampling might be needed to catch fast-occurring events.
  • Modifying the temporal smoothing parameters to, for example, remove inconsistent search hits based on just a single frame hit, or merge repeated frame hits from the same scene.

During the semantic video search phase, you can use the following methods:

  • Applying temporal clustering as a post-filtering step on the retrieved timestamps to group contiguous frames into semantically coherent video clips (that can be, in principle, directly played back by the end-users). This makes sure the search results maintain temporal context and continuity, avoiding disjointed individual frames.
  • Setting the search size, which can be effectively combined with temporal clustering. Increasing the search size makes sure the relevant frames are included in the final results, albeit at the cost of higher computational load (see, for example, this guide for more details).

Our approach aims to strike a balance between retrieval quality, diversity, and computational efficiency by employing these techniques during both the indexing and search phases, ultimately enhancing the user experience in semantic video search.

The proposed solution architecture provides efficient semantic video search by using open source LVMs and AWS services. The architecture can be logically divided into two components: an asynchronous video indexing pipeline and online content search logic. The accompanying sample code on GitHub showcases how to build and experiment locally, as well as how to host and invoke both parts of the workflow, using several open source LVMs available on the Hugging Face Model Hub (CLIP, OpenCLIP, and SigLIP). The following diagram illustrates this architecture.

The pipeline for asynchronous video indexing comprises the following steps:

  1. The user uploads a video file to an Amazon Simple Storage Service (Amazon S3) bucket, which initiates the indexing process.
  2. The video is sent to a SageMaker asynchronous endpoint for processing. The processing steps involve:
    • Decoding of frames from the uploaded video file.
    • Generation of frame embeddings by the LVM.
    • Application of temporal smoothing, accounting for temporal dynamics and motion information present in the video.
  3. The frame embeddings are ingested into an OpenSearch Serverless vector index, designed for efficient storage, retrieval, and similarity search operations.

SageMaker asynchronous inference endpoints are well-suited for handling requests with large payloads, extended processing times, and near real-time latency requirements. This SageMaker capability queues incoming requests and processes them asynchronously, accommodating large payloads and long processing times. Asynchronous inference enables cost optimization by automatically scaling the instance count to zero when there are no requests to process, so computational resources are used only when actively handling requests. This flexibility makes it an ideal choice for applications involving large data volumes, such as video processing, while maintaining responsiveness and efficient resource utilization.
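For illustration, deploying such an asynchronous endpoint with the SageMaker Python SDK might look like the following sketch (the container image, model artifact, role, and instance type are placeholders; the accompanying notebook defines the actual values):

from sagemaker.model import Model
from sagemaker.async_inference import AsyncInferenceConfig

model = Model(
    image_uri="<inference-container-image>",
    model_data="s3://<bucket>/model/model.tar.gz",
    role="<sagemaker-execution-role-arn>",
)
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.xlarge",
    async_inference_config=AsyncInferenceConfig(
        # Results are written to Amazon S3 when processing completes
        output_path="s3://<bucket>/async-output/",
        max_concurrent_invocations_per_instance=2,
    ),
)

Note that scaling the instance count down to zero is configured separately through an Application Auto Scaling policy on the endpoint variant.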

OpenSearch Serverless is an on-demand serverless configuration of Amazon OpenSearch Service. We use OpenSearch Serverless as a vector database for storing embeddings generated by the LVM. The index created in the OpenSearch Serverless collection serves as the vector store, enabling efficient storage and rapid similarity-based retrieval of relevant video segments.
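The following sketch creates such a vector index with an authenticated opensearch-py client (field names are illustrative, and the embedding dimension must match the chosen LVM, for example 1152 for google/siglip-so400m-patch14-384):

index_body = {
    "settings": {"index": {"knn": True}},
    "mappings": {
        "properties": {
            "embedding": {
                "type": "knn_vector",
                "dimension": 1152,  # must match the embeddings model
                "method": {"name": "hnsw", "engine": "faiss", "space_type": "cosinesimil"},
            },
            "video_name": {"type": "keyword"},
            "timestamp": {"type": "float"},
        }
    },
}
opensearch_client.indices.create(index="video-frames", body=index_body)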

The online content search can then be broken down into the following steps:

  1. The user provides a textual prompt or an image (or both) representing the desired content to be searched.
  2. The user prompt is sent to a real-time SageMaker endpoint, which results in the following actions:
    • An embedding is generated for the text or image query.
    • The query with embeddings is sent to the OpenSearch vector index, which performs a k-nearest neighbors (k-NN) search to retrieve relevant frame embeddings.
    • The retrieved frame embeddings undergo temporal clustering.
  3. The final search results, comprising relevant video segments, are returned to the user.

SageMaker real-time inference suits workloads needing real-time, interactive, low-latency responses. Deploying models to SageMaker hosting services provides fully managed inference endpoints with automatic scaling capabilities, providing optimal performance for real-time requirements.
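Invoking the real-time search endpoint then reduces to a single API call. The following is a sketch with boto3 (the endpoint name and payload schema are illustrative; the notebook defines the actual request format):

import json
import boto3

smr = boto3.client("sagemaker-runtime")
response = smr.invoke_endpoint(
    EndpointName="semantic-video-search",
    ContentType="application/json",
    Body=json.dumps({"text": "F1 crews change tyres", "search_size": 20}),
)
results = json.loads(response["Body"].read())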

Code and environment

This post is accompanied by a sample code on GitHub that provides comprehensive annotations and code to set up the necessary AWS resources, experiment locally with sample video files, and then deploy and run the indexing and search pipelines. The code sample is designed to exemplify best practices when developing ML solutions on SageMaker, such as using configuration files to define flexible inference stack parameters and conducting local tests of the inference artifacts before deploying them to SageMaker endpoints. It also contains guided implementation steps with explanations and reference for configuration parameters. Additionally, the notebook automates the cleanup of all provisioned resources.

Prerequisites

The prerequisite to run the provided code is to have an active AWS account and set up Amazon SageMaker Studio. Refer to Use quick setup for Amazon SageMaker AI to set up SageMaker if you’re a first-time user and then follow the steps to open SageMaker Studio.

Deploy the solution

To start the implementation, clone the repository, open the notebook semantic_video_search_demo.ipynb, and follow the steps in the notebook.

In Section 2 of the notebook, install the required packages and dependencies, define global variables, set up Boto3 clients, and attach required permissions to the SageMaker AWS Identity and Access Management (IAM) role to interact with Amazon S3 and OpenSearch Service from the notebook.

In Section 3, create security components for OpenSearch Serverless (encryption policy, network policy, and data access policy) and then create an OpenSearch Serverless collection. For simplicity, in this proof of concept implementation, we allow public internet access to the OpenSearch Serverless collection resource. However, for production environments, we strongly suggest using private connections between your Virtual Private Cloud (VPC) and OpenSearch Serverless resources through a VPC endpoint. For more details, see Access Amazon OpenSearch Serverless using an interface endpoint (AWS PrivateLink).

In Section 4, import and inspect the config file, and choose an embeddings model for video indexing and corresponding embeddings dimension. In Section 5, create a vector index within the OpenSearch collection you created earlier.

To demonstrate the search results, we also provide references to a few sample videos that you can experiment with in Section 6. In Section 7, you can experiment with the proposed semantic video search approach locally in the notebook, before deploying the inference stacks.

In Sections 8, 9, and 10, we provide code to deploy two SageMaker endpoints: an asynchronous endpoint for video embedding and indexing, and a real-time inference endpoint for video search. After these steps, we also test our deployed semantic video search solution with a few example queries.

Finally, Section 11 contains the code to clean up the created resources to avoid recurring costs.

Results

The solution was evaluated across a diverse range of use cases, including identifying key moments in sports games, spotting specific outfit pieces or color patterns on fashion runways, and other tasks in full-length films about the fashion industry. Additionally, the solution was tested for detecting action-packed moments like explosions in action movies, identifying when individuals entered video surveillance areas, and extracting specific events such as sports award ceremonies.

For our demonstration, we created a video catalog consisting of the following videos: A Look Back at New York Fashion Week: Men’s, F1 Insights powered by AWS, Amazon Air’s newest aircraft, the A330, is here, and Now Go Build with Werner Vogels – Autonomous Trucking.

To demonstrate the search capability for identifying specific objects across this video catalog, we employed four text prompts and four images. The presented results were obtained using the google/siglip-so400m-patch14-384 model, with temporal clustering enabled and a timestamp filter set to 1 second. Additionally, smoothing was enabled with a kernel size of 11, and the search size was set to 20 (which were found to be good default values for shorter videos). The left column in the subsequent figures specifies the search type, either by image or text, along with the corresponding image name or text prompt used.

The following figure shows the text prompts we used and the corresponding results.

The following figure shows the images we used to perform reverse image search and the corresponding search results for each image.

As mentioned, we implemented temporal clustering in the lookup code, allowing for the grouping of frames based on their ordered timestamps. The accompanying notebook with sample code showcases the temporal clustering functionality by displaying (a few frames from) the returned video clip and highlighting the key frame with the highest search score within each group, as illustrated in the following figure. This approach facilitates a convenient presentation of the search results, enabling users to return entire playable video clips (even if not all frames were actually indexed in a vector store).

To showcase the hybrid search capabilities with OpenSearch Service, we present results for the textual prompt “sky,” with all other search parameters set identically to the previous configurations. We demonstrate two distinct cases: an unconstrained semantic search across the entire indexed video catalog, and a search confined to a specific video. The following figure illustrates the results obtained from an unconstrained semantic search query.

We conducted the same search for “sky,” but now confined to the trucking video.

To illustrate the effects of temporal smoothing, we generated search signal score charts (based on cosine similarity) for the prompt “F1 crews change tyres” in the formulaone video, both with and without temporal smoothing. We set a threshold of 0.315 for illustration purposes and highlighted video segments with scores exceeding this threshold. Without temporal smoothing (see the following figure), we observed two adjacent episodes around t=35 seconds and two additional episodes after t=65 seconds. Notably, the third and fourth episodes were significantly shorter than the first two, despite exhibiting higher scores. However, we can do better if our objective is to prioritize longer, semantically cohesive video episodes in the search.

To address this, we apply temporal smoothing. As shown in the following figure, now the first two episodes appear to be merged into a single, extended episode with the highest score. The third episode experienced a slight score reduction, and the fourth episode became irrelevant due to its brevity. Temporal smoothing facilitated the prioritization of longer and more coherent video moments associated with the search query by consolidating adjacent high-scoring segments and suppressing isolated, transient occurrences.

Clean up

To clean up the resources created as part of this solution, refer to the cleanup section in the provided notebook and execute the cells in this section. This will delete the created IAM policies, OpenSearch Serverless resources, and SageMaker endpoints to avoid recurring charges.

Limitations

Throughout our work on this project, we also identified several potential limitations that could be addressed through future work:

  • Video quality and resolution might impact search performance, because blurred or low-resolution videos can make it challenging for the model to accurately identify objects and intricate details.
  • Small objects within videos, such as a hockey puck or a football, might be difficult for LVMs to consistently recognize due to their diminutive size and visibility constraints.
  • LVMs might struggle to comprehend scenes that represent a temporally prolonged contextual situation, such as detecting a point-winning shot in tennis or a car overtaking another vehicle.
  • Accurate automatic measurement of solution performance is hindered without the availability of manually labeled ground truth data for comparison and evaluation.

Summary

In this post, we demonstrated the advantages of the zero-shot approach to implementing semantic video search using either text prompts or images as input. This approach readily adapts to diverse use cases without the need for retraining or fine-tuning models specifically for video search tasks. Additionally, we introduced techniques such as temporal smoothing and temporal clustering, which significantly enhance the quality and coherence of video search results.

The proposed architecture is designed to facilitate a cost-effective production environment with minimal effort, eliminating the requirement for extensive expertise in machine learning. Furthermore, the current architecture seamlessly accommodates the integration of open source LVMs, enabling the implementation of custom preprocessing or postprocessing logic during both the indexing and search phases. This flexibility is made possible by using SageMaker asynchronous and real-time deployment options, providing a powerful and versatile solution.

You can implement semantic video search using different approaches or AWS services. For related content, refer to the following AWS blog posts as examples on semantic search using proprietary ML models: Implement serverless semantic search of image and live video with Amazon Titan Multimodal Embeddings or Build multimodal search with Amazon OpenSearch Service.


About the Authors

Dr. Alexander Arzhanov is an AI/ML Specialist Solutions Architect based in Frankfurt, Germany. He helps AWS customers design and deploy their ML solutions across the EMEA region. Prior to joining AWS, Alexander was researching origins of heavy elements in our universe and grew passionate about ML after using it in his large-scale scientific calculations.

Dr. Ivan Sosnovik is an Applied Scientist in the AWS Machine Learning Solutions Lab. He develops ML solutions to help customers to achieve their business goals.

Nikita Bubentsov is a Cloud Sales Representative based in Munich, Germany, and part of Technical Field Community (TFC) in computer vision and machine learning. He helps enterprise customers drive business value by adopting cloud solutions and supports AWS EMEA organizations in the computer vision area. Nikita is passionate about computer vision and the future potential that it holds.

Multi-account support for Amazon SageMaker HyperPod task governance

GPUs are a precious resource; they are both in short supply and much more costly than traditional CPUs. They are also highly adaptable to many different use cases. Organizations building or adopting generative AI use GPUs to run simulations, run inference (both for internal or external usage), build agentic workloads, and run data scientists’ experiments. The workloads range from ephemeral single-GPU experiments run by scientists to long multi-node continuous pre-training runs. Many organizations need to share a centralized, high-performance GPU computing infrastructure across different teams, business units, or accounts within their organization. With this infrastructure, they can maximize the utilization of expensive accelerated computing resources like GPUs, rather than having siloed infrastructure that might be underutilized.

Organizations also use multiple AWS accounts for their users. Larger enterprises might want to separate different business units, teams, or environments (production, staging, development) into different AWS accounts. This provides more granular control and isolation between these different parts of the organization. It also makes it straightforward to track and allocate cloud costs to the appropriate teams or business units for better financial oversight.

The specific reasons and setup can vary depending on the size, structure, and requirements of the enterprise. But in general, a multi-account strategy provides greater flexibility, security, and manageability for large-scale cloud deployments. In this post, we discuss how an enterprise with multiple accounts can access a shared Amazon SageMaker HyperPod cluster for running their heterogenous workloads. We use SageMaker HyperPod task governance to enable this feature.

Solution overview

SageMaker HyperPod task governance streamlines resource allocation and provides cluster administrators the capability to set up policies to maximize compute utilization in a cluster. Task governance can be used to create distinct teams with their own unique namespace, compute quotas, and borrowing limits. In a multi-account setting, you can restrict which accounts have access to which team’s compute quota using role-based access control.

In this post, we describe the settings required to set up multi-account access for SageMaker HyperPod clusters orchestrated by Amazon Elastic Kubernetes Service (Amazon EKS) and how to use SageMaker HyperPod task governance to allocate accelerated compute to multiple teams in different accounts.

The following diagram illustrates the solution architecture.

Multi-account AWS architecture: EKS cluster with EKS Pod Identity accessing S3 bucket via access point

In this architecture, one organization is splitting resources across a few accounts. Account A hosts the SageMaker HyperPod cluster. Account B is where the data scientists reside. Account C is where the data is prepared and stored for training usage. In the following sections, we demonstrate how to set up multi-account access so that data scientists in Account B can train a model on Account A’s SageMaker HyperPod and EKS cluster, using the preprocessed data stored in Account C. We break down this setup in two sections: cross-account access for data scientists and cross-account access for prepared data.

Cross-account access for data scientists

When you create a compute allocation with SageMaker HyperPod task governance, your EKS cluster creates a unique Kubernetes namespace per team. For this walkthrough, we create an AWS Identity and Access Management (IAM) role per team, called cluster access roles, which are then scoped to access only the team’s task governance-generated namespace in the shared EKS cluster. Role-based access control is how we make sure the data science members of Team A will not be able to submit tasks on behalf of Team B.

To access Account A’s EKS cluster as a user in Account B, you will need to assume a cluster access role in Account A. The cluster access role will have only the needed permissions for data scientists to access the EKS cluster. For an example of IAM roles for data scientists using SageMaker HyperPod, see IAM users for scientists.

Next, you will need to assume the cluster access role from a role in Account B. The cluster access role in Account A will then need to have a trust policy for the data scientist role in Account B. The data scientist role is the role in Account B that will be used to assume the cluster access role in Account A. The following code is an example of the policy statement for the data scientist role so that it can assume the cluster access role in Account A:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::XXXXXXXXXXAAA:role/ClusterAccessRole"
    }
  ]
}

The following code is an example of the trust policy for the cluster access role so that it allows the data scientist role to assume it:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::XXXXXXXXXXBBB:role/DataScientistRole"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

The final step is to create an access entry for the team’s cluster access role in the EKS cluster. This access entry should also have an access policy, such as EKSEditPolicy, that is scoped to the namespace of the team. This makes sure that Team A users in Account B can’t launch tasks outside of their assigned namespace. You can also optionally set up custom role-based access control; see Setting up Kubernetes role-based access control for more information.
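Putting these pieces together, the flow for a data scientist in Account B might look like the following sketch (the role, cluster, and namespace names are placeholders):

# Assume the cluster access role in Account A to obtain temporary credentials
aws sts assume-role \
  --role-arn arn:aws:iam::<Account-A-ID>:role/ClusterAccessRole \
  --role-session-name team-a-session

# After exporting the returned temporary credentials, point kubectl at the
# shared EKS cluster; access is limited to the team namespace
aws eks update-kubeconfig --name <EKS-Cluster-Name> --region <Region>
kubectl get pods -n hyperpod-ns-team-a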

For users in Account B, you can repeat the same setup for each team. You must create a unique cluster access role for each team so that each role aligns with its team’s associated namespace. To summarize, we use two different IAM roles:

  • Data scientist role – The role in Account B used to assume the cluster access role in Account A. This role just needs to be able to assume the cluster access role.
  • Cluster access role – The role in Account A used to give access to the EKS cluster. For an example, see IAM role for SageMaker HyperPod.

Cross-account access to prepared data

In this section, we demonstrate how to set up EKS Pod Identity and S3 Access Points so that pods running training tasks in Account A’s EKS cluster have access to data stored in Account C. EKS Pod Identity allows you to map an IAM role to a service account in a namespace. If a pod uses a service account that has this association, Amazon EKS sets the AWS credential environment variables in the containers of the pod.

S3 Access Points are named network endpoints that simplify data access for shared datasets in S3 buckets. They act as a way to grant fine-grained access control to specific users or applications accessing a shared dataset within an S3 bucket, without requiring those users or applications to have full access to the entire bucket. Permissions to the access point are granted through S3 access point policies. Each S3 access point is configured with an access policy specific to a use case or application. Because the HyperPod cluster in this post can be used by multiple teams, each team could have its own S3 access point and access point policy.

Before following these steps, ensure you have the EKS Pod Identity Add-on installed on your EKS cluster.

  1. In Account A, create an IAM role that contains S3 permissions (such as s3:ListBucket and s3:GetObject on the access point resource) and has a trust relationship with Pod Identity; this will be your data access role. The following is an example trust policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowEksAuthToAssumeRoleForPodIdentity",
      "Effect": "Allow",
      "Principal": {
        "Service": "pods.eks.amazonaws.com"
      },
      "Action": [
        "sts:AssumeRole",
        "sts:TagSession"
      ]
    }
  ]
}
  2. In Account C, create an S3 access point by following the steps here.
  3. Next, configure your S3 access point to allow access to the role created in step 1. The following is an example access point policy that gives Account A permission to the access point in Account C:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<Account-A-ID>:role/<Data-Access-Role-Name>"
      },
      "Action": [
        "s3:ListBucket",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:<Region>:<Account-C-ID>:accesspoint/<Access-Point-Name>",
        "arn:aws:s3:<Region>:<Account-C-ID>:accesspoint/<Access-Point-Name>/object/*"
      ]
    }
  ]
}
  4. Ensure your S3 bucket policy is updated to allow Account A access. This is an example S3 bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::<bucket-name>",
        "arn:aws:s3:::<bucket-name>/*"
      ],
      "Condition": {
        "StringEquals": {
          "s3:DataAccessPointAccount": "<Account-C-ID>"
        }
      }
    }
  ]
}
  5. In Account A, create a pod identity association for your EKS cluster using the AWS CLI:
aws eks create-pod-identity-association \
  --cluster-name <EKS-Cluster-Name> \
  --role-arn arn:aws:iam::<Account-A-ID>:role/<Data-Access-Role-Name> \
  --namespace hyperpod-ns-team-a \
  --service-account my-service-account
  6. Reference the service account name in the pod specification of any pod that needs cross-account S3 access, as shown in the sketch following this list.
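For illustration, the following sketch creates a minimal test pod that uses this service account (the namespace and service account names match the pod identity association above; the pod name and container image are placeholders):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: aws-test
  namespace: hyperpod-ns-team-a
spec:
  serviceAccountName: my-service-account
  containers:
  - name: aws-cli
    image: amazon/aws-cli:latest
    # Keep the container alive so we can exec into it and run S3 commands
    command: ["sleep", "3600"]
EOF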

You can test cross-account data access by spinning up a test pod and then executing into the pod to run Amazon S3 commands:

kubectl exec -it aws-test -n hyperpod-ns-team-a -- aws s3 ls s3://<access-point>

This example shows creating a single data access role for a single team. For multiple teams, use a namespace-specific ServiceAccount with its own data access role to help prevent overlapping resource access across teams. You can also configure cross-account Amazon S3 access for an Amazon FSx for Lustre file system in Account A, as described in Use Amazon FSx for Lustre to share Amazon S3 data across accounts. FSx for Lustre and Amazon S3 will need to be in the same AWS Region, and the FSx for Lustre file system will need to be in the same Availability Zone as your SageMaker HyperPod cluster.

Conclusion

In this post, we provided guidance on how to set up cross-account access to data scientists accessing a centralized SageMaker HyperPod cluster orchestrated by Amazon EKS. In addition, we covered how to provide Amazon S3 data access from one account to an EKS cluster in another account. With SageMaker HyperPod task governance, you can restrict access and compute allocation to specific teams. This architecture can be used at scale by organizations wanting to share a large compute cluster across accounts within their organization. To get started with SageMaker HyperPod task governance, refer to the Amazon EKS Support in Amazon SageMaker HyperPod workshop and SageMaker HyperPod task governance documentation.


About the Authors

Nisha Nadkarni is a Senior GenAI Specialist Solutions Architect at AWS, where she guides companies through best practices when deploying large scale distributed training and inference on AWS. Prior to her current role, she spent several years at AWS focused on helping emerging GenAI startups develop models from ideation to production.

Anoop Saha is a Sr GTM Specialist at Amazon Web Services (AWS) focusing on generative AI model training and inference. He partners with top frontier model builders, strategic customers, and AWS service teams to enable distributed training and inference at scale on AWS and lead joint GTM motions. Before AWS, Anoop held several leadership roles at startups and large corporations, primarily focusing on silicon and system architecture of AI infrastructure.

Kareem Syed-Mohammed is a Product Manager at AWS. He is focused on compute optimization and cost governance. Prior to this, at Amazon QuickSight, he led embedded analytics, and developer experience. In addition to QuickSight, he has been with AWS Marketplace and Amazon retail as a Product Manager. Kareem started his career as a developer for call center technologies, Local Expert and Ads for Expedia, and management consultant at McKinsey.

Rajesh Ramchander is a Principal ML Engineer in Professional Services at AWS. He helps customers at various stages in their AI/ML and GenAI journey, from those that are just getting started all the way to those that are leading their business with an AI-first strategy.

Build a Text-to-SQL solution for data consistency in generative AI using Amazon Nova

Businesses rely on precise, real-time insights to make critical decisions. However, enabling non-technical users to access proprietary or organizational data without technical expertise remains a challenge. Text-to-SQL bridges this gap by generating precise, schema-specific queries that empower faster decision-making and foster a data-driven culture. The problem lies in obtaining deterministic answers—precise, consistent results needed for operations such as generating exact counts or detailed reports—from proprietary or organizational data. Generative AI offers several approaches to query data, but selecting the right method is critical to achieve accuracy and reliability.

This post evaluates the key options for querying data using generative AI, discusses their strengths and limitations, and demonstrates why Text-to-SQL is the best choice for deterministic, schema-specific tasks. We show how to effectively use Text-to-SQL using Amazon Nova, a foundation model (FM) available in Amazon Bedrock, to derive precise and reliable answers from your data.

Options for querying data

Organizations have multiple options for querying data, and the choice depends on the nature of the data and the required outcomes. This section evaluates the following approaches to provide clarity on when to use each and why Text-to-SQL is optimal for deterministic, schema-based tasks:

  • Retrieval Augmented Generation (RAG):
    • Use case – Ideal for extracting insights from unstructured or semi-structured sources like documents or articles.
    • Strengths – Handles diverse data formats and provides narrative-style responses.
    • Limitations – Probabilistic answers can vary, making it unsuitable for deterministic queries, such as retrieving exact counts or matching specific schema constraints.
    • Example – “Summarize feedback from product reviews.”
  • Generative business intelligence (BI):
    • Use case – Suitable for high-level insights and summary generation based on structured and unstructured data.
    • Strengths – Delivers narrative insights for decision-making and trends.
    • Limitations – Lacks the precision required for schema-specific or operational queries. Results often vary in phrasing and focus.
    • Example – “What were the key drivers of sales growth last quarter?”
  • Text-to-SQL:
    • Use case – Excels in querying structured organizational data directly from relational schemas.
    • Strengths – Provides deterministic, reproducible results for specific, schema-dependent queries. Ideal for precise operations such as filtering, counting, or aggregating data.
    • Limitations – Requires structured data and predefined schemas.
    • Example – “How many patients diagnosed with diabetes visited clinics in New York City last month?”

In scenarios demanding precision and consistency, Text-to-SQL outshines RAG and generative BI by delivering accurate, schema-driven results. These characteristics make it the ideal solution for operational and structured data queries.

Solution overview

This solution uses the Amazon Nova Lite and Amazon Nova Pro large language models (LLMs) to simplify querying proprietary data with natural language, making it accessible to non-technical users.

Amazon Bedrock is a fully managed service that simplifies building and scaling generative AI applications by providing access to leading FMs through a single API. It allows developers to experiment with and customize these models securely and privately, integrating generative AI capabilities into their applications without managing infrastructure.

Within this system, Amazon Nova represents a new generation of FMs delivering advanced intelligence and industry-leading price-performance. These models, including Amazon Nova Lite and Amazon Nova Pro, are designed to handle various tasks such as text, image, and video understanding, making them versatile tools for diverse applications.

You can find the deployment code and detailed instructions in our GitHub repo.

The solution consists of the following key features:

  • Dynamic schema context – Retrieves the database schema dynamically for precise query generation
  • SQL query generation – Converts natural language into SQL queries using the Amazon Nova Pro LLM
  • Query execution – Runs queries on organizational databases and retrieves results
  • Formatted responses – Processes raw query results into user-friendly formats using the Amazon Nova Lite LLM

The following diagram illustrates the solution architecture.

Data flow between user, Streamlit app, Amazon Bedrock, and Microsoft SQL Server, illustrating query processing and response generation

In this solution, we use Amazon Nova Pro and Amazon Nova Lite to take advantage of their respective strengths, facilitating efficient and effective processing at each stage:

  • Dynamic schema retrieval and SQL query generation – We use Amazon Nova Pro to handle the translation of natural language inputs into SQL queries. Its advanced capabilities in complex reasoning and understanding make it well-suited for accurately interpreting user intents and generating precise SQL statements.
  • Formatted response generation – After we run the SQL queries, the raw results are processed using Amazon Nova Lite. This model efficiently formats the data into user-friendly outputs, making the information accessible to non-technical users. Its speed and cost-effectiveness are advantageous for this stage, where rapid processing and straightforward presentation are key.

By strategically deploying Amazon Nova Pro and Amazon Nova Lite in this manner, the solution makes sure that each component operates optimally, balancing performance, accuracy, and cost-effectiveness.

Prerequisites

Complete the following prerequisite steps:

  1. Install the AWS Command Line Interface (AWS CLI). For instructions, refer to Installing or updating to the latest version of the AWS CLI.
  2. Configure the basic settings that the AWS CLI uses to interact with AWS. For more information, see Configuration and credential file settings in the AWS CLI.
  3. Make sure Amazon Bedrock is enabled in your AWS account.
  4. Obtain access to Amazon Nova Lite and Amazon Nova Pro.
  5. Install Python 3.9 or later, along with required libraries (Streamlit version 1.8.0 or later, Boto3, pymssql, and environment management packages).
  6. Create a Microsoft SQL Server (version 2016 or later) database with credentials to connect.
    1. Create a secret in AWS Secrets Manager for database credentials and name it mssql_secrets. For instructions, see Create an AWS Secrets Manager secret.

Our sample code uses a Microsoft SQL Server database, but this solution also supports other database services; refer to the GitHub repo for the supported options.

For more information about prerequisites, refer to the GitHub repo.

Set up the development environment

In the command prompt, navigate to the folder where the code exists and run the following command:

python3.9 -m pip install -r requirements.txt --upgrade

This command installs the required libraries to run the application.

Load the sample dataset in the database

Make sure you have created a secret in Secrets Manager named mssql_secrets as mentioned in the prerequisites. If you named your secret something else, update the code in app.py (line 29) and load_data.py (line 22).

After you create the secret, run the following command from the code folder:

python load_data.py

This command creates a database named Sales with tables Products, Customers, and Orders and loads the sample data in these tables.
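For reference, a connection helper such as connect_to_db (used by both app.py and load_data.py) might look like the following sketch, assuming the mssql_secrets secret stores host, username, and password keys (the key names are illustrative):

import json
import boto3
import pymssql

def connect_to_db():
    # Retrieve the database credentials from AWS Secrets Manager
    secrets = boto3.client("secretsmanager")
    secret_value = secrets.get_secret_value(SecretId="mssql_secrets")
    creds = json.loads(secret_value["SecretString"])
    # Open a connection to the Microsoft SQL Server instance
    return pymssql.connect(
        server=creds["host"],
        user=creds["username"],
        password=creds["password"],
    )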

Run the application

To run the application, execute the following command:

streamlit run app.py

Example queries

In this section, we explore some sample queries.

For our first query, we ask “Who are the customers who bought smartphones?” This generates the following SQL:

SELECT DISTINCT CustomerName, ProductName, SUM(Quantity) AS TotalSoldQuantity FROM vw_sales WHERE ProductName LIKE '%smartphone%' GROUP BY CustomerName, ProductName, OrderDate; 

We get the following formatted response:

  1. Alice Johnson, who bought 1 smartphone on October 14th, 2023.
  2. Ivy Martinez, who bought 2 smartphones on October 15th, 2023.

Next, we ask “How many smartphones are in stock?” This generates the following SQL:

SELECT DISTINCT ProductName, StockQuantity AS AvailableQuantity FROM vw_sales WHERE ProductName LIKE '%smartphone%'; 

We get the response “There are 100 smartphones currently in stock.”

Sales-bot web interface demonstrating natural language to SQL conversion with Amazon Bedrock, showing smartphone purchase query and results

Code execution flow

In this section, we explore the code execution flow. The code reference is from the GitHub repo. Do not run the different parts of the code individually.

Retrieve schema dynamically

Use INFORMATION_SCHEMA views to extract schema details dynamically (code reference from app.py):

def get_schema_context(db_name, db_view_name):
    conn = connect_to_db()
    cursor = conn.cursor()
    cursor.execute(f"USE {db_name}")
    # Query the metadata tables for the view's column names and types
    query = f"SELECT COLUMN_NAME, DATA_TYPE FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = '{db_view_name}'"
    cursor.execute(query)
    schema = cursor.fetchall()
    print("Schema:", schema)
    # Render one "- column: type" line per column for the LLM prompt
    return '\n'.join([f"- {row[0]}: {row[1]}" for row in schema])

Dynamic schema retrieval adapts automatically to changes by querying metadata tables for updated schema details, such as table names and column types. This facilitates seamless integration of schema updates into the Text-to-SQL system, reducing manual effort and improving scalability.

Test this function to verify it adapts automatically when schema changes occur.

Before generating SQL, fetch schema details for the relevant tables to facilitate accurate query construction.

Generate a SQL query using Amazon Nova Pro

Send the user query and schema context to Amazon Nova Pro (code reference from sql_generator.py):

def generate_sql_query(question: str, schema_context: str, db_name: str, db_view_name: str = None) -> str:
    
    nova_client = NovaClient()
       
    # Base prompt with SQL generation rules
    base_prompt = """
    MS SQL DB {db_name} has one view named '{db_view_name}'.
    Always use '{db_view_name}' as the table name to generate your query.
    Create a MS SQL query by carefully understanding the question and generate the query between the tags <begin sql> and </end sql>.
    The MS SQL query should select all columns from the view named '{db_view_name}'.
    In your SQL query, always use a LIKE condition in the WHERE clause.
    If a question is asked about product stock, then always use 'distinct' in your SQL query.
    Never generate a SQL query that gives an error upon execution.

    Question: {question}

    Database Schema : {schema_context}

    Generate SQL query:
    """
    
    # Format the prompt with the question and schema context
    formatted_prompt = base_prompt.format(
        question=question,
        db_name=db_name,
        db_view_name=db_view_name if db_view_name else "No view name provided",
        schema_context=schema_context if schema_context else "No additional context provided"
    )
        
    # Invoke Nova model
    response = nova_client.invoke_model(
        model_id='amazon.nova-pro-v1:0',
        prompt=formatted_prompt,
        temperature=0.1  # Lower temperature for more deterministic SQL generation
    )
    
    # Extract SQL query from response using regex
    sql_match = extract_sql_from_nova_response(response)
    if sql_match:
        return sql_match
    else:
        raise ValueError("No SQL query found in the response")
    
def extract_sql_from_nova_response(response):
    try:
        # Navigate the nested dictionary structure
        content = response['output']['message']['content']
        # Get the text from the first content item
        text = content[0]['text']
        
        # Find the positions of begin and end tags
        begin_tag = "<begin sql>"
        end_tag = "</end sql>"
        start_pos = text.find(begin_tag)
        end_pos = text.find(end_tag)
        
        # If both tags are found, extract the SQL between them
        if start_pos != -1 and end_pos != -1:
            # Add length of begin tag to start position to skip the tag itself
            sql_query = text[start_pos + len(begin_tag):end_pos].strip()
            return sql_query
            
        return None
        
    except (KeyError, IndexError):
        # Return None if the expected structure is not found
        return None

This code establishes a structured context for a Text-to-SQL use case, guiding Amazon Nova Pro to generate SQL queries based on a predefined database schema. It provides consistency by defining a static database context that clarifies table names, columns, and relationships, helping prevent ambiguity in query formation. Queries are required to reference the vw_sales view, standardizing data extraction for analytics and reporting. Additionally, whenever applicable, the generated queries must include quantity-related fields, making sure that business users receive key insights on product sales, stock levels, or transactional counts.

To enhance search flexibility, the LLM is instructed to use the LIKE operator in WHERE conditions instead of exact matches, allowing for partial matches and accommodating variations in user input. By enforcing these constraints, the code optimizes Text-to-SQL interactions, providing structured, relevant, and business-aligned query generation for sales data analysis.

Execute a SQL query

Run the SQL query on the database and capture the result (code reference from app.py):

cursor.execute(sql_command)
result = cursor.fetchall()
print(result)

Format the query results using Amazon Nova Lite

Send the database result from the SQL query to Amazon Nova Lite to format it in a human-readable format and print it on the Streamlit UI (code reference from app.py):

def interact_with_nova(user_input, llm_query, query_response, model="nova"):
    session = boto3.session.Session()
    region = session.region_name
    
    nova_client = NovaClient(region_name=region)
    
    final_prompt = f"""Human: You are an expert chatbot who is happy to assist the users. The user question is given in the <Question> tag and the results in the <query_response> tag. Understand the question and use the information from <query_response> to generate an answer. If there is more than one entry, give a numbered list. Never return <question> and <query_response> in your response.
    for example : question - "How many mouse were sold?"
                  llm response :
                                " There were 3 mouse sold in total.
                                - 1 mouse sold to Mia Perez on October 2nd, 2023.
                                - 2 mouse sold to Jack Hernandez on October 1st 2023."
    <Question>
    {user_input}
    </Question>
    <query_response>
    {query_response}
    </query_response>"""
    
    try:
        response = nova_client.invoke_model(
            model_id='amazon.nova-lite-v1:0',
            prompt=final_prompt,
            max_tokens=4096,
            temperature=0.7
        )

        # Extract the generated text from the model response
        content = response['output']['message']['content']
        text = content[0]['text']
        return text

    except Exception as e:
        print(f"Error in LLM interaction: {str(e)}")
        return "Sorry, an error occurred while processing your request."

Clean up

Follow these steps to clean up resources in your AWS environment and avoid incurring future costs:

  1. Clean up database resources:
  2. Clean up security resources:
  3. Clean up the frontend (only if hosting the Streamlit application on Amazon EC2):
    • Stop the EC2 instance hosting the Streamlit application.
    • Delete associated storage volumes.
  4. Clean up additional resources (if applicable):
    • Remove Elastic Load Balancers.
    • Delete virtual private cloud (VPC) configurations.
  5. Check the AWS Management Console to confirm all resources have been deleted.

Conclusion

Text-to-SQL with Amazon Bedrock and Amazon Nova LLMs provides a scalable solution for deterministic, schema-based querying. By delivering consistent and precise results, it empowers organizations to make informed decisions, improve operational efficiency, and reduce reliance on technical resources.

For a more comprehensive example of a Text-to-SQL solution built on Amazon Bedrock, explore the GitHub repo Setup Amazon Bedrock Agent for Text-to-SQL Using Amazon Athena with Streamlit. This open source project demonstrates how to use Amazon Bedrock and Amazon Nova LLMs to build a robust Text-to-SQL agent that can generate complex queries, self-correct, and query diverse data sources.

Start experimenting with Text-to-SQL use cases today by getting started with Amazon Bedrock.


About the authors

Mansi Sharma is a Solutions Architect for Amazon Web Services. Mansi is a trusted technical advisor helping enterprise customers architect and implement cloud solutions at scale. She drives customer success through technical leadership, architectural guidance, and innovative problem-solving while working with cutting-edge cloud technologies. Mansi specializes in generative AI application development and serverless technologies.

Marie Yap is a Principal Solutions Architect for Amazon Web Services.  In this role, she helps various organizations begin their journey to the cloud. She also specializes in analytics and modern data architectures.

Modernize and migrate on-premises fraud detection machine learning workflows to Amazon SageMaker

This post is co-written with Qing Chen and Mark Sinclair from Radial.

Radial is the largest 3PL fulfillment provider, also offering integrated payment, fraud detection, and omnichannel solutions to mid-market and enterprise brands. With over 30 years of industry expertise, Radial tailors its services and solutions to align strategically with each brand’s unique needs.

Radial supports brands in tackling common ecommerce challenges, from scalable, flexible fulfillment enabling delivery consistency to providing secure transactions. With a commitment to fulfilling promises from click to delivery, Radial empowers brands to navigate the dynamic digital landscape with the confidence and capability to deliver a seamless, secure, and superior ecommerce experience.

In this post, we share how Radial optimized the cost and performance of their fraud detection machine learning (ML) applications by modernizing their ML workflow using Amazon SageMaker.

The business need for fraud detection models

ML has proven to be an effective approach in fraud detection compared to traditional approaches. ML models can analyze vast amounts of transactional data, learn from historical fraud patterns, and detect anomalies that signal potential fraud in real time. By continuously learning and adapting to new fraud patterns, ML can make sure fraud detection systems stay resilient and robust against evolving threats, enhancing detection accuracy and reducing false positives over time. This post showcases how companies like Radial can modernize and migrate their on-premises fraud detection ML workflows to SageMaker. By using the AWS Experience-Based Acceleration (EBA) program, they can enhance efficiency, scalability, and maintainability through close collaboration.

Challenges of on-premises ML models

Although ML models are highly effective at combating evolving fraud trends, managing these models on premises presents significant scalability and maintenance challenges.

Scalability

On-premises systems are inherently limited by the physical hardware available. During peak shopping seasons, when transaction volumes surge, the infrastructure might struggle to keep up without substantial upfront investment. This can result in slower processing times or a reduced capacity to run multiple ML applications concurrently, potentially leading to missed fraud detections. Scaling an on-premises infrastructure is typically a slow and resource-intensive process, hindering a business’s ability to adapt quickly to increased demand. On the model training side, data scientists often face bottlenecks due to limited resources, forcing them to wait for infrastructure availability or reduce the scope of their experiments. This delays innovation and can lead to suboptimal model performance, putting businesses at a disadvantage in a rapidly changing fraud landscape.

Maintenance

Maintaining an on-premises infrastructure for fraud detection requires a dedicated IT team to manage servers, storage, networking, and backups. Maintaining uptime often involves implementing and maintaining redundant systems, because a failure could result in critical downtime and an increased risk of undetected fraud. Moreover, fraud detection models naturally degrade over time and require regular retraining, deployment, and monitoring. On-premises systems typically lack the built-in automation tools needed to manage the full ML lifecycle. As a result, IT teams must manually handle tasks such as updating models, monitoring for drift, and deploying new versions. This adds operational complexity, increases the likelihood of errors, and diverts valuable resources from other business-critical activities.

Common modernization challenges in ML cloud migration

Organizations face several significant challenges when modernizing their ML workloads through cloud migration. One major hurdle is the skill gap: developers and data scientists might lack expertise in microservices architecture, advanced ML tools, and DevOps practices for cloud environments. This can lead to development delays, complex and costly architectures, and increased security vulnerabilities. Cross-functional barriers, characterized by limited communication and collaboration between teams, can also impede modernization efforts by hindering information sharing.

Slow decision-making is another critical challenge. Many organizations spend too long weighing options for their cloud move instead of taking action, missing chances to accelerate modernization and forgoing the cloud's capacity to quickly try new things and make changes. In the fast-moving world of ML and cloud technology, being slow to decide can put companies behind their competitors.

Another significant obstacle is complex project management, because modernization initiatives often require coordinating work across multiple teams with conflicting priorities. This challenge is compounded by difficulties in aligning stakeholders on business outcomes, quantifying and tracking benefits to demonstrate value, and balancing long-term benefits with short-term goals.

To address these challenges and streamline modernization efforts, AWS offers the EBA program. This methodology is designed to help customers align executives' vision, resolve roadblocks, accelerate their cloud journey, and achieve a successful migration and modernization of their ML workloads to the cloud.

EBA: AWS team collaboration

EBA is a 3-day interactive workshop that uses SageMaker to accelerate business outcomes. It guides participants through a prescriptive ML lifecycle, starting with identifying business goals and ML problem framing, and progressing through data processing, model development, production deployment, and monitoring.

We recognize that customers have different starting points. For those beginning from scratch, it’s often simpler to start with low code or no code solutions like Amazon SageMaker Canvas and Amazon SageMaker JumpStart, gradually transitioning to developing custom models on Amazon SageMaker Studio. However, because Radial has an existing on-premises ML infrastructure, we can begin directly by using SageMaker to address challenges in their current solution.

During the EBA, experienced AWS ML subject matter experts and the AWS Account Team worked closely with Radial's cross-functional team. The AWS team offered tailored advice, tackled obstacles, and enhanced the organization's capacity for ongoing ML integration. Instead of concentrating solely on data and ML technology, the emphasis was on addressing critical business challenges, helping the organization extract significant value from previously underutilized resources.

Modernizing ML workflows: From a legacy on-premises data center to SageMaker

Before modernization, Radial hosted its ML applications on premises within its data center. The legacy ML workflow presented several challenges, particularly in the time-intensive model development and deployment processes.

Legacy workflow: On-premises ML development and deployment

When the data science team needed to build a new fraud detection model, the development process typically took 2–4 weeks. During this phase, data scientists performed tasks such as the following:

  • Data cleaning and exploratory data analysis (EDA)
  • Feature engineering
  • Model prototyping and training experiments
  • Model evaluation to finalize the fraud detection model

These steps were carried out using on-premises servers, which limited the number of experiments that could be run concurrently due to hardware constraints. After the model was finalized, the data science team handed over the model artifacts and implementation code—along with detailed instructions—to the software developers and DevOps teams. This transition initiated the model deployment process, which involved:

  • Provisioning infrastructure – The software team set up the necessary infrastructure to host the ML API in a test environment.
  • API implementation and testing – Extensive testing and communication between the data science and software teams were required to make sure the model inference API behaved as expected. This phase typically added 2–3 weeks to the timeline.
  • Production deployment – The DevOps and system engineering teams provisioned and scaled on-premises hardware to deploy the ML API into production, a process that could take up to several weeks depending on resource availability.

Overall, the legacy workflow was prone to delays and inefficiencies, with significant communication overhead and a reliance on manual provisioning.

Modern workflow: SageMaker and MLOps

With the migration to SageMaker and the adoption of a machine learning operations (MLOps) architecture, Radial streamlined its entire ML lifecycle—from development to deployment. The new workflow consists of the following stages:

  • Model development – The data science team continues to perform tasks such as data cleaning, EDA, feature engineering, and model training within 2–4 weeks. However, with the scalable and on-demand compute resources of SageMaker, they can conduct more training experiments in the same timeframe, leading to improved model performance and faster iterations.
  • Seamless model deployment – When a model is ready, the data science team approves it in SageMaker and triggers the MLOps pipeline to deploy the model to the test (pre-production) environment. This eliminates the need for back-and-forth communication with the software team at this stage. Key improvements include:
    • The ML API inference code is preconfigured and wrapped by the data scientists during development, providing consistent behavior between development and deployment.
    • Deployment to test environments takes minutes, because the MLOps pipeline automates infrastructure provisioning and deployment.
  • Final integration and testing – The software team quickly integrates the API and performs necessary tests, such as integration and load testing. After the tests are successful, the team triggers the pipeline to deploy the ML models into production, which takes only minutes.

The MLOps pipeline not only automates the provisioning of cloud resources, but also provides consistency between pre-production and production environments, minimizing deployment risks.

Legacy vs. modern workflow comparison

The new workflow significantly reduces time and complexity:

  • Manual provisioning and communication overheads are reduced
  • Deployment times are reduced from weeks to minutes
  • Consistency between environments provides smoother transitions from development to production

This transformation enables Radial to respond more quickly to evolving fraud trends while maintaining high standards of efficiency and reliability. The following figure provides a visual comparison of the legacy and modern ML workflows.

Solution overview

When Radial migrated their fraud detection systems to the cloud, they collaborated with AWS Machine Learning Specialists and Solutions Architects to redesign how Radial manages the lifecycle of ML models. By using AWS and integrating continuous integration and delivery (CI/CD) pipelines with GitLab, Terraform, and AWS CloudFormation, Radial developed a scalable, efficient, and secure MLOps architecture. This new design accelerates model development and deployment, so Radial can respond faster to evolving fraud detection challenges.

The architecture incorporates best practices in MLOps, making sure that the different stages of the ML lifecycle—from data preparation to production deployment—are optimized for performance and reliability. Key components of the solution include:

  • SageMaker – Central to the architecture, SageMaker facilitates model training, evaluation, and deployment with built-in tools for monitoring and version control
  • GitLab CI/CD pipelines – These pipelines automate the workflows for testing, building, and deploying ML models, reducing manual overhead and providing consistent processes across environments
  • Terraform and AWS CloudFormation – These services enable infrastructure as code (IaC) to provision and manage AWS resources, providing a repeatable and scalable setup for ML applications

The overall solution architecture is illustrated in the following figure, showcasing how each component integrates seamlessly to support Radial’s fraud detection initiatives.

Account isolation for secure and scalable MLOps

To streamline operations and enforce security, the MLOps architecture is built on a multi-account strategy that isolates environments based on their purpose. This design enforces strict security boundaries, reduces risks, and promotes efficient collaboration across teams. The accounts are as follows:

  • Development account (model development workspace) – The development account is a dedicated workspace for data scientists to experiment and develop models. Secure data management is enforced by isolating datasets within Amazon Simple Storage Service (Amazon S3) buckets. Data scientists use SageMaker Studio for data exploration, feature engineering, and scalable model training. When the model build CI/CD pipeline in GitLab is triggered, Terraform and CloudFormation scripts automate the provisioning of infrastructure and AWS resources needed for SageMaker training pipelines. Trained models that meet predefined evaluation metrics are versioned and registered in the Amazon SageMaker Model Registry. With this setup, data scientists and ML engineers can perform multiple rounds of training experiments, review results, and finalize the best model for deployment testing.
  • Pre-production account (staging environment) – After a model is validated and approved in the development account, it’s moved to the pre-production account for staging. At this stage, the data science team triggers the model deploy CI/CD pipeline in GitLab to configure the endpoint in the pre-production environment. Model artifacts and inference images are synced from the development account to the pre-production environment. The latest approved model is deployed as an API in a SageMaker endpoint, where it undergoes thorough integration and load testing to validate performance and reliability.
  • Production account (live environment) – After passing the pre-production tests, the model is promoted to the production account for live deployment. This account mirrors the configurations of the pre-production environment to maintain consistency and reliability. The MLOps production team triggers the model deploy CI/CD pipeline to launch the production ML API. When it’s live, the model is continuously monitored using Amazon SageMaker Model Monitor and Amazon CloudWatch to make sure it performs as expected. In the event of deployment issues, automated rollback mechanisms revert to a stable model version, minimizing disruptions and maintaining business continuity.

With this multi-account architecture, data scientists can work independently while providing seamless transitions between development and production. The automation of CI/CD pipelines reduces deployment cycles, enhances scalability, and provides the security and performance necessary to maintain effective fraud detection systems.

Data privacy and compliance requirements

Radial prioritizes the protection and security of their customers' data. As a leader in ecommerce solutions, they are committed to meeting high standards of data privacy and regulatory compliance, such as the CCPA and PCI DSS. Radial's fraud detection ML APIs process sensitive information such as transaction details and behavioral analytics. To meet strict compliance requirements, they use AWS Direct Connect, Amazon Virtual Private Cloud (Amazon VPC), and Amazon S3 with AWS Key Management Service (AWS KMS) encryption to build a secure and compliant architecture.

Protecting data in transit with Direct Connect

Data is never exposed to the public internet at any stage. To maintain the secure transfer of sensitive data between on-premises systems and AWS environments, Radial uses Direct Connect, which offers the following capabilities:

  • Dedicated network connection – Direct Connect establishes a private, high-speed connection between the data center and AWS, alleviating the risks associated with public internet traffic, such as interception or unauthorized access
  • Consistent and reliable performance – Direct Connect provides consistent bandwidth and low latency, making sure fraud detection APIs operate without delays, even during peak transaction volumes

Isolating workloads with Amazon VPC

When data reaches AWS, it’s processed in a VPC for maximum security. This offers the following benefits:

  • Private subnets for sensitive data – The components of the fraud detection ML API, including SageMaker endpoints and AWS Lambda functions, reside in private subnets, which are not accessible from the public internet
  • Controlled access with security groups – Strict access control is enforced through security groups and network access control lists (ACLs), allowing only authorized systems and users to interact with VPC resources
  • Data segregation by account – As mentioned previously regarding the multi-account strategy, workloads are isolated across development, staging, and production accounts, each with its own VPC, to limit cross-environment access and maintain compliance

Securing data at rest with Amazon S3 and AWS KMS encryption

Data involved in the fraud detection workflows (for both model development and real-time inference) is securely stored in Amazon S3, with encryption powered by AWS KMS. This offers the following benefits:

  • AWS KMS encryption for sensitive data – Transaction logs, model artifacts, and prediction results are encrypted at rest using managed KMS keys
  • Encryption in transit – Interactions with Amazon S3, including uploads and downloads, are encrypted to make sure data remains secure during transfer
  • Data retention policies – Lifecycle policies enforce data retention limits, making sure sensitive data is stored only as long as necessary for compliance and business purposes before scheduled deletion
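
As an illustrative sketch (the bucket name, key alias, prefix, and retention period below are hypothetical, not Radial's actual configuration), these controls can be expressed with boto3 as follows:

import boto3

s3 = boto3.client("s3")

# Encrypt an object at rest with a customer managed KMS key (alias is hypothetical)
with open("model.tar.gz", "rb") as f:
    s3.put_object(
        Bucket="fraud-detection-artifacts",       # hypothetical bucket
        Key="models/fraud-model-v1/model.tar.gz",
        Body=f,
        ServerSideEncryption="aws:kms",
        SSEKMSKeyId="alias/fraud-detection-key",  # hypothetical key alias
    )

# Enforce a retention limit with a lifecycle rule (expiration period is illustrative)
s3.put_bucket_lifecycle_configuration(
    Bucket="fraud-detection-artifacts",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "expire-transaction-logs",
            "Filter": {"Prefix": "transaction-logs/"},
            "Status": "Enabled",
            "Expiration": {"Days": 365},
        }]
    },
)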

Data privacy by design

Data privacy is integrated into every step of the ML API workflow:

  • Secure inference – Incoming transaction data is processed within VPC-secured SageMaker endpoints, making sure predictions are made in a private environment
  • Minimal data retention – Real-time transaction data is anonymized where possible, and only aggregated results are stored for future analysis
  • Access control and governance – Resources are governed by AWS Identity and Access Management (IAM) policies, making sure only authorized personnel and services can access data and infrastructure

Benefits of the new ML workflow on AWS

To summarize, the implementation of the new ML workflow on AWS offers several key benefits:

  • Dynamic scalability – AWS enables Radial to scale their infrastructure dynamically to handle spikes in both model training and real-time inference traffic, providing optimal performance during peak periods.
  • Faster infrastructure provisioning – The new workflow accelerates the model deployment cycle, reducing the time to provision infrastructure and deploy new models by up to several weeks.
  • Consistency in model training and deployment – By streamlining the process, Radial achieves consistent model training and deployment across environments. This reduces communication overhead between the data science team and engineering/DevOps teams, simplifying the implementation of model deployment.
  • Infrastructure as code – With IaC, they benefit from version control and reusability, reducing manual configurations and minimizing the risk of errors during deployment.
  • Built-in model monitoring – The built-in capabilities of SageMaker, such as experiment tracking and data drift detection, help them maintain model performance and provide timely updates.

Key takeaways and lessons learned from Radial’s ML model migration

To help modernize your MLOps workflow on AWS, the following are a few key takeaways and lessons learned from Radial’s experience:

  • Collaborate with AWS for customized solutions – Engage with AWS to discuss your specific use cases and identify templates that closely match your requirements. Although AWS offers a wide range of templates for common MLOps scenarios, they might need to be customized to fit your unique needs. Explore how to adapt these templates for migrating or revamping your ML workflows.
  • Iterative customization and support – As you customize your solution, work closely with both your internal team and AWS Support to address any issues. Plan for execution-based assessments and schedule workshops with AWS to resolve challenges at each stage. This might be an iterative process, but it makes sure your modules are optimized for your environment.
  • Use account isolation for security and collaboration – Use account isolation to separate model development, pre-production, and production environments. This setup promotes seamless collaboration between your data science team and DevOps/MLOps team, while also enforcing strong security boundaries between environments.
  • Maintain scalability with proper configuration – Radial’s fraud detection models successfully handled transaction spikes during peak seasons. To maintain scalability, configure instance quota limits correctly within AWS, and conduct thorough load testing before peak traffic periods to avoid any performance issues during high-demand times.
  • Secure model metadata sharing – Consider opting out of sharing model metadata when building your SageMaker pipeline to make sure your aggregate-level model information remains secure.
  • Prevent image conflicts with proper configuration – When using an AWS managed image for model inference, specify a hash digest within your SageMaker pipeline. Because the latest hash digest might change dynamically for the same image model version, this step helps avoid conflicts when retrieving inference images during model deployment.
  • Fine-tune scaling metrics through load testing – Fine-tune scaling metrics, such as instance type and automatic scaling thresholds, based on proper load testing. Simulate your business’s traffic patterns during both normal and peak periods to confirm your infrastructure scales effectively.
  • Applicability beyond fraud detection – Although the implementation described here is tailored to fraud detection, the MLOps architecture is adaptable to a wide range of ML use cases. Companies looking to modernize their MLOps workflows can apply the same principles to various ML projects.

Conclusion

This post demonstrated the high-level approach taken by Radial’s fraud team to successfully modernize their ML workflow by implementing an MLOps pipeline and migrating from on premises to the AWS Cloud. This was achieved through close collaboration with AWS during the EBA process. The EBA process begins with 4–6 weeks of preparation, culminating in a 3-day intensive workshop where a minimum viable MLOps pipeline is created using SageMaker, Amazon S3, GitLab, Terraform, and AWS CloudFormation. Following the EBA, teams typically spend an additional 2–6 weeks to refine the pipeline and fine-tune the models through feature engineering and hyperparameter optimization before production deployment. This approach enabled Radial to effectively select relevant AWS services and features, accelerating the training, deployment, and testing of ML models in a pre-production SageMaker environment. As a result, Radial successfully deployed multiple new ML models on AWS in their production environment around Q3 2024, achieving a more than 75% reduction in ML model deployment cycle and a 9% improvement in overall model performance.

“In the ecommerce retail space, mitigating fraudulent transactions and enhancing consumer experiences are top priorities for merchants. High-performing machine learning models have become invaluable tools in achieving these goals. By leveraging AWS services, we have successfully built a modernized machine learning workflow that enables rapid iterations in a stable and secure environment.”

– Lan Zhang, Head of Data Science and Advanced Analytics

To learn more about EBAs and how this approach can benefit your organization, reach out to your AWS Account Manager or Customer Solutions Manager. For additional information, refer to Using experience-based acceleration to achieve your transformation and Get to Know EBA.


About the Authors

Jake Wen is a Solutions Architect at AWS, driven by a passion for Machine Learning, Natural Language Processing, and Deep Learning. He assists Enterprise customers in achieving modernization and scalable deployment in the Cloud. Beyond the tech world, Jake finds delight in skateboarding, hiking, and piloting air drones.

Qing Chen is a senior data scientist at Radial, a full-stack solution provider for ecommerce merchants. In his role, he modernizes and manages the machine learning framework in the payment & fraud organization, driving a solid data-driven fraud decisioning flow to balance risk & customer friction for merchants.

Mark Sinclair is a senior cloud architect at Radial, a full-stack solution provider for ecommerce merchants. In his role, he designs, implements and manages the cloud infrastructure and DevOps for Radial engineering systems, driving a solid engineering architecture and workflow to provide highly scalable transactional services for Radial clients.

Read More

Contextual retrieval in Anthropic using Amazon Bedrock Knowledge Bases

Contextual retrieval in Anthropic using Amazon Bedrock Knowledge Bases

For an AI model to perform effectively in specialized domains, it requires access to relevant background knowledge. A customer support chat assistant, for instance, needs detailed information about the business it serves, and a legal analysis tool must draw upon a comprehensive database of past cases.

To equip large language models (LLMs) with this knowledge, developers often use Retrieval Augmented Generation (RAG). This technique retrieves pertinent information from a knowledge base and incorporates it into the user’s prompt, significantly improving the model’s responses. However, a key limitation of traditional RAG systems is that they often lose contextual nuances when encoding data, leading to irrelevant or incomplete retrievals from the knowledge base.

Challenges in traditional RAG

In traditional RAG, documents are often divided into smaller chunks to optimize retrieval efficiency. Although this method performs well in many cases, it can introduce challenges when individual chunks lack the necessary context. For example, if a policy states that remote work requires “6 months of tenure” (chunk 1) and “HR approval for exceptions” (chunk 3), but omits the middle chunk linking exceptions to manager approval, a user asking about eligibility for a 3-month tenure employee might receive a misleading “No” instead of the correct “Only with HR approval.” This occurs because isolated chunks fail to preserve dependencies between clauses, highlighting a key limitation of basic chunking strategies in RAG systems.

Contextual retrieval enhances traditional RAG by adding chunk-specific explanatory context to each chunk before generating embeddings. This approach enriches the vector representation with relevant contextual information, enabling more accurate retrieval of semantically related content when responding to user queries. For instance, when asked about remote work eligibility, it fetches both the tenure requirement and the HR exception clause, enabling the LLM to provide an accurate response such as “Normally no, but HR may approve exceptions.” By intelligently stitching fragmented information, contextual retrieval mitigates the pitfalls of rigid chunking, delivering more reliable and nuanced answers.
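
Conceptually, the enrichment step can be sketched as follows. This is a minimal sketch, assuming Anthropic's Claude 3 Haiku in Amazon Bedrock (the model used later in this post); the prompt wording and token limit are illustrative choices:

import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime")

def contextualize_chunk(document: str, chunk: str) -> str:
    """Ask Claude for a short context blurb and prepend it to the chunk
    before the chunk is embedded (prompt wording is illustrative)."""
    prompt = (
        f"<document>\n{document}\n</document>\n"
        f"Here is a chunk from the document:\n<chunk>\n{chunk}\n</chunk>\n"
        "Write one or two sentences situating this chunk within the overall "
        "document, to improve search retrieval. Answer with the context only."
    )
    response = bedrock_runtime.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 200,
            "messages": [{"role": "user", "content": prompt}],
        }),
    )
    context = json.loads(response["body"].read())["content"][0]["text"]
    return f"{context}\n\n{chunk}"  # the enriched text is what gets embedded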

In this post, we demonstrate how to use contextual retrieval with Anthropic and Amazon Bedrock Knowledge Bases.

Solution overview

This solution uses Amazon Bedrock Knowledge Bases, incorporating a custom Lambda function to transform data during the knowledge base ingestion process. This Lambda function processes documents from Amazon Simple Storage Service (Amazon S3), chunks them into smaller pieces, enriches each chunk with contextual information using Anthropic’s Claude in Amazon Bedrock, and then saves the results back to an intermediate S3 bucket. Here’s a step-by-step explanation:

  1. Read input files from an S3 bucket specified in the event.
  2. Chunk the input data into smaller pieces.
  3. Generate contextual information for each chunk using Anthropic's Claude 3 Haiku.
  4. Write the processed chunks with their metadata back to an intermediate S3 bucket.
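
A minimal sketch of a Lambda handler following these four steps is shown below. The event fields and the chunk_text and add_context helpers are assumptions for illustration, not the exact contract implemented in the accompanying repository:

import json
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # 1. Read the input file from the S3 bucket named in the event (fields are hypothetical)
    bucket = event["sourceBucket"]
    key = event["sourceKey"]
    text = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")

    # 2. Split the document into smaller chunks (chunk_text is a hypothetical helper)
    chunks = chunk_text(text, max_tokens=300, overlap=0.2)

    # 3. Enrich each chunk with context generated by Anthropic's Claude 3 Haiku
    #    (add_context is a hypothetical helper wrapping the Bedrock call)
    enriched = [add_context(text, chunk) for chunk in chunks]

    # 4. Write the processed chunks and their metadata to the intermediate bucket
    s3.put_object(
        Bucket=event["intermediateBucket"],
        Key=f"{key}.processed.json",
        Body=json.dumps({"chunks": enriched}),
    )
    return {"statusCode": 200}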

The following diagram is the solution architecture.

Prerequisites

Before you begin, you can deploy this solution by downloading the required files and following the instructions in the corresponding GitHub repository. This architecture is built around using the proposed chunking solution to implement contextual retrieval with Amazon Bedrock Knowledge Bases.

Implement contextual retrieval in Amazon Bedrock

In this section, we demonstrate how to use the proposed custom chunking solution to implement contextual retrieval using Amazon Bedrock Knowledge Bases. Developers can use custom chunking strategies in Amazon Bedrock to optimize how large documents or datasets are divided into smaller, more manageable pieces for processing by foundation models (FMs). This approach enables more efficient and effective handling of long-form content, improving the quality of responses. By tailoring the chunking method to the specific characteristics of the data and the requirements of the task at hand, developers can enhance the performance of natural language processing applications built on Amazon Bedrock. Custom chunking can involve techniques such as semantic segmentation, sliding windows with overlap, or using document structure to create logical divisions in the text.
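
As a simple illustration of one of these techniques, the following sketch implements a sliding window with overlap. Word-based splitting is a simplification; a production implementation would typically count tokens:

def sliding_window_chunks(text: str, window: int = 300, overlap: float = 0.2):
    """Split text into overlapping chunks of roughly `window` words."""
    words = text.split()
    step = max(1, int(window * (1 - overlap)))  # advance by 80% of the window
    chunks = []
    for start in range(0, len(words), step):
        chunk = " ".join(words[start:start + window])
        if chunk:
            chunks.append(chunk)
        if start + window >= len(words):
            break
    return chunks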

To implement contextual retrieval in Amazon Bedrock, complete the following steps, which can be found in the notebook in the GitHub repository.

To set up the environment, follow these steps:

  1. Install the required dependencies:
    %pip install --upgrade pip --quiet
    %pip install -r requirements.txt --no-deps

  2. Import the required libraries and set up AWS clients:
    import os
    import sys
    import time
    import boto3
    import logging
    import pprint
    import json
    from pathlib import Path
    
    # AWS Clients Setup
    s3_client = boto3.client('s3')
    sts_client = boto3.client('sts')
    session = boto3.session.Session()
    region = session.region_name
    account_id = sts_client.get_caller_identity()["Account"]
    bedrock_agent_client = boto3.client('bedrock-agent')
    bedrock_agent_runtime_client = boto3.client('bedrock-agent-runtime')
    
    # Configure logging
    logging.basicConfig(
        format='[%(asctime)s] p%(process)s {%(filename)s:%(lineno)d} %(levelname)s - %(message)s',
        level=logging.INFO
    )
    logger = logging.getLogger(__name__)

  3. Define knowledge base parameters:
    # Generate unique suffix for resource names
    timestamp_str = time.strftime("%Y%m%d%H%M%S", time.localtime(time.time()))[-7:]
    suffix = f"{timestamp_str}"
    
    # Resource names
    knowledge_base_name_standard = 'standard-kb'
    knowledge_base_name_custom = 'custom-chunking-kb'
    knowledge_base_description = "Knowledge Base containing complex PDF."
    bucket_name = f'{knowledge_base_name_standard}-{suffix}'
    intermediate_bucket_name = f'{knowledge_base_name_standard}-intermediate-{suffix}'
    lambda_function_name = f'{knowledge_base_name_custom}-lambda-{suffix}'
    foundation_model = "anthropic.claude-3-sonnet-20240229-v1:0"
    
    # Define data sources
    data_source=[{"type": "S3", "bucket_name": bucket_name}]

Create knowledge bases with different chunking strategies

To create knowledge bases with different chunking strategies, use the following code.

  1. Standard fixed chunking:
    # Create knowledge base with fixed chunking
    knowledge_base_standard = BedrockKnowledgeBase(
        kb_name=f'{knowledge_base_name_standard}-{suffix}',
        kb_description=knowledge_base_description,
        data_sources=data_source,
        chunking_strategy="FIXED_SIZE",
        suffix=f'{suffix}-f'
    )
    
    # Upload data to S3
    def upload_directory(path, bucket_name):
        for root, dirs, files in os.walk(path):
            for file in files:
                file_to_upload = os.path.join(root, file)
                if file not in ["LICENSE", "NOTICE", "README.md"]:
                    print(f"uploading file {file_to_upload} to {bucket_name}")
                    s3_client.upload_file(file_to_upload, bucket_name, file)
                else:
                    print(f"Skipping file {file_to_upload}")
    
    upload_directory("../synthetic_dataset", bucket_name)
    
    # Start ingestion job
    time.sleep(30)  # ensure KB is available
    knowledge_base_standard.start_ingestion_job()
    kb_id_standard = knowledge_base_standard.get_knowledge_base_id()

  2. Custom chunking with a Lambda function:
    # Create Lambda function for custom chunking
    import io
    import zipfile

    lambda_client = boto3.client('lambda')

    def create_lambda_function():
        # Package lambda_function.py into an in-memory ZIP archive, as
        # required by the CreateFunction API's ZipFile parameter
        zip_buffer = io.BytesIO()
        with zipfile.ZipFile(zip_buffer, 'w') as zf:
            zf.write('lambda_function.py')
        zip_buffer.seek(0)

        # lambda_role_arn is the IAM role created earlier in the notebook
        response = lambda_client.create_function(
            FunctionName=lambda_function_name,
            Runtime='python3.9',
            Role=lambda_role_arn,
            Handler='lambda_function.lambda_handler',
            Code={'ZipFile': zip_buffer.read()},
            Timeout=900,
            MemorySize=256
        )
        return response['FunctionArn']
    
    # Create knowledge base with custom chunking
    knowledge_base_custom = BedrockKnowledgeBase(
        kb_name=f'{knowledge_base_name_custom}-{suffix}',
        kb_description=knowledge_base_description,
        data_sources=data_source,
        lambda_function_name=lambda_function_name,
        intermediate_bucket_name=intermediate_bucket_name,
        chunking_strategy="CUSTOM",
        suffix=f'{suffix}-c'
    )
    
    # Start ingestion job
    time.sleep(30)
    knowledge_base_custom.start_ingestion_job()
    kb_id_custom = knowledge_base_custom.get_knowledge_base_id()

Evaluate performance using RAGAS framework

To evaluate performance using the RAGAS framework, follow these steps:

  1. Set up RAGAS evaluation:
    from ragas import SingleTurnSample, EvaluationDataset
    from ragas import evaluate
    from ragas.metrics import (
        context_recall,
        context_precision,
        answer_correctness
    )
    # ChatBedrock and BedrockEmbeddings come from the langchain-aws package
    from langchain_aws import ChatBedrock, BedrockEmbeddings

    # Bedrock runtime client used by the evaluation models
    bedrock_client = boto3.client('bedrock-runtime')

    # Initialize Bedrock models for evaluation
    TEXT_GENERATION_MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"
    EVALUATION_MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"

    llm_for_evaluation = ChatBedrock(model_id=EVALUATION_MODEL_ID, client=bedrock_client)
    bedrock_embeddings = BedrockEmbeddings(
        model_id="amazon.titan-embed-text-v2:0",
        client=bedrock_client
    )

  2. Prepare evaluation dataset:
    # Define test questions and ground truths
    questions = [
        "What was the primary reason for the increase in net cash provided by operating activities for Octank Financial in 2021?",
        "In which year did Octank Financial have the highest net cash used in investing activities, and what was the primary reason for this?",
        # Add more questions...
    ]

    ground_truths = [
        "The increase in net cash provided by operating activities was primarily due to an increase in net income and favorable changes in operating assets and liabilities.",
        "Octank Financial had the highest net cash used in investing activities in 2021, at $360 million...",
        # Add corresponding ground truths...
    ]

    def prepare_eval_dataset(kb_id, questions, ground_truths):
        samples = []
        for question, ground_truth in zip(questions, ground_truths):
            # Get response and context (retrieve_and_generate is defined in the notebook)
            response = retrieve_and_generate(question, kb_id)
            answer = response["output"]["text"]

            # Process contexts
            contexts = []
            for citation in response["citations"]:
                context_texts = [
                    ref["content"]["text"]
                    for ref in citation["retrievedReferences"]
                    if "content" in ref and "text" in ref["content"]
                ]
                contexts.extend(context_texts)

            # Create sample
            sample = SingleTurnSample(
                user_input=question,
                retrieved_contexts=contexts,
                response=answer,
                reference=ground_truth
            )
            samples.append(sample)

        return EvaluationDataset(samples=samples)

  3. Run evaluation and compare results:
    import pandas as pd

    # Evaluate both approaches
    contextual_chunking_dataset = prepare_eval_dataset(kb_id_custom, questions, ground_truths)
    default_chunking_dataset = prepare_eval_dataset(kb_id_standard, questions, ground_truths)

    # Define metrics
    metrics = [context_recall, context_precision, answer_correctness]

    # Run evaluation
    contextual_chunking_result = evaluate(
        dataset=contextual_chunking_dataset,
        metrics=metrics,
        llm=llm_for_evaluation,
        embeddings=bedrock_embeddings,
    )

    default_chunking_result = evaluate(
        dataset=default_chunking_dataset,
        metrics=metrics,
        llm=llm_for_evaluation,
        embeddings=bedrock_embeddings,
    )

    # Compare mean scores across metrics
    comparison_df = pd.DataFrame({
        'Default Chunking': default_chunking_result.to_pandas().mean(),
        'Contextual Chunking': contextual_chunking_result.to_pandas().mean()
    })

    # Visualize results: highlight the better score for each metric
    def highlight_max(s):
        is_max = s == s.max()
        return ['background-color: #90EE90' if v else '' for v in is_max]

    comparison_df.style.apply(
        highlight_max,
        axis=1,
        subset=['Default Chunking', 'Contextual Chunking']
    )
Performance benchmarks

To evaluate the performance of the proposed contextual retrieval approach, we used the AWS Decision Guide: Choosing a generative AI service as the document for RAG testing. We set up two Amazon Bedrock knowledge bases for the evaluation:

  • One knowledge base with the default chunking strategy, which uses 300 tokens per chunk with a 20% overlap
  • Another knowledge base with the custom contextual retrieval chunking approach, which has a custom contextual retrieval Lambda transformer in addition to the fixed chunking strategy that also uses 300 tokens per chunk with a 20% overlap

We used the RAGAS framework to assess the performance of these two approaches using small datasets. Specifically, we looked at the following metrics:

  • context_recall – Context recall measures how many of the relevant documents (or pieces of information) were successfully retrieved
  • context_precision – Context precision is a metric that measures the proportion of relevant chunks in the retrieved_contexts
  • answer_correctness – The assessment of answer correctness involves gauging the accuracy of the generated answer when compared to the ground truth

The following code shows the metric setup, along with the test questions and ground truths used for the evaluation:
from ragas import SingleTurnSample, EvaluationDataset
from ragas import evaluate
from ragas.metrics import (
    context_recall,
    context_precision,
    answer_correctness
)

#specify the metrics here
metrics = [
    context_recall,
    context_precision,
    answer_correctness
]

questions = [
    "What are the main AWS generative AI services covered in this guide?",
    "How does Amazon Bedrock differ from the other generative AI services?",
    "What are some key factors to consider when choosing a foundation model for your use case?",
    "What infrastructure services does AWS offer to support training and inference of large AI models?",
    "Where can I find more resources and information related to the AWS generative AI services?"
]
ground_truths = [
    "The main AWS generative AI services covered in this guide are Amazon Q Business, Amazon Q Developer, Amazon Bedrock, and Amazon SageMaker AI.",
    "Amazon Bedrock is a fully managed service that allows you to build custom generative AI applications with a choice of foundation models, including the ability to fine-tune and customize the models with your own data.",
    "Key factors to consider when choosing a foundation model include the modality (text, image, etc.), model size, inference latency, context window, pricing, fine-tuning capabilities, data quality and quantity, and overall quality of responses.",
    "AWS offers specialized hardware like AWS Trainium and AWS Inferentia to maximize the performance and cost-efficiency of training and inference for large AI models.",
    "You can find more resources like architecture diagrams, whitepapers, and solution guides on the AWS website. The document also provides links to relevant blog posts and documentation for the various AWS generative AI services."
]

The results obtained using the default chunking strategy are presented in the following table.

The results obtained using the contextual retrieval chunking strategy are presented in the following table. It demonstrates improved performance across the key metrics evaluated, including context recall, context precision, and answer correctness.

By aggregating the results, we can observe that the contextual chunking approach outperformed the default chunking strategy across the context_recall, context_precision, and answer_correctness metrics. This indicates the benefits of the more sophisticated contextual retrieval techniques implemented.

Implementation considerations

When implementing contextual retrieval using Amazon Bedrock, several factors need careful consideration. First, the custom chunking strategy must be optimized for both performance and accuracy, requiring thorough testing across different document types and sizes. The Lambda function’s memory allocation and timeout settings should be calibrated based on the expected document complexity and processing requirements, with initial recommendations of 1024 MB memory and 900-second timeout serving as baseline configurations. Organizations must also configure IAM roles with the principle of least privilege while maintaining sufficient permissions for Lambda to interact with Amazon S3 and Amazon Bedrock services. Additionally, the vectorization process and knowledge base configuration should be fine-tuned to balance between retrieval accuracy and computational efficiency, particularly when scaling to larger datasets.

Infrastructure scalability and monitoring considerations are equally crucial for successful implementation. Organizations should implement robust error-handling mechanisms within the Lambda function to manage various document formats and potential processing failures gracefully. Monitoring systems should be established to track key metrics such as chunking performance, retrieval accuracy, and system latency, enabling proactive optimization and maintenance.

Using Langfuse with Amazon Bedrock is a good option to introduce observability to this solution. The S3 bucket structure for both source and intermediate storage should be designed with clear lifecycle policies and access controls and consider Regional availability and data residency requirements. Furthermore, implementing a staged deployment approach, starting with a subset of data before scaling to full production workloads, can help identify and address potential bottlenecks or optimization opportunities early in the implementation process.

Cleanup

When you’re done experimenting with the solution, clean up the resources you created to avoid incurring future charges.

Conclusion

By combining Anthropic’s sophisticated language models with the robust infrastructure of Amazon Bedrock, organizations can now implement intelligent systems for information retrieval that deliver deeply contextualized, nuanced responses. The implementation steps outlined in this post provide a clear pathway for organizations to use contextual retrieval capabilities through Amazon Bedrock. By following the detailed configuration process, from setting up IAM permissions to deploying custom chunking strategies, developers and organizations can unlock the full potential of context-aware AI systems.

By leveraging Anthropic’s language models, organizations can deliver more accurate and meaningful results to their users while staying at the forefront of AI innovation. You can get started today with contextual retrieval using Anthropic’s language models through Amazon Bedrock and transform how your AI processes information with a small-scale proof of concept using your existing data. For personalized guidance on implementation, contact your AWS account team.


About the Authors

Suheel Farooq is a Principal Engineer in AWS Support Engineering, specializing in Generative AI, Artificial Intelligence, and Machine Learning. As a Subject Matter Expert in Amazon Bedrock and SageMaker, he helps enterprise customers design, build, modernize, and scale their AI/ML and Generative AI workloads on AWS. In his free time, Suheel enjoys working out and hiking.

Qingwei Li is a Machine Learning Specialist at Amazon Web Services. He received his Ph.D. in Operations Research after he broke his advisor's research grant account and failed to deliver the Nobel Prize he promised. Currently he helps customers in the financial service and insurance industry build machine learning solutions on AWS. In his spare time, he likes reading and teaching.

Vinita is a Senior Serverless Specialist Solutions Architect at AWS. She combines AWS knowledge with strong business acumen to architect innovative solutions that drive quantifiable value for customers and has been exceptional at navigating complex challenges. Her technical expertise in application modernization, generative AI, and cloud computing, together with her ability to drive measurable business impact, creates great value in customers' journeys with AWS.

Sharon Li is an AI/ML Specialist Solutions Architect at Amazon Web Services (AWS) based in Boston, Massachusetts. With a passion for leveraging cutting-edge technology, Sharon is at the forefront of developing and deploying innovative generative AI solutions on the AWS cloud platform.

Venkata Moparthi is a Senior Solutions Architect who specializes in cloud migrations, generative AI, and secure architecture for financial services and other industries. He combines technical expertise with customer-focused strategies to accelerate digital transformation and drive business outcomes through optimized cloud solutions.

Read More

Run small language models cost-efficiently with AWS Graviton and Amazon SageMaker AI

Run small language models cost-efficiently with AWS Graviton and Amazon SageMaker AI

As organizations look to incorporate AI capabilities into their applications, large language models (LLMs) have emerged as powerful tools for natural language processing tasks. Amazon SageMaker AI provides a fully managed service for deploying these machine learning (ML) models with multiple inference options, allowing organizations to optimize for cost, latency, and throughput. AWS has always provided customers with choice. That includes model choice, hardware choice, and tooling choice. In terms of hardware choice, in addition to NVIDIA GPUs and AWS custom AI chips, CPU-based instances represent (thanks to the latest innovations in CPU hardware) an additional choice for customers who want to run generative AI inference, like hosting small language models and asynchronous agents.

Traditional LLMs with billions of parameters require significant computational resources. For example, a 7-billion-parameter model like Meta Llama 7B at BFloat16 (2 bytes per parameter) typically needs around 14 GB of GPU memory to store the model weights—the total GPU memory requirement is usually 3–4 times bigger at long sequence lengths. However, recent developments in model quantization and knowledge distillation have made it possible to run smaller, efficient language models on CPU infrastructure. Although these models might not match the capabilities of the largest LLMs, they offer a practical alternative for many real-world applications where cost optimization is crucial.

In this post, we demonstrate how to deploy a small language model on SageMaker AI by extending our pre-built containers to be compatible with AWS Graviton instances. We first provide an overview of the solution, and then provide detailed implementation steps to help you get started. You can find the example notebook in the GitHub repo.

Solution overview

Our solution uses SageMaker AI with Graviton3 processors to run small language models cost-efficiently. The key components include:

  • SageMaker AI hosted endpoints for model serving
  • Graviton3 based instances (ml.c7g series) for computation
  • A container with llama.cpp installed for Graviton-optimized inference
  • Pre-quantized GGUF format models

Graviton processors, which are specifically designed for cloud workloads, provide an optimal platform for running these quantized models. Graviton3 based instances can deliver up to 50% better price-performance compared to traditional x86-based CPU instances for ML inference workloads.

We have used Llama.cpp as the inference framework. It supports quantized general matrix multiply-add (GEMM) kernels for faster inference and reduced memory use. The quantized GEMM kernels are optimized for Graviton processors using Arm Neon and SVE-based matrix multiply-accumulate (MMLA) instructions.

Llama.cpp uses GGUF, a special binary format for storing the model and metadata. The GGUF format is optimized for quick loading and saving of models, making it highly efficient for inference purposes. Existing models need to be converted to GGUF format before they can be used for inference. You can find many popular GGUF format models on the Hugging Face Hub, or you can convert your own model to GGUF format using the conversion script provided with llama.cpp.
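
For example, you can pull a pre-quantized GGUF file from the Hugging Face Hub with the huggingface_hub library; the repository and file names below are illustrative:

from huggingface_hub import hf_hub_download

# Download a pre-quantized q4_0 GGUF model file (names are illustrative)
model_path = hf_hub_download(
    repo_id="TheBloke/Llama-2-7B-GGUF",
    filename="llama-2-7b.Q4_0.gguf",
)
print(model_path)  # local path to the downloaded .gguf file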

The following diagram illustrates the solution architecture.

To deploy your model on SageMaker with Graviton, you will need to complete the following steps:

  1. Create a Docker container compatible with ARM64 architecture.
  2. Prepare your model and inference code.
  3. Create a SageMaker model and deploy to an endpoint with a Graviton instance.

We walk through these steps in the following sections.

Prerequisites

To implement this solution, you need an AWS account with the necessary permissions.

Create a Docker container compatible with ARM64 architecture

Let’s first review how SageMaker AI works with Docker containers. Basically, by packaging an algorithm in a container, you can bring almost any code to the SageMaker environment, regardless of programming language, environment, framework, or dependencies. For more information and an example of how to build your own Docker container for training and inference with SageMaker AI, see Build your own algorithm container.

You can also extend a pre-built container to accommodate your needs. By extending a pre-built image, you can use the included deep learning libraries and settings without having to create an image from scratch. You can extend the container to add libraries, modify settings, and install additional dependencies. For a list of available pre-built containers, refer to the following GitHub repo. In this example, we show how to package a pre-built PyTorch container that supports Graviton instances, extending the SageMaker PyTorch container, with a Python example that works with the DeepSeek distilled model.

Firstly, let's review how SageMaker AI runs your Docker container. Typically, you specify a program (such as a script) as an ENTRYPOINT in the Dockerfile; that program will run at startup and decide what to do. The original ENTRYPOINT specified within the SageMaker PyTorch container is listed in the GitHub repo. To learn how to extend our pre-built container for model training, refer to Extend a Pre-built Container. In this example, we only use the inference container.

Running your container during hosting

Hosting has a very different model than training because hosting is responding to inference requests that come in through HTTP. At the time of writing, the SageMaker PyTorch containers use TorchServe to provide robust and scalable serving of inference requests, as illustrated in the following diagram.

SageMaker uses two URLs in the container:

  • /ping receives GET requests from the infrastructure. Your program returns 200 if the container is up and accepting requests.
  • /invocations is the endpoint that receives client inference POST requests. The format of the request and the response is up to the algorithm. If the client supplied ContentType and Accept headers, these are passed in as well.

The container has the model files in the same place that they were written to during training:

/opt/ml
`-- model
    `-- <model files>

Custom files available to build the container used in this example

The container directory has all the components you need to extend the SageMaker PyTorch container to use as a sample algorithm:

.
|-- Dockerfile
|-- build_and_push.sh
`-- code
    `-- inference.py
    `-- requirements.txt

Let’s discuss each of these in turn:

  • Dockerfile describes how to build your Docker container image for inference.
  • build_and_push.sh is a script that uses the Dockerfile to build your container image and then pushes it to Amazon Elastic Container Registry (Amazon ECR). We invoke the commands directly later in this notebook, but you can copy and run the script for your own algorithms. To build a Graviton compatible Docker image, we launch an ARM64 architecture-based AWS CodeBuild environment, build the Docker image from the Dockerfile, and push the Docker image to the ECR repo. Refer to the script for more details.
  • code is the directory that contains our user code to be invoked.

In this application, we install or update a few libraries for running Llama.cpp in Python. We put the following files in the container:

  • inference.py is the program that implements our inference code (used only for the inference container)
  • requirements.txt is the text file that contains additional Python packages that will be installed during deployment time

The Dockerfile describes the image that we want to build. We start from the SageMaker PyTorch image as the base inference image. The SageMaker PyTorch ECR image that supports Graviton in this case would be:

FROM 763104351884.dkr.ecr.{region}.amazonaws.com/pytorch-inference-arm64:2.5.1-cpu-py311-ubuntu22.04-sagemaker

Next, we install the required additional libraries and add the code that implements our specific algorithm to the container, and set up the right environment for it to run under. We recommend configuring the following optimizations for Graviton in the Dockerfile and the inference code for better performance:

  • In the Dockerfile, add compile flags like -mcpu=native -fopenmp when installing the llama.cpp Python package. The combination of these flags can lead to code optimized for the specific ARM architecture of Graviton and parallel execution that takes full advantage of the multi-core nature of Graviton processors (see the sketch following this list).
  • Set n_threads to the number of vCPUs explicitly in the inference code to use all cores (vCPUs) on Graviton.
  • Use quantized q4_0 models, which minimize accuracy loss while aligning well with CPU architectures, improving CPU inference performance by reducing memory footprint and enhancing cache utilization. For information on how to quantize models, refer to the llama.cpp README.
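
The following sketch illustrates the first two recommendations. The Dockerfile line and model file name are illustrative; CMAKE_ARGS is the environment variable that llama-cpp-python uses to pass flags to its CMake build:

# In the Dockerfile, compile llama-cpp-python with Graviton-friendly flags
# (illustrative; adjust the flags to your base image and toolchain):
#
#   RUN CMAKE_ARGS="-DCMAKE_C_FLAGS='-mcpu=native -fopenmp' \
#       -DCMAKE_CXX_FLAGS='-mcpu=native -fopenmp'" pip install llama-cpp-python

# In the inference code, pin n_threads to the vCPU count:
import os
from llama_cpp import Llama

llm = Llama(
    model_path="/opt/ml/model/model-q4_0.gguf",  # illustrative file name
    n_threads=os.cpu_count(),  # use all cores (vCPUs) on the Graviton instance
)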

The build_and_push.sh script describes how to automate the setup of a CodeBuild project specifically designed for building Docker images on ARM64 architecture. It sets up essential configuration variables; creates necessary AWS Identity and Access Management (IAM) roles with appropriate permissions for Amazon CloudWatch Logs, Amazon Simple Storage Service (Amazon S3), and Amazon ECR access; and establishes a CodeBuild project using an ARM-based container environment. The script includes functions to check for project existence and wait for project readiness, while configuring the build environment with required variables and permissions for building and pushing Docker images, particularly for the llama.cpp inference code.

Prepare your model and inference code

Given the use of a pre-built SageMaker PyTorch container, we can simply write an inference script that defines the following functions to handle input data deserialization, model loading, and prediction:

  • model_fn() loads the model from `/opt/ml/model` or from the model_dir parameter passed to the function, which points to the directory where the model files are saved
  • input_fn() is used to format the data received from a request made to the endpoint
  • predict_fn() calls the output of model_fn() to run inference on the output of input_fn()
  • output_fn() optionally serializes predictions from predict_fn to the format that can be transferred back through HTTP packages, such as JSON
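
A condensed sketch of such an inference.py is shown below. It assumes the llama-cpp-python backend described earlier; the generation parameters are illustrative rather than the exact code from the repo:

import json
import os
from llama_cpp import Llama

def model_fn(model_dir):
    # Load the GGUF file named by the MODEL_FILE_GGUF environment variable,
    # which is set on the model object at deployment time
    model_file = os.environ["MODEL_FILE_GGUF"]
    return Llama(
        model_path=os.path.join(model_dir, model_file),
        n_threads=os.cpu_count(),
    )

def input_fn(request_body, content_type="application/json"):
    # Deserialize the JSON request payload
    return json.loads(request_body)

def predict_fn(data, model):
    # Run generation; the max_tokens default is illustrative
    return model(data["prompt"], max_tokens=data.get("max_tokens", 256))

def output_fn(prediction, accept="application/json"):
    # Serialize the llama.cpp completion dict back to JSON
    return json.dumps(prediction)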

Normally, you would compress model files into a TAR file; however, this can cause startup time to take longer due to having to download and untar large files. To improve startup times, SageMaker AI supports the use of uncompressed files, which removes the need to untar large files. In this example, we upload all the files to an S3 prefix and then pass the location into the model with "CompressionType": "None".

Create a SageMaker model and deploy to an endpoint with a Graviton instance

Now we can use the PyTorchModel class provided by SageMaker Python SDK to create a PyTorch SageMaker model that can be deployed to a SageMaker endpoint:

pytorch_model = PyTorchModel(
    model_data={
        "S3DataSource": {
            "S3Uri": model_path,
            "S3DataType": "S3Prefix",
            "CompressionType": "None",
        }
    },
    role=role,
    env={'MODEL_FILE_GGUF': file_name},
    image_uri=f"{sagemaker_session.account_id()}.dkr.ecr.{region}.amazonaws.com/llama-cpp-python:latest",
    model_server_workers=2
)

predictor = pytorch_model.deploy(instance_type='ml.c7g.12xlarge', initial_instance_count=1)

TorchServe runs multiple workers on the container for inference, where each worker hosts a copy of the model. model_server_workers controls the number of workers that TorchServe will run by configuring the 'SAGEMAKER_MODEL_SERVER_WORKERS' environment variable. Because each worker keeps its own copy of the model in memory, we recommend using a small number of model server workers.

Then we can invoke the endpoint either with the predictor object returned by the deploy function or with the low-level Boto3 API, as follows:

client = boto3.client('sagemaker-runtime')

response = client.invoke_endpoint(
    EndpointName=endpoint_name,
    ContentType="application/json",
    Body=json.dumps(prompt)
)
print(response['Body'].read().decode("utf-8"))

Performance optimization discussion

When you’re happy with a specific model, or a quantized version of it, for your use case, you can start tuning your compute capacity to serve your users at scale. When running LLM inference, we look at two main metrics to evaluate performance: latency and throughput. Tools like LLMPerf enable measuring these metrics on SageMaker AI endpoints.

  • Latency – Represents the per-user experience by measuring the time needed to process a user request or prompt
  • Throughput – Represents the overall token throughput, measured in tokens per second, aggregated across user requests

When serving users in parallel, batching those parallel requests together can improve throughput and increase compute utilization by moving the multiple inputs together with the model weights from the host memory to the CPU in order to generate the output tokens. Model serving backends like vLLM and Llama.cpp support continuous batching, which automatically adds new requests to the existing batch, replacing old requests that finished their token generation phases. However, configuring higher batch sizes comes at the expense of per-user latency, so you should tune the batch size for the best latency-throughput combination on the ML instance you're using on SageMaker AI. In addition to batching, using prompt or prefix caching to reuse precomputed attention matrices in similar subsequent requests can further reduce latency.

When you find the optimal batch size for your use case, you can tune your endpoint’s auto scaling policy to serve your users at scale with an endpoint backed by multiple CPU-based ML instances that scales according to the application load. Say you can successfully serve 10 users in parallel with one ML instance. You can then scale out by increasing the instance count until you reach the number needed to serve your target user base—for example, you would need 10 instances to serve 100 users in parallel.
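One way to configure such a policy is target tracking on the variant’s invocations-per-instance metric. The following sketch uses hypothetical endpoint and variant names, and the target value counts invocations per minute per instance, so derive it from your own load tests rather than directly from the concurrency figure above:

import boto3

aas = boto3.client("application-autoscaling")
resource_id = "endpoint/llama-cpp-endpoint/variant/AllTraffic"  # hypothetical names

# Allow the variant to scale between 1 and 10 instances
aas.register_scalable_target(
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    MinCapacity=1,
    MaxCapacity=10,
)

# Track average invocations per minute per instance against a target value
aas.put_scaling_policy(
    PolicyName="llm-invocations-target-tracking",
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
        },
        "TargetValue": 10.0,  # assumption: tune this from your load testing
        "ScaleInCooldown": 300,
        "ScaleOutCooldown": 60,
    },
)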

Clean up

To avoid unwanted charges, clean up the resources you created as part of this solution if you no longer need it.

Conclusion

SageMaker AI with Graviton processors offers a compelling solution for organizations looking to deploy AI capabilities cost-effectively. By using CPU-based inference with quantized models, this approach delivers up to 50% cost savings compared to traditional deployments while maintaining robust performance for many real-world applications. The combination of simplified operations through the fully managed SageMaker infrastructure, flexible auto scaling with zero-cost downtime, and enhanced deployment speed with container caching technology makes it an ideal platform for production AI workloads.

To get started, explore our sample notebooks on GitHub and reference documentation to evaluate whether CPU-based inference suits your use case. You can also refer to the AWS Graviton Technical Guide, which provides the list of optimized libraries and best practices that can help you achieve cost benefits with Graviton instances across different workloads.


About the Authors

Vincent Wang is an Efficient Compute Specialist Solutions Architect at AWS based in Sydney, Australia. He helps customers optimize their cloud infrastructure by leveraging AWS’s silicon innovations, including AWS Graviton processors and AWS Neuron technology. Vincent’s expertise lies in developing AI/ML applications that harness the power of open-source software combined with AWS’s specialized AI chips, enabling organizations to achieve better performance and cost-efficiency in their cloud deployments.

Andrew Smith is a Cloud Support Engineer in the SageMaker, Vision & Other team at AWS, based in Sydney, Australia. He supports customers using many AI/ML services on AWS with expertise in working with Amazon SageMaker. Outside of work, he enjoys spending time with friends and family as well as learning about different technologies.

Melanie Li, PhD, is a Senior Generative AI Specialist Solutions Architect at AWS based in Sydney, Australia, where her focus is on working with customers to build solutions leveraging state-of-the-art AI and machine learning tools. She has been actively involved in multiple Generative AI initiatives across APJ, harnessing the power of Large Language Models (LLMs). Prior to joining AWS, Dr. Li held data science roles in the financial and retail industries.

Oussama Maxime Kandakji is a Senior AI/ML Solutions Architect at AWS focusing on AI Inference and Agents. He works with companies of all sizes on solving  business and performance challenges in AI and Machine Learning workloads. He enjoys contributing to open source and working with data.

Romain Legret is a Senior Efficient Compute Specialist Solutions Architect at AWS. Romain promotes the benefits of AWS Graviton, EC2 Spot, Karpenter, or Auto-Scaling while helping French customers in their adoption journey. “Always try to achieve more with less” is his motto !

Read More

Impel enhances automotive dealership customer experience with fine-tuned LLMs on Amazon SageMaker


This post is co-written with Tatia Tsmindashvili, Ana Kolkhidashvili, Guram Dentoshvili, Dachi Choladze from Impel.

Impel transforms automotive retail through an AI-powered customer lifecycle management solution that drives dealership operations and customer interactions. Their core product, Sales AI, provides all-day personalized customer engagement, handling vehicle-specific questions and automotive trade-in and financing inquiries. By replacing their existing third-party large language model (LLM) with a fine-tuned Meta Llama model deployed on Amazon SageMaker AI, Impel achieved 20% improved accuracy and greater cost controls. The implementation used the comprehensive feature set of Amazon SageMaker, including model training, Activation-Aware Weight Quantization (AWQ), and Large Model Inference (LMI) containers. This domain-specific approach not only improved output quality but also enhanced security and reduced operational overhead compared to general-purpose LLMs.

In this post, we share how Impel enhances the automotive dealership customer experience with fine-tuned LLMs on SageMaker.

Impel’s Sales AI

Impel optimizes how automotive retailers connect with customers by delivering personalized experiences at every touchpoint, from initial research to purchase, service, and repeat business, acting as a digital concierge for vehicle owners while giving retailers personalization capabilities for customer interactions. Sales AI uses generative AI to provide instant responses around the clock to prospective customers through email and text. This sustained engagement during the early stages of a customer’s car buying journey leads to showroom appointments or direct connections with sales teams. Sales AI has three core features to provide this consistent customer engagement:

  • Summarization – Summarizes past customer engagements to derive customer intent
  • Follow-up generation – Provides consistent follow-up to engaged customers to help prevent stalled customer purchasing journeys
  • Response personalization – Personalizes responses to align with retailer messaging and customer’s purchasing specifications

Two key factors drove Impel to transition from their existing LLM provider: the need for model customization and cost optimization at scale. Their previous solution’s per-token pricing model became cost-prohibitive as transaction volumes grew, and limitations on fine-tuning prevented them from fully using their proprietary data for model improvement. By deploying a fine-tuned Meta Llama model on SageMaker, Impel achieved the following:

  • Cost predictability through hosted pricing, mitigating per-token charges
  • Greater control of model training and customization, leading to 20% improvement across core features
  • Secure processing of proprietary data within their AWS account
  • Automatic scaling to meet the spike in inference demand

Solution overview

Impel chose SageMaker AI, a fully managed cloud service for building, training, and deploying machine learning (ML) models using AWS infrastructure, tools, and workflows, to fine-tune a Meta Llama model for Sales AI. Meta Llama is a powerful model, well-suited for industry-specific tasks due to its strong instruction-following capabilities, support for extended context windows, and efficient handling of domain knowledge.

Impel used SageMaker LMI containers to deploy LLM inference on SageMaker endpoints. These purpose-built Docker containers offer optimized performance for models like Meta Llama with support for LoRA fine-tuned models and AWQ. Impel used LoRA fine-tuning, an efficient and cost-effective technique to adapt LLMs for specialized applications, through Amazon SageMaker Studio notebooks running on ml.p4de.24xlarge instances. This managed environment simplified the development process, enabling Impel’s team to seamlessly integrate popular open source tools like PyTorch and torchtune for model training. For model optimization, Impel applied AWQ techniques to reduce model size and improve inference performance.
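For a rough sense of what a LoRA configuration involves, the following sketch uses the Hugging Face peft library rather than Impel’s torchtune pipeline, which isn’t public; the model ID and hyperparameters are illustrative assumptions, and the base model is gated on the Hugging Face Hub:

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Assumes you have access to the gated Meta Llama weights
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B")

lora = LoraConfig(
    r=16,                                 # low-rank dimension: the main capacity/cost knob
    lora_alpha=32,                        # scaling factor applied to the LoRA update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # typically well under 1% of the base weights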

In production, Impel deployed inference endpoints on ml.g6e.12xlarge instances, powered by four NVIDIA GPUs and high memory capacity, suitable for serving large models like Meta Llama efficiently. Impel used the SageMaker built-in automatic scaling feature to automatically scale serving containers based on concurrent requests, which helped meet variable production traffic demands while optimizing for cost.

The following diagram illustrates the solution architecture, showcasing model fine-tuning and customer inference.

AWS ML deployment architecture showing how engineers use SageMaker to serve fine-tuned models to customers via APIs

Impel’s Sales AI reference architecture.

Impel’s R&D team partnered closely with various AWS teams, including its Account team, GenAI strategy team, and SageMaker service team. This virtual team collaborated over multiple sprints leading up to the fine-tuned Sales AI launch date to review model evaluations, benchmark SageMaker performance, optimize scaling strategies, and identify the optimal SageMaker instances. This partnership encompassed technical sessions, strategic alignment meetings, and cost and operational discussions for post-implementation. The tight collaboration between Impel and AWS was instrumental in realizing the full potential of Impel’s fine-tuned model hosted on SageMaker AI.

Fine-tuned model evaluation process

Impel’s transition to its fine-tuned Meta Llama model delivered improvements across key performance metrics with noticeable improvements in understanding automotive-specific terminology and generating personalized responses. Structured human evaluations revealed enhancements in critical customer interaction areas: personalized replies improved from 73% to 86% accuracy, conversation summarization increased from 70% to 83%, and follow-up message generation showed the most significant gain, jumping from 59% to 92% accuracy. The following screenshot shows how customers interact with Sales AI. The model evaluation process included Impel’s R&D team grading various use cases served by the incumbent LLM provider and Impel’s fine-tuned models.

Customer service interaction showing automated dealership response offering appointment scheduling for Toyota Highlander XLE

Example of a customer interaction with Sales AI.

In addition to output quality, Impel measured latency and throughput to validate the model’s production readiness. Using awscurl for SigV4-signed HTTP requests, the team confirmed these improvements in real-world performance metrics, ensuring optimal customer experience in production environments.

Using domain-specific models for better performance

Impel’s evolution of Sales AI progressed from a general-purpose LLM to a domain-specific, fine-tuned model. Using anonymized customer interaction data, Impel fine-tuned a publicly available foundation model, resulting in several key improvements. The new model exhibited a 20% increase in accuracy across core features, showcasing enhanced automotive industry comprehension and more efficient context window utilization. By transitioning to this approach, Impel achieved three primary benefits:

  • Enhanced data security through in-house processing within their AWS accounts
  • Reduced reliance on external APIs and third-party providers
  • Greater operational control for scaling and customization

These advancements, coupled with the significant output quality improvement, validated Impel’s strategic shift towards a domain-specific AI model for Sales AI.

Expanding AI innovation in automotive retail

Impel’s success deploying fine-tuned models on SageMaker has established a foundation for extending its AI capabilities to support a broader range of use cases tailored to the automotive industry. Impel is planning to transition to in-house, domain-specific models to extend the benefits of improved accuracy and performance throughout their Customer Engagement Product suite. Looking ahead, Impel’s R&D team is advancing their AI capabilities by incorporating Retrieval Augmented Generation (RAG) workflows, advanced function calling, and agentic workflows. These innovations can help deliver adaptive, context-aware systems designed to interact, reason, and act across complex automotive retail tasks.

Conclusion

In this post, we discussed how Impel has enhanced the automotive dealership customer experience with fine-tuned LLMs on SageMaker.

For organizations considering similar transitions to fine-tuned models, Impel’s experience demonstrates how working with AWS can help achieve both accuracy improvements and model customization opportunities while building long-term AI capabilities tailored to specific industry needs. Connect with your account team or visit Amazon SageMaker AI to learn how SageMaker can help you deploy and manage fine-tuned models.


About the Authors

Nicholas Scozzafava is a Senior Solutions Architect at AWS, focused on startup customers. Prior to his current role, he helped enterprise customers navigate their cloud journeys. He is passionate about cloud infrastructure, automation, DevOps, and helping customers build and scale on AWS.

Sam Sudakoff is a Senior Account Manager at AWS, focused on strategic startup ISVs. Sam specializes in technology landscapes, AI/ML, and AWS solutions. Sam’s passion lies in scaling startups and driving SaaS and AI transformations. Notably, his work with AWS’s top startup ISVs has focused on building strategic partnerships and implementing go-to-market initiatives that bridge enterprise technology with innovative startup solutions, while maintaining strict adherence with data security and privacy requirements.

Vivek Gangasani is a Lead Specialist Solutions Architect for Inference at AWS. He helps emerging generative AI companies build innovative solutions using AWS services and accelerated compute. Currently, he is focused on developing strategies for fine-tuning and optimizing the inference performance of large language models. In his free time, Vivek enjoys hiking, watching movies, and trying different cuisines.

Dmitry Soldatkin is a Senior AI/ML Solutions Architect at AWS, helping customers design and build AI/ML solutions. Dmitry’s work covers a wide range of ML use cases, with a primary interest in generative AI, deep learning, and scaling ML across the enterprise. He has helped companies in many industries, including insurance, financial services, utilities, and telecommunications. Prior to joining AWS, Dmitry was an architect, developer, and technology leader in data analytics and machine learning fields in the financial services industry.

Tatia Tsmindashvili is a Senior Deep Learning Researcher at Impel with an MSc in Biomedical Engineering and Medical Informatics. She has over 5 years of experience in AI, with interests spanning LLM agents, simulations, and neuroscience. You can find her on LinkedIn.

Ana Kolkhidashvili is the Director of R&D at Impel, where she leads AI initiatives focused on large language models and automated conversation systems. She has over 8 years of experience in AI, specializing in large language models, automated conversation systems, and NLP. You can find her on LinkedIn.

Guram Dentoshvili is the Director of Engineering and R&D at Impel, where he leads the development of scalable AI solutions and drives innovation across the company’s conversational AI products. He began his career at Pulsar AI as a Machine Learning Engineer and played a key role in building AI technologies tailored to the automotive industry. You can find him on LinkedIn.

Dachi Choladze is the Chief Innovation Officer at Impel, where he leads initiatives in AI strategy, innovation, and product development. He has over 10 years of experience in technology entrepreneurship and artificial intelligence. Dachi is the co-founder of Pulsar AI, Georgia’s first globally successful AI startup, which later merged with Impel. You can find him on LinkedIn.

Deepam Mishra is a Sr Advisor to Startups at AWS and advises startups on ML, Generative AI, and AI Safety and Responsibility. Before joining AWS, Deepam co-founded and led an AI business at Microsoft Corporation and Wipro Technologies. Deepam has been a serial entrepreneur and investor, having founded 4 AI/ML startups. Deepam is based in the NYC metro area and enjoys meeting AI founders.

Read More

How climate tech startups are building foundation models with Amazon SageMaker HyperPod


Climate tech startups are companies that use technology and innovation to address the climate crisis, with a primary focus on either reducing greenhouse gas emissions or helping society adapt to climate change impacts. Their unifying mission is to create scalable solutions that accelerate the transition to a sustainable, low-carbon future. Solutions to the climate crisis are ever more important as climate-driven extreme weather disasters increase globally. In 2024, climate disasters caused more than $417B in damages globally, and there was no slowing down in 2025, with the LA wildfires causing more than $135B in damages in the first month of the year alone. Climate tech startups are at the forefront of building impactful solutions to the climate crisis, and they’re using generative AI to build as quickly as possible.

In this post, we show how climate tech startups are developing foundation models (FMs) that use extensive environmental datasets to tackle issues such as carbon capture, carbon-negative fuels, new materials design for microplastics destruction, and ecosystem preservation. These specialized models require advanced computational capabilities to process and analyze vast amounts of data effectively.

Amazon Web Services (AWS) provides the essential compute infrastructure to support these endeavors, offering scalable and powerful resources through Amazon SageMaker HyperPod. SageMaker HyperPod is a purpose-built infrastructure service that automates the management of large-scale AI training clusters so developers can efficiently build and train complex models such as large language models (LLMs) by automatically handling cluster provisioning, monitoring, and fault tolerance across thousands of GPUs. With SageMaker HyperPod, startups can train complex AI models on diverse environmental datasets, including satellite imagery and atmospheric measurements, with enhanced speed and efficiency. This computational backbone is vital for startups striving to create solutions that are not only innovative but also scalable and impactful.

The increasing complexity of environmental data demands robust data infrastructure and sophisticated model architectures. Integrating multimodal data, employing specialized attention mechanisms for spatial-temporal data, and using reinforcement learning are crucial for building effective climate-focused models. SageMaker HyperPod optimized GPU clustering and scalable resources help startups save time and money while meeting advanced technical requirements, which means they can focus on innovation. As climate technology demands grow, these capabilities allow startups to develop transformative environmental solutions using Amazon SageMaker HyperPod.

Trends among climate tech startups building with generative AI

Climate tech startups’ adoption of generative AI is evolving rapidly. Starting in early 2023, we saw the first wave of climate tech startups adopting generative AI to optimize operations. For example, startups such as BrainBox AI and Pendulum used Amazon Bedrock and fine-tuned existing LLMs on AWS Trainium using Amazon SageMaker to more rapidly onboard new customers through automated document ingestion and data extraction. Midway through 2023, we saw the next wave of climate tech startups building sophisticated intelligent assistants by fine-tuning existing LLMs for specific use cases. For example, NET2GRID used Amazon SageMaker to fine-tune and deploy LLMs based on Llama 7B to build EnergyAI, an assistant that provides quick, personalized responses to utility customers’ energy-related questions.

Over the last 6 months, we’ve seen a flurry of climate tech startups building FMs that address specific climate and environmental challenges. Unlike language-based models, these startups are building models based on real-world data, like weather or geospatial earth data. Whereas LLMs such as Anthropic’s Claude or Amazon Nova have hundreds of billions of parameters, climate tech startups are building smaller models with just a few billion parameters. This means these models are faster and less expensive to train. We’re seeing some emerging trends in use cases or climate challenges that startups are addressing by building FMs. Here are the top use cases, in order of popularity:

  1. Weather – Trained on historic weather data, these models offer short-term and long-term, hyperaccurate, hyperlocal weather and climate predictions, some focusing on specific weather elements like wind, heat, or sun.
  2. Sustainable material discovery – Trained on scientific data, these models invent new sustainable materials that solve specific problems, like more efficient direct air capture sorbents to reduce the cost of carbon removal or molecules to destroy microplastics in the environment.
  3. Natural ecosystems – Trained on a mix of data from satellites, lidar, and on-the-ground sensors, these models offer insights into natural ecosystems, biodiversity, and wildfire predictions.
  4. Geological modeling – Trained on geological data, these models help determine the best locations for geothermal or mining operations to reduce waste and save money.

To offer a more concrete look at these trends, the following is a deep dive into how climate tech startups are building FMs on AWS.

Orbital Materials: Foundation models for sustainable material discovery

Orbital Materials has built a proprietary AI platform to design, synthesize, and test new sustainable materials. Developing new advanced materials has traditionally been a slow process of trial and error in the lab. Orbital replaces this with generative AI design, radically speeding up materials discovery and new technology commercialization. They’ve released a generative AI model called “Orb” that suggests new material design, which the team then tests and perfects in the lab.

Orb is a diffusion model that Orbital Materials trained from scratch using SageMaker HyperPod. The first product the startup designed with Orb is a sorbent for carbon capture in direct air capture facilities. Since establishing its lab in the first quarter of 2024, Orbital has achieved a tenfold improvement in its material’s performance using its AI platform—an order of magnitude faster than traditional development and breaking new ground in carbon removal efficacy. By improving the performance of the materials, the company can help drive down the costs of carbon removal, which can enable rapid scale-up. They chose to use SageMaker HyperPod because they “like the one-stop shop for control and monitoring,” explained Jonathan Godwin, CEO of Orbital Materials. Orbital was able to reduce the total cost of ownership (TCO) of their GPU cluster by using the Amazon SageMaker HyperPod deep health checks to stress test GPU instances and swap out faulty nodes. Moreover, Orbital can use SageMaker HyperPod to automatically swap out failing nodes and restart model training from the last saved checkpoint, freeing up time for the Orbital Materials team. The SageMaker HyperPod monitoring agent continually monitors and detects potential issues, including memory exhaustion, disk failures, GPU anomalies, kernel deadlocks, container runtime issues, and out-of-memory (OOM) crashes. Based on the underlying issue, the monitoring agent either replaces or reboots the node.

With the launch of SageMaker HyperPod on Amazon Elastic Kubernetes Service (Amazon EKS), Orbital can set up a unified control plane consisting of both CPU-based workloads and GPU-accelerated tasks within the same Kubernetes cluster. This architectural approach eliminates the traditional complexity of managing separate clusters for different compute resources, significantly reducing operational overhead. Orbital can also monitor the health status of SageMaker HyperPod nodes through Amazon CloudWatch Container Insights with enhanced observability for Amazon EKS. Amazon CloudWatch Container Insights collects, aggregates, and summarizes metrics and logs from containerized applications and microservices, providing detailed insights into performance, health, and status metrics for CPU, GPU, Trainium, or Elastic Fabric Adapter (EFA) and file system up to the container level.

AWS and Orbital Materials have established a deep partnership that enables fly-wheel growth. The companies have entered a multiyear partnership, in which Orbital Materials builds its FMs with SageMaker HyperPod and other AWS services. In return, Orbital Materials is using AI to develop new data center decarbonization and efficiency technologies. To further spin the fly-wheel, Orbital will be making its market-leading open source AI model for simulating advanced materials, Orb, generally available to AWS customers through Amazon SageMaker JumpStart and AWS Marketplace. This marks the first AI-for-materials model available on AWS platforms. With Orb, AWS customers working on advanced materials and technologies such as semiconductors, batteries, and electronics can access market-leading accelerated research and development (R&D) within a secure and unified cloud environment.

The architectural advantages of SageMaker HyperPod on Amazon EKS are demonstrated in the following diagram. The diagram illustrates how Orbital can establish a unified control plane that manages both CPU-based workloads and GPU-accelerated tasks within a single Kubernetes cluster. This streamlined architecture eliminates the traditional complexity of managing separate clusters for different compute resources, providing a more efficient and integrated approach to resource management. The visualization shows how this consolidated infrastructure enables Orbital to seamlessly orchestrate their diverse computational needs through a single control interface.

Hum.AI: Foundation models for earth observation

Hum.AI is building generative AI FMs that provide general intelligence of the natural world. Customers can use the platform to track and predict ecosystems and biodiversity to understand business impact and better protect the environment. For example, they work with coastal communities who use the platform and insights to restore coastal ecosystems and improve biodiversity.

Hum.AI’s foundation model looks at natural world data and learns to represent it visually. They’re training on 50 years of historic data collected by satellites, which amounts to thousands of petabytes of data. To accommodate processing this massive dataset, they chose SageMaker HyperPod for its scalable infrastructure. Through their innovative model architecture, the company achieved the ability to see underwater from space for the very first time, overcoming the historical challenges posed by water reflections.

Hum.AI’s FM architecture employs a variational autoencoder (VAE) and generative adversarial network (GAN) hybrid design, specifically optimized for satellite imagery analysis. It’s an encoder-decoder model, where the encoder transforms satellite data into a learned latent space, while the decoder reconstructs the imagery (after being processed in the latent space), maintaining consistency across different satellite sources. The discriminator network provides both adversarial training signals and learned feature-wise reconstruction metrics. This approach helps preserve important ecosystem details that would otherwise be lost with traditional pixel-based comparisons, particularly for underwater environments, where water reflections typically interfere with visibility.
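Hum.AI’s exact architecture isn’t public, but a minimal sketch of the VAE-GAN pattern described above, where the generator is trained with a feature-wise reconstruction loss taken from the discriminator rather than a pixel-wise loss, might look like the following (all layer sizes are illustrative assumptions):

import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
        )
        self.mu = nn.Linear(64 * 16 * 16, latent_dim)
        self.logvar = nn.Linear(64 * 16 * 16, latent_dim)

    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.logvar(h)

class Decoder(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.body = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),  # 16x16 -> 32x32
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),   # 32x32 -> 64x64
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.body(h)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Flatten(),
        )
        self.logit = nn.Linear(64 * 16 * 16, 1)

    def forward(self, x):
        f = self.features(x)
        return self.logit(f), f  # adversarial logit plus learned features

def generator_loss(enc, dec, disc, x):
    mu, logvar = enc(x)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
    x_hat = dec(z)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    # Feature-wise reconstruction: compare discriminator features, not pixels
    _, f_real = disc(x)
    logit_fake, f_fake = disc(x_hat)
    recon = F.mse_loss(f_fake, f_real.detach())
    adv = F.binary_cross_entropy_with_logits(logit_fake, torch.ones_like(logit_fake))
    return recon + kl + adv  # the discriminator is trained with its own objective

x = torch.randn(4, 3, 64, 64)  # stand-in for a batch of satellite tiles
loss = generator_loss(Encoder(), Decoder(), Discriminator(), x)
loss.backward()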

Using SageMaker HyperPod to train such a complex model enables Hum.AI to efficiently process their personally curated SeeFar dataset through distributed training across multiple GPU-based instances. The model simultaneously optimizes both VAE and GAN objectives across GPUs. This, paired with the SageMaker HyperPod auto-resume feature that automatically resumes a training run from the latest checkpoint, provides training continuity, even through node failures.

Hum.AI also used the SageMaker HyperPod out-of-the-box comprehensive observability features through Amazon Managed Service for Prometheus and Amazon Managed Service for Grafana for metric tracking. For their distributed training needs, they used dashboards to monitor cluster performance, GPU metrics, network traffic, and storage operations. This extensive monitoring infrastructure enabled Hum.AI to optimize their training process and maintain high resource utilization throughout their model development.

“Our decision to use SageMaker HyperPod was simple; it was the only service out there where you can continue training through failure. We were able to train larger models faster by taking advantage of the large-scale clusters and redundancy offered by SageMaker HyperPod. We were able to execute experiments faster and iterate models at speeds that were impossible prior to SageMaker HyperPod. SageMaker HyperPod took all of the worry out of large-scale training failures. They’ve built the infrastructure to hot swap GPUs if anything goes wrong, and it saves thousands in lost progress between checkpoints. The SageMaker HyperPod team personally helped us set up and execute large training rapidly and easily.”

– Kelly Zheng, CEO of Hum.AI.

Hum.AI’s innovative approach to model training is illustrated in the following figure. The diagram showcases how their model simultaneously optimizes both VAE and GAN objectives across multiple GPUs. This distributed training strategy is complemented by the SageMaker HyperPod auto-resume feature, which automatically restarts training runs from the latest checkpoint. Together, these capabilities provide continual and efficient training, even in the face of potential node failures. The image provides a visual representation of this robust training process, highlighting the seamless integration between Hum.AI’s model architecture and SageMaker HyperPod infrastructure support.

How to save time and money building with Amazon SageMaker HyperPod

Amazon SageMaker HyperPod removes the undifferentiated heavy lifting for climate tech startups building FMs, saving them time and money. For more information on how SageMaker HyperPod’s resiliency helps save costs while training, check out Reduce ML training costs with Amazon SageMaker HyperPod.

At its core is deep infrastructure control optimized for processing complex environmental data, featuring secure access to Amazon Elastic Compute Cloud (Amazon EC2) instances and seamless integration with orchestration tools such as Slurm and Amazon EKS. This infrastructure excels at handling multimodal environmental inputs, from satellite imagery to sensor network data, through distributed training across thousands of accelerators.

The intelligent resource management available in SageMaker HyperPod is particularly valuable for climate modeling, automatically governing task priorities and resource allocation while reducing operational overhead by up to 40%. This efficiency is crucial for climate tech startups processing vast environmental datasets because the system maintains progress through checkpointing while making sure that critical climate modeling workloads receive necessary resources.

For climate tech innovators, the SageMaker HyperPod library of over 30 curated model training recipes accelerates development, allowing teams to begin training environmental models in minutes rather than weeks. The platform’s integration with Amazon EKS provides robust fault tolerance and high availability, essential for maintaining continual environmental monitoring and analysis.

SageMaker HyperPod flexible training plans are particularly beneficial for climate tech projects, allowing organizations to specify completion dates and resource requirements while automatically optimizing capacity for complex environmental data processing. The system’s ability to suggest alternative plans provides optimal resource utilization for computationally intensive climate modeling tasks. With support for next-generation AI accelerators such as the AWS Trainium chips and comprehensive monitoring tools, SageMaker HyperPod provides climate tech startups with a sustainable and efficient foundation for developing sophisticated environmental solutions. This infrastructure enables organizations to focus on their core mission of addressing climate challenges while maintaining operational efficiency and environmental responsibility.

Practices for sustainable computing

Climate tech companies are especially aware of the importance of sustainable computing practices. One key approach is the meticulous monitoring and optimization of energy consumption during computational processes. By adopting efficient training strategies, such as reducing the number of unnecessary training iterations and employing energy-efficient algorithms, startups can significantly lower their carbon footprint.

Additionally, the integration of renewable energy sources to power data centers plays a crucial role in minimizing environmental impact. AWS is determined to make the cloud the cleanest and the most energy-efficient way to run all our customers’ infrastructure and businesses. We have made significant progress over the years. For example, Amazon has been the largest corporate purchaser of renewable energy in the world every year since 2020. We’ve achieved our renewable energy goal to match all the electricity consumed across our operations—including our data centers—with 100% renewable energy, and we did this 7 years ahead of our original 2030 timeline.

Companies are also turning to carbon-aware computing principles, which involve scheduling computational tasks to coincide with periods of low carbon intensity on the grid. This practice means that the energy used for computing has a lower environmental impact. Implementing these strategies not only aligns with broader sustainability goals but also promotes cost efficiency and resource conservation. As the demand for advanced computational capabilities grows, climate tech startups are becoming vigilant in their commitment to sustainable practices so that their innovations contribute positively to both technological progress and environmental stewardship.

Conclusion

Amazon SageMaker HyperPod is emerging as a crucial tool for climate tech startups in their quest to develop innovative solutions to pressing environmental challenges. By providing scalable, efficient, and cost-effective infrastructure for training complex multimodal and multi-model architectures, SageMaker HyperPod enables these companies to process vast amounts of environmental data and create sophisticated predictive models. From Orbital Materials’ sustainable material discovery to Hum.AI’s advanced earth observation capabilities, SageMaker HyperPod is powering breakthroughs that were previously out of reach. As climate change continues to pose urgent global challenges, the SageMaker HyperPod automated management of large-scale AI training clusters, coupled with its fault-tolerance and cost-optimization features, allows climate tech innovators to focus on their core mission rather than infrastructure management. By using SageMaker HyperPod, climate tech startups are not only building more efficient models—they’re accelerating the development of powerful new tools in our collective effort to address the global climate crisis.


About the authors

Ilan Gleiser is a Principal GenAI Specialist at Amazon Web Services (AWS) on the WWSO Frameworks team, focusing on developing scalable artificial general intelligence architectures and optimizing foundation model training and inference. With a rich background in AI and machine learning, Ilan has published over 30 blog posts and delivered more than 100 prototypes globally over the last 5 years. Ilan holds a master’s degree in mathematical economics.

Lisbeth Kaufman is the Head of Climate Tech BD, Startups and Venture Capital at Amazon Web Services (AWS). Her mission is to help the best climate tech startups succeed and reverse the global climate crisis. Her team has technical resources, go-to-market support, and connections to help climate tech startups overcome obstacles and scale. Lisbeth worked on climate policy as an energy/environment/agriculture policy advisor in the U.S. Senate. She has a BA from Yale and an MBA from NYU Stern, where she was a Dean’s Scholar. Lisbeth helps climate tech founders with product, growth, fundraising, and making strategic connections to teams at AWS and Amazon.

Aman Shanbhag is an Associate Specialist Solutions Architect on the ML Frameworks team at Amazon Web Services (AWS), where he helps customers and partners with deploying ML training and inference solutions at scale. Before joining AWS, Aman graduated from Rice University with degrees in computer science, mathematics, and entrepreneurship.

Rohit Talluri is a Generative AI GTM Specialist at Amazon Web Services (AWS). He is partnering with top generative AI model builders, strategic customers, key AI/ML partners, and AWS Service Teams to enable the next generation of artificial intelligence, machine learning, and accelerated computing on AWS. He was previously an Enterprise Solutions Architect and the Global Solutions Lead for AWS Mergers & Acquisitions Advisory.

Ankit Anand is a Senior Foundation Models Go-To-Market (GTM) Specialist at AWS. He partners with top generative AI model builders, strategic customers, and AWS Service Teams to enable the next generation of AI/ML workloads on AWS. Ankit’s experience includes product management expertise within the financial services industry for high-frequency/low-latency trading and business development for Amazon Alexa.

Read More

Supercharge your development with Claude Code and Amazon Bedrock prompt caching


Prompt caching in Amazon Bedrock is now generally available, delivering performance and cost benefits for agentic AI applications. Coding assistants that process large codebases represent an ideal use case for prompt caching.

In this post, we’ll explore how to combine Amazon Bedrock prompt caching with Claude Code—a coding agent released by Anthropic that is now generally available. This powerful combination transforms your development workflow by reducing inference latency for faster responses and lowering input token costs. You’ll discover how this makes AI-assisted coding not just more efficient, but also more economically viable for everyday development tasks.

What is Claude Code?

Claude Code

Claude Code is Anthropic’s AI coding assistant powered by Claude Sonnet 4. It operates directly in your terminal, in your favorite IDEs such as VS Code and JetBrains, and in the background with the Claude Code SDK, understanding your project context and taking actions without requiring you to manually manipulate and add generated code to a project. Unlike traditional coding assistants, Claude Code can:

  • Write code and fix bugs spanning multiple files across your codebase
  • Answer questions about your code’s architecture and logic
  • Execute and fix tests, linting, and other commands
  • Search through git history, resolve merge conflicts, and create commits and PRs
  • Operate all of your other command line tools, like AWS CLI, Terraform, and k8s

The most compelling aspect of Claude Code is how it integrates into your existing workflow. You simply point it to your project directory and interact with it using natural language commands. Claude Code also supports Model Context Protocol (MCP), allowing you to connect external tools and data sources directly to your terminal and customize its AI capabilities with your context.

To learn more, see Claude Code tutorials and Claude Code: Best practices for agentic coding.

Amazon Bedrock prompt caching for AI-assisted development

The prompt caching feature of Amazon Bedrock dramatically reduces both response times and costs when working with large context. Here’s how it works: When prompt caching is enabled, your agentic AI application (such as Claude Code) inserts cache checkpoint markers at specific points in your prompts. Amazon Bedrock then interprets these application-defined markers and creates cache checkpoints that save the entire model state after processing the preceding text. On subsequent requests, if your prompt reuses that same prefix, the model loads the cached state instead of recomputing.

In the context of Claude Code specifically, this means the application intelligently manages these cache points when processing your codebase, allowing Claude to “remember” previously analyzed code without incurring the full computational and financial cost of reprocessing it. When you ask multiple questions about the same code or iteratively refine solutions, Claude Code leverages these cache checkpoints to deliver faster responses while dramatically reducing token consumption and associated costs.
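Outside of Claude Code, you can exercise the same mechanism directly through the Amazon Bedrock Converse API by placing a cachePoint block after the stable prefix you want cached. The following minimal sketch reuses the Claude Sonnet 4 model ID shown later in this post; the file name is a stand-in, and the exact usage fields returned depend on the model:

import boto3

bedrock = boto3.client("bedrock-runtime")

# Large, stable context you expect to ask about repeatedly
codebase = open("my_module.py").read()

response = bedrock.converse(
    modelId="us.anthropic.claude-sonnet-4-20250514-v1:0",
    system=[
        {"text": "You are a coding assistant. Codebase:\n" + codebase},
        {"cachePoint": {"type": "default"}},  # everything above this marker is cached
    ],
    messages=[{"role": "user", "content": [{"text": "Summarize the entry point."}]}],
)
print(response["output"]["message"]["content"][0]["text"])
print(response["usage"])  # cache statistics appear here on subsequent calls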

To learn more, see documentation for Amazon Bedrock prompt caching.

Solution overview: Try Claude Code with Amazon Bedrock prompt caching

Prerequisites

Prompt caching is automatically turned on for supported models and AWS Regions.

Setting up Claude Code with Claude Sonnet 4 on Amazon Bedrock

After configuring AWS CLI with your credentials, follow these steps:

  1. In your terminal, execute the following commands:
    # Install Claude Code
    npm install -g @anthropic-ai/claude-code
    
    # Configure for Amazon Bedrock
    export CLAUDE_CODE_USE_BEDROCK=1
    export ANTHROPIC_MODEL='us.anthropic.claude-sonnet-4-20250514-v1:0'
    export ANTHROPIC_SMALL_FAST_MODEL='us.anthropic.claude-3-5-haiku-20241022-v1:0'
    
    # Launch Claude Code
    claude
  2. Verify that Claude Code is running by checking for the Welcome to Claude Code! message in your terminal.
    Terminal - Welcome to Claude Code

To learn more about how to configure Claude Code for Amazon Bedrock, see Connect to Amazon Bedrock.

Getting started with prompt caching

To get started, let’s experiment with a simple prompt.

  1. In Claude Code, execute the prompt:
    build a basic text-based calculator
  2. Review and respond to Claude Code’s requests:
    1. When prompted with questions like Do you want to create calculator.py? select 1. Yes to continue.
      Example question:

      Do you want to create calculator.py?
      
      1. Yes
      2. Yes, and don't ask again for this session (shift+tab)
      3. No, and tell Claude what to do differently (esc)
    2. Carefully review each request before approving to maintain security.
  3. After Claude Code generates the calculator application, it will display execution instructions such as:
    Run the calculator with: python3 calculator.py
  4. Test the application by executing the instructed command above. Then, follow the on-screen prompts to perform calculations.

Claude Code automatically enables prompt caching to optimize performance and costs. To monitor token usage and costs, use the /cost command. You will receive a detailed breakdown similar to this:

/cost 
  ⎿  Total cost:            $0.0827
  ⎿  Total duration (API):  26.3s
  ⎿  Total duration (wall): 42.3s
  ⎿  Total code changes:    62 lines added, 0 lines removed

This output provides valuable insights into your session’s resource consumption, including total cost, API processing time, wall clock time, and code modifications.

Comparing costs without prompt caching

To understand the benefits of prompt caching, let’s try the same prompt without prompt caching for comparison:

  1. In the terminal, exit Claude Code by pressing Ctrl+C.
  2. To create a new project directory, run the command:
    mkdir test-disable-prompt-caching; cd test-disable-prompt-caching 
  3. Disable prompt caching by setting an environment variable:
    export DISABLE_PROMPT_CACHING=1
  4. Execute claude to run Claude Code.
  5. Verify prompt caching is disabled by checking the terminal output. You should see Prompt caching: off under the Overrides (via env) section.
  6. Execute the prompt:
    build a basic text-based calculator
  7. After completion, execute /cost to view resource usage.

You will see a higher resource consumption compared to when prompt caching is enabled, even with a simple prompt:

/cost 
  ⎿  Total cost:            $0.1029
  ⎿  Total duration (API):  32s
  ⎿  Total duration (wall): 1m 17.5s
  ⎿  Total code changes:    57 lines added, 0 lines removed

Without prompt caching, each interaction incurs the full cost of processing your context.

Cleanup

To re-enable prompt caching, exit Claude Code and run unset DISABLE_PROMPT_CACHING before restarting Claude. Claude Code does not incur cost when you are not using it.

Prompt caching for complex codebases and efficient iteration

When working with complex codebases, prompt caching delivers significantly greater benefits than with simple prompts. For an illustrative example, consider the initial prompt: Develop a game similar to Pac-Man. This initial prompt generates the foundational project structure and files. As you refine the application with prompts such as Implement unique chase patterns for different ghosts, the coding agent must comprehend your entire codebase to be able to make targeted changes.

Without prompt caching, each iteration forces the model to reprocess thousands of tokens representing your code structure, class relationships, and existing implementations.

Prompt caching alleviates this redundancy by preserving your complex context, transforming your software development workflow with:

  • Dramatically reduced token costs for repeated interactions with the same files
  • Faster response times as Claude Code doesn’t need to reprocess your entire codebase
  • Efficient development cycles as you iterate without incurring full costs each time

Prompt caching with Model Context Protocol (MCP)

Model Context Protocol (MCP) transforms your coding experience by connecting coding agents to your specific tools and information sources. You can connect Claude Code to MCP servers that integrate with your file systems, databases, development tools, and other productivity tools. This transforms a generic coding assistant into a personalized assistant that can interact with your data and tools beyond your codebase and follow your organization’s best practices, accelerating your unique development processes and workflows.

When you build on AWS, you gain additional advantages by leveraging AWS open source MCP servers for code assistants that provide intelligent AWS documentation search, best-practice recommendations, and real-time cost visibility, analysis and insights – without leaving your software development workflow.

Amazon Bedrock prompt caching becomes essential when working with MCP, as it preserves complex context across multiple interactions. With MCP continuously enriching your prompts with external knowledge and tools, prompt caching alleviates the need to repeatedly process this expanded context, slashing costs by up to 90% and reducing latency by up to 85%. This optimization proves particularly valuable as your MCP servers deliver increasingly sophisticated context about your unique development environment, so you can rapidly iterate through complex coding challenges while maintaining relevant context for up to 5 minutes without performance penalties or additional costs.

Considerations when deploying Claude Code to your organization

With Claude Code now generally available, many customers are considering deployment options on AWS to take advantage of its coding capabilities. For deployments, consider your foundational architecture for security and governance:

Consider leveraging AWS IAM Identity Center, formerly AWS Single Sign-On (SSO), to centrally govern identity and access to Claude Code. This verifies that only authorized developers have access. Additionally, it allows developers to access resources with temporary, role-based credentials, alleviating the need for static access keys and enhancing security. Prior to opening Claude Code, make sure that you configure AWS CLI to use an IAM Identity Center profile by using aws configure sso --profile <PROFILE_NAME>. Then, log in using the created profile: aws sso login --profile <PROFILE_NAME>.

Consider implementing a generative AI gateway on AWS to track and attribute costs effectively across different teams or projects using inference profiles. For Claude Code to use a custom endpoint, configure the ANTHROPIC_BEDROCK_BASE_URL environment variable with the gateway endpoint. Note that the gateway should be a pass-through proxy; see the example implementation with LiteLLM. To learn more about AI gateway solutions, contact your AWS account team.

Consider automated configuration of default environment variables. This includes the environment variables outlined in this post, such as CLAUDE_CODE_USE_BEDROCK, ANTHROPIC_MODEL, and ANTHROPIC_SMALL_FAST_MODEL. This configures Claude Code to automatically connect to Amazon Bedrock, providing a consistent baseline for development across teams. Organizations can start by providing developers with self-service instructions.

Consider permissions, memory, and MCP servers for your organization. Security teams can configure managed permissions for what Claude Code is and is not allowed to do, which cannot be overwritten by local configuration. In addition, you can configure memory across all projects, which allows you to auto-add common bash commands, workflows, and style conventions to align with your organization’s preferences. This can be done by deploying your CLAUDE.md file into an enterprise directory /<enterprise root>/CLAUDE.md or the user’s home directory ~/.claude/CLAUDE.md. Finally, we recommend that one central team configures MCP servers and checks a .mcp.json configuration into the codebase so that all users benefit.

To learn more, see Claude Code team setup documentation or contact your AWS account team.

Conclusion

In this post, you learned how Amazon Bedrock prompt caching can significantly enhance AI applications, with Claude Code’s agentic AI assistant serving as a powerful demonstration. By leveraging prompt caching, you can process large codebases more efficiently, helping to dramatically reduce costs and response times. With this technology you can have faster, more natural interactions with your code, allowing you to iterate rapidly with generative AI. You also learned about Model Context Protocol (MCP), and how the seamless integration of external tools lets you customize your AI assistant with specific context like documentation and web resources. Whether you’re tackling complex debugging, refactoring legacy systems, or developing new features, the combination of Amazon Bedrock’s prompt caching and AI coding agents like Claude Code offers a more responsive, cost-effective, and intelligent approach to software development.

Amazon Bedrock prompt caching is generally available with Claude 4 Sonnet and Claude 3.5 Haiku. To learn more, see prompt caching and Amazon Bedrock.

Anthropic Claude Code is now generally available. To learn more, see Claude Code overview and contact your AWS account team for guidance on deployment.


About the Authors

Jonathan Evans is a Worldwide Solutions Architect for Generative AI at AWS, where he helps customers leverage cutting-edge AI technologies with Anthropic’s Claude models on Amazon Bedrock, to solve complex business challenges. With a background in AI/ML engineering and hands-on experience supporting machine learning workflows in the cloud, Jonathan is passionate about making advanced AI accessible and impactful for organizations of all sizes.

Daniel Wirjo is a Solutions Architect at AWS, focused on SaaS and AI startups. As a former startup CTO, he enjoys collaborating with founders and engineering leaders to drive growth and innovation on AWS. Outside of work, Daniel enjoys taking walks with a coffee in hand, appreciating nature, and learning new ideas.

Omar Elkharbotly is a Senior Cloud Support Engineer at AWS, specializing in Data, Machine Learning, and Generative AI solutions. With extensive experience in helping customers architect and optimize their cloud-based AI/ML/GenAI workloads, Omar works closely with AWS customers to solve complex technical challenges and implement best practices across the AWS AI/ML/GenAI service portfolio. He is passionate about helping organizations leverage the full potential of cloud computing to drive innovation in generative AI and machine learning.

Gideon Teo is a FSI Solution Architect at AWS in Melbourne, where he brings specialised expertise in Amazon SageMaker and Amazon Bedrock. With a deep passion for both traditional AI/ML methodologies and the emerging field of Generative AI, he helps financial institutions leverage cutting-edge technologies to solve complex business challenges. Outside of work, he cherishes quality time with friends and family, and continuously expands his knowledge across diverse technology domains.

Read More

70 Amazon Research Award recipients announced


Awardees, who represent 44 universities in 10 countries, have access to Amazon public datasets, along with AWS AI/ML services and tools.

June 03, 01:06 PM

Amazon Research Awards (ARA) provides unrestricted funds and AWS Promotional Credits to academic researchers investigating various research topics in multiple disciplines. This cycle, ARA received many excellent research proposals from across the world and today is publicly announcing 70 award recipients who represent 44 universities in 10 countries.

This announcement includes awards funded under five calls for proposals during the fall 2024 cycle: AI for Information Security, Automated Reasoning, AWS AI, AWS Cryptography, and Sustainability. Proposals were reviewed for the quality of their scientific content and their potential to impact both the research community and society. Additionally, Amazon encourages the publication of research results, presentations of research at Amazon offices worldwide, and the release of related code under open-source licenses.

Recipients have access to more than 700 Amazon public datasets and can utilize AWS AI/ML services and tools through their AWS Promotional Credits. Recipients also are assigned an Amazon research contact who offers consultation and advice, along with opportunities to participate in Amazon events and training sessions.

“Automated Reasoning is an important area of research for Amazon, with potential applications across various features and applications to help improve security, reliability, and performance for our customers. Through the ARA program, we collaborate with leading academic researchers to explore challenges in this field,” said Robert Jones, senior principal scientist with the Cloud Automated Reasoning Group. “We were again impressed by the exceptional response to our Automated Reasoning call for proposals this year, receiving numerous high-quality submissions. Congratulations to the recipients! We’re excited to support their work and partner with them as they develop new science and technology in this important area.”

“At Amazon, we believe that solving the world’s toughest sustainability challenges benefits from both breakthrough scientific research and open and bold collaboration. Through programs like the Amazon Research Awards program, we aim to support academic research that could contribute to our understanding of these complex issues,” said Kommy Weldemariam, Director of Science and Innovation Sustainability. “The selected proposals represent innovative projects that we hope will help advance knowledge in this field, potentially benefiting customers, communities, and the environment.”

ARA funds proposals throughout the year in a variety of research areas. Applicants are encouraged to visit the ARA call for proposals page for more information or send an email to be notified of future open calls.

The tables below list, in alphabetical order by last name, fall 2024 cycle call-for-proposal recipients, sorted by research area.

AI for Information Security

Recipient | University | Research title
Christopher Amato | Northeastern University | Multi-Agent Reinforcement Learning Cyber Defense for Securing Cloud Computing Platforms
Bernd Bischl | Ludwig Maximilian University of Munich | Improving Generative and Foundation Models Reliability via Uncertainty-awareness
Alina Oprea | Northeastern University | Multi-Agent Reinforcement Learning Cyber Defense for Securing Cloud Computing Platforms
Roberto Perdisci | University of Georgia | ContextADBench: A Comprehensive Benchmark Suite for Contextual Anomaly Detection

Automated Reasoning

Recipient | University | Research title
Nada Amin | Harvard University | LLM-Augmented Semi-Automated Proofs for Interactive Verification
Suguman Bansal | Georgia Institute of Technology | Certified Inductive Generalization in Reinforcement Learning
Ioana Boureanu | University of Surrey | Phoebe+: An Automated-Reasoning Tool for Provable Privacy in Cryptographic Systems
Omar Haider Chowdhury | Stony Brook University | Restricter: An Automatic Tool for Authoring Amazon Cedar Access Control Policies with the Principle of Least Privilege
Stefan Ciobaca | Alexandru Ioan Cuza University | An Interactive Proof Mode for Dafny
João Ferreira | INESC-ID | Polyglot Automated Program Repair for Infrastructure as Code
Mirco Giacobbe | University of Birmingham | Neural Software Verification
Tobias Grosser | University of Cambridge | Synthesis-based Symbolic BitVector Simplification for Lean
Ronghui Gu | Columbia University | Scaling Formal Verification of Security Properties for Unmodified System Software
Alexey Ignatiev | Monash University | Huub: Next-Gen Lazy Clause Generation
Kenneth McMillan | University of Texas at Austin | Synthesis of Auxiliary Variables and Invariants for Distributed Protocol Verification
Alexandra Mendes | University of Porto | Overcoming Barriers to the Adoption of Verification-Aware Languages
Jason Nieh | Columbia University | Scaling Formal Verification of Security Properties for Unmodified System Software
Rohan Padhye | Carnegie Mellon University | Automated Synthesis and Evaluation of Property-Based Tests
Fortunat Rajaona | University of Surrey | Phoebe+: An Automated-Reasoning Tool for Provable Privacy in Cryptographic Systems
Subhajit Roy | Indian Institute of Technology Kanpur | Theorem Proving Modulo LLM
Gagandeep Singh | University of Illinois at Urbana-Champaign | Trustworthy LLM Systems using Formal Contracts
Scott Stoller | Stony Brook University | Restricter: An Automatic Tool for Authoring Amazon Cedar Access Control Policies with the Principle of Least Privilege
Peter Stuckey | Monash University | Huub: Next-Gen Lazy Clause Generation
Yulei Sui | … | …
link-data=”{&amp;quot;cms.site.owner&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000016e-17e7-d263-a5fe-fff724f30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;ae3387cc-b875-31b7-b82d-63fd8d758c20&amp;quot;},&amp;quot;cms.content.publishDate&amp;quot;:1748966283536,&amp;quot;cms.content.publishUser&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000017f-b709-d2ad-a97f-f7fd25e30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;6aa69ae1-35be-30dc-87e9-410da9e1cdcc&amp;quot;},&amp;quot;cms.content.updateDate&amp;quot;:1748966283536,&amp;quot;cms.content.updateUser&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000017f-b709-d2ad-a97f-f7fd25e30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;6aa69ae1-35be-30dc-87e9-410da9e1cdcc&amp;quot;},&amp;quot;rekognitionVideo.timeFrameMetadata&amp;quot;:[],&amp;quot;link&amp;quot;:{&amp;quot;rekognitionVideo.timeFrameMetadata&amp;quot;:[],&amp;quot;attributes&amp;quot;:[],&amp;quot;item&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;00000196-f465-dd94-ad97-f4fff04e0000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;07a8c4fb-2e5e-394d-8c44-6bb1ed9f87f6&amp;quot;},&amp;quot;_id&amp;quot;:&amp;quot;00000197-3683-d23f-a3ff-f6ab5b9f0000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;c3f0009d-3dd9-3762-acac-88c3a292c6b2&amp;quot;},&amp;quot;linkText&amp;quot;:&amp;quot;Yulei Sui&amp;quot;,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.enhancementAlignment&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.overlayText&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.enhancementAlignment&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.overlayText&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs._preset&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs._preset&amp;quot;:null,&amp;quot;_id&amp;quot;:&amp;quot;00000197-3683-d23f-a3ff-f6ab5b970000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;809caec9-30e2-3666-8b71-b32ddbffc288&amp;quot;}”&gt;Yulei Sui&lt;/a&gt;&lt;/td&gt;&lt;td colspan=”1″ rowspan=”1″ class=”xl65″ width=”175″ style=”border-top:none;width:131pt”&gt;University of New South Wales&lt;/td&gt;&lt;td colspan=”1″ rowspan=”1″ class=”xl65″ width=”285″ style=”border-top:none;border-left:none;width:214pt”&gt;Path-Sensitive Typestate Analysis through Sparse Abstract Execution&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=”1″ rowspan=”1″&gt;&lt;a href=”https://www.amazon.science/research-awards/recipients/nikos-vasilakis” data-cms-id=”00000196-f4f7-d957-a396-feff12da0000″ data-cms-href=”https://www.amazon.science/research-awards/recipients/nikos-vasilakis” 
link-data=”{&amp;quot;cms.site.owner&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000016e-17e7-d263-a5fe-fff724f30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;ae3387cc-b875-31b7-b82d-63fd8d758c20&amp;quot;},&amp;quot;cms.content.publishDate&amp;quot;:1748966296140,&amp;quot;cms.content.publishUser&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000017f-b709-d2ad-a97f-f7fd25e30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;6aa69ae1-35be-30dc-87e9-410da9e1cdcc&amp;quot;},&amp;quot;cms.content.updateDate&amp;quot;:1748966296140,&amp;quot;cms.content.updateUser&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000017f-b709-d2ad-a97f-f7fd25e30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;6aa69ae1-35be-30dc-87e9-410da9e1cdcc&amp;quot;},&amp;quot;rekognitionVideo.timeFrameMetadata&amp;quot;:[],&amp;quot;link&amp;quot;:{&amp;quot;rekognitionVideo.timeFrameMetadata&amp;quot;:[],&amp;quot;attributes&amp;quot;:[],&amp;quot;item&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;00000196-f4f7-d957-a396-feff12da0000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;07a8c4fb-2e5e-394d-8c44-6bb1ed9f87f6&amp;quot;},&amp;quot;_id&amp;quot;:&amp;quot;00000197-3683-d5b3-a9f7-be83a51e0000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;c3f0009d-3dd9-3762-acac-88c3a292c6b2&amp;quot;},&amp;quot;linkText&amp;quot;:&amp;quot;Nikos Vasilakis&amp;quot;,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.enhancementAlignment&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.overlayText&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.enhancementAlignment&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.overlayText&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs._preset&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs._preset&amp;quot;:null,&amp;quot;_id&amp;quot;:&amp;quot;00000197-3683-d5b3-a9f7-be83a5150000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;809caec9-30e2-3666-8b71-b32ddbffc288&amp;quot;}”&gt;Nikos Vasilakis&lt;/a&gt;&lt;/td&gt;&lt;td colspan=”1″ rowspan=”1″ class=”xl65″ width=”175″ style=”border-top:none;width:131pt”&gt;Brown University&lt;/td&gt;&lt;td colspan=”1″ rowspan=”1″ class=”xl65″ width=”285″ style=”border-top:none;border-left:none;width:214pt”&gt;Semantics-Driven Static Analysis for the Unix/Linux Shell&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=”1″ rowspan=”1″&gt;&lt;a href=”https://www.amazon.science/research-awards/recipients/ping-wang” data-cms-id=”00000196-f46b-d69c-af96-fc6f73960000″ data-cms-href=”https://www.amazon.science/research-awards/recipients/ping-wang” 
link-data=”{&amp;quot;cms.site.owner&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000016e-17e7-d263-a5fe-fff724f30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;ae3387cc-b875-31b7-b82d-63fd8d758c20&amp;quot;},&amp;quot;cms.content.publishDate&amp;quot;:1748966308787,&amp;quot;cms.content.publishUser&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000017f-b709-d2ad-a97f-f7fd25e30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;6aa69ae1-35be-30dc-87e9-410da9e1cdcc&amp;quot;},&amp;quot;cms.content.updateDate&amp;quot;:1748966308787,&amp;quot;cms.content.updateUser&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000017f-b709-d2ad-a97f-f7fd25e30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;6aa69ae1-35be-30dc-87e9-410da9e1cdcc&amp;quot;},&amp;quot;rekognitionVideo.timeFrameMetadata&amp;quot;:[],&amp;quot;link&amp;quot;:{&amp;quot;rekognitionVideo.timeFrameMetadata&amp;quot;:[],&amp;quot;attributes&amp;quot;:[],&amp;quot;item&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;00000196-f46b-d69c-af96-fc6f73960000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;07a8c4fb-2e5e-394d-8c44-6bb1ed9f87f6&amp;quot;},&amp;quot;_id&amp;quot;:&amp;quot;00000197-3683-da03-a5d7-ffcfd5430000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;c3f0009d-3dd9-3762-acac-88c3a292c6b2&amp;quot;},&amp;quot;linkText&amp;quot;:&amp;quot;Ping Wang&amp;quot;,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.enhancementAlignment&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.overlayText&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.enhancementAlignment&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.overlayText&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs._preset&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs._preset&amp;quot;:null,&amp;quot;_id&amp;quot;:&amp;quot;00000197-3683-da03-a5d7-ffcfd53a0000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;809caec9-30e2-3666-8b71-b32ddbffc288&amp;quot;}”&gt;Ping Wang&lt;/a&gt;&lt;/td&gt;&lt;td colspan=”1″ rowspan=”1″ class=”xl65″ width=”175″ style=”border-top:none;width:131pt”&gt;Stevens Institute of Technology&lt;/td&gt;&lt;td colspan=”1″ rowspan=”1″ class=”xl65″ width=”285″ style=”border-top:none;border-left:none;width:214pt”&gt;Leveraging Large Language Models for Reasoning Augmented Searching on Domain-specific NoSQL Database&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=”1″ rowspan=”1″&gt;&lt;a href=”https://www.amazon.science/research-awards/recipients/john-wawrzynek” data-cms-id=”00000196-f475-dd94-ad97-f4ff402b0000″ data-cms-href=”https://www.amazon.science/research-awards/recipients/john-wawrzynek” 
link-data=”{&amp;quot;cms.site.owner&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000016e-17e7-d263-a5fe-fff724f30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;ae3387cc-b875-31b7-b82d-63fd8d758c20&amp;quot;},&amp;quot;cms.content.publishDate&amp;quot;:1748966318795,&amp;quot;cms.content.publishUser&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000017f-b709-d2ad-a97f-f7fd25e30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;6aa69ae1-35be-30dc-87e9-410da9e1cdcc&amp;quot;},&amp;quot;cms.content.updateDate&amp;quot;:1748966318795,&amp;quot;cms.content.updateUser&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000017f-b709-d2ad-a97f-f7fd25e30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;6aa69ae1-35be-30dc-87e9-410da9e1cdcc&amp;quot;},&amp;quot;rekognitionVideo.timeFrameMetadata&amp;quot;:[],&amp;quot;link&amp;quot;:{&amp;quot;rekognitionVideo.timeFrameMetadata&amp;quot;:[],&amp;quot;attributes&amp;quot;:[],&amp;quot;item&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;00000196-f475-dd94-ad97-f4ff402b0000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;07a8c4fb-2e5e-394d-8c44-6bb1ed9f87f6&amp;quot;},&amp;quot;_id&amp;quot;:&amp;quot;00000197-3684-d23f-a3ff-f6af03750000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;c3f0009d-3dd9-3762-acac-88c3a292c6b2&amp;quot;},&amp;quot;linkText&amp;quot;:&amp;quot;John Wawrzynek&amp;quot;,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.enhancementAlignment&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.overlayText&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.enhancementAlignment&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.overlayText&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs._preset&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs._preset&amp;quot;:null,&amp;quot;_id&amp;quot;:&amp;quot;00000197-3684-d23f-a3ff-f6af036f0000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;809caec9-30e2-3666-8b71-b32ddbffc288&amp;quot;}”&gt;John Wawrzynek&lt;/a&gt;&lt;/td&gt;&lt;td colspan=”1″ rowspan=”1″ class=”xl65″ width=”175″ style=”border-top:none;width:131pt”&gt;University of California, Berkeley&lt;/td&gt;&lt;td colspan=”1″ rowspan=”1″ class=”xl65″ width=”285″ style=”border-top:none;border-left:none;width:214pt”&gt;GPU-Accelerated High-Throughput SAT Sampling&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;

AWS AI

| Recipient | University | Research title |
| --- | --- | --- |
| Panagiotis Adamopoulos | Emory University | Generative AI solutions for The Spillover Effect of Fraudulent Reviews on Product Recommendations |
| Vikram Adve | University of Illinois at Urbana-Champaign | Fellini: Differentiable ML Compiler for Full-Graph Optimization for LLM Models |
| Frances Arnold | California Institute of Technology | Closed-loop Generative Machine Learning for De Novo Enzyme Discovery and Optimization |
| Yonatan Bisk | Carnegie Mellon University | Useful, Safe, and Robust Multiturn Interactions with LLMs |
| Shiyu Chang | University of California, Santa Barbara | Cut the Crap: Advancing the Efficient Communication of Multi-Agent Systems via Spatial-Temporal Topology Design and KV Cache Sharing |
| Yuxin Chen | University of Pennsylvania | Provable Acceleration of Diffusion Models for Modern Generative AI |
| Tianlong Chen | University of North Carolina at Chapel Hill | Cut the Crap: Advancing the Efficient Communication of Multi-Agent Systems via Spatial-Temporal Topology Design and KV Cache Sharing |
| Mingyu Ding | University of North Carolina at Chapel Hill | Aligning Long Videos and Language as Long-Horizon World Models |
| Nikhil Garg | Cornell University | Market Design for Responsible Multi-agent LLMs |
| Jessica Hullman | Northwestern University | Human-Aligned Uncertainty Quantification in High Dimensions |
| Christopher Jermaine | Rice University | Fast, Trusted AI Using the EINSUMMABLE Compiler |
| Yunzhu Li | Columbia University | Physics-Informed Foundation Models Through Embodied Interactions |
| Pattie Maes | Massachusetts Institute of Technology | Understanding How LLM Agents Deviate from Human Choices |
| Sasa Misailovic | University of Illinois at Urbana-Champaign | Fellini: Differentiable ML Compiler for Full-Graph Optimization for LLM Models |
| Kristina Monakhova | Cornell University | Trustworthy extreme imaging for science using interpretable uncertainty quantification |
| Todd Mowry | Carnegie Mellon University | Efficient LLM Serving on Trainium via Kernel Generation |
link-data=”{&amp;quot;cms.site.owner&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000016e-17e7-d263-a5fe-fff724f30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;ae3387cc-b875-31b7-b82d-63fd8d758c20&amp;quot;},&amp;quot;cms.content.publishDate&amp;quot;:1748967357021,&amp;quot;cms.content.publishUser&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000017f-b709-d2ad-a97f-f7fd25e30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;6aa69ae1-35be-30dc-87e9-410da9e1cdcc&amp;quot;},&amp;quot;cms.content.updateDate&amp;quot;:1748967357021,&amp;quot;cms.content.updateUser&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000017f-b709-d2ad-a97f-f7fd25e30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;6aa69ae1-35be-30dc-87e9-410da9e1cdcc&amp;quot;},&amp;quot;rekognitionVideo.timeFrameMetadata&amp;quot;:[],&amp;quot;link&amp;quot;:{&amp;quot;rekognitionVideo.timeFrameMetadata&amp;quot;:[],&amp;quot;attributes&amp;quot;:[],&amp;quot;item&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;00000196-f3b6-d69c-af96-ffb608e40000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;07a8c4fb-2e5e-394d-8c44-6bb1ed9f87f6&amp;quot;},&amp;quot;_id&amp;quot;:&amp;quot;00000197-3693-d23f-a3ff-f6bbd9690000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;c3f0009d-3dd9-3762-acac-88c3a292c6b2&amp;quot;},&amp;quot;linkText&amp;quot;:&amp;quot;Min-hwan Oh&amp;quot;,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.enhancementAlignment&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.overlayText&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.enhancementAlignment&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.overlayText&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs._preset&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs._preset&amp;quot;:null,&amp;quot;_id&amp;quot;:&amp;quot;00000197-3693-d23f-a3ff-f6bbd9610000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;809caec9-30e2-3666-8b71-b32ddbffc288&amp;quot;}”&gt;Min-hwan Oh&lt;/a&gt;&lt;/td&gt;&lt;td colspan=”1″ rowspan=”1″ class=”xl65″ width=”162″ style=”border-top:none;width:121pt”&gt;Seoul National University&lt;/td&gt;&lt;td colspan=”1″ rowspan=”1″ class=”xl65″ width=”174″ style=”border-top:none;border-left:none;width:130pt”&gt;Mutually Beneficial Interplay Between Selection Fairness and Context Diversity in Contextual Bandits&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=”1″ rowspan=”1″&gt;&lt;a href=”https://www.amazon.science/research-awards/recipients/patrick-rebeschini” data-cms-id=”00000196-effd-dbbc-a7de-ffff13700000″ data-cms-href=”https://www.amazon.science/research-awards/recipients/patrick-rebeschini” 
link-data=”{&amp;quot;cms.site.owner&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000016e-17e7-d263-a5fe-fff724f30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;ae3387cc-b875-31b7-b82d-63fd8d758c20&amp;quot;},&amp;quot;cms.content.publishDate&amp;quot;:1748967368283,&amp;quot;cms.content.publishUser&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000017f-b709-d2ad-a97f-f7fd25e30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;6aa69ae1-35be-30dc-87e9-410da9e1cdcc&amp;quot;},&amp;quot;cms.content.updateDate&amp;quot;:1748967368283,&amp;quot;cms.content.updateUser&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000017f-b709-d2ad-a97f-f7fd25e30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;6aa69ae1-35be-30dc-87e9-410da9e1cdcc&amp;quot;},&amp;quot;rekognitionVideo.timeFrameMetadata&amp;quot;:[],&amp;quot;link&amp;quot;:{&amp;quot;rekognitionVideo.timeFrameMetadata&amp;quot;:[],&amp;quot;attributes&amp;quot;:[],&amp;quot;item&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;00000196-effd-dbbc-a7de-ffff13700000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;07a8c4fb-2e5e-394d-8c44-6bb1ed9f87f6&amp;quot;},&amp;quot;_id&amp;quot;:&amp;quot;00000197-3694-dd34-a3ff-7fd702400000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;c3f0009d-3dd9-3762-acac-88c3a292c6b2&amp;quot;},&amp;quot;linkText&amp;quot;:&amp;quot;Patrick Rebeschini&amp;quot;,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.enhancementAlignment&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.overlayText&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.enhancementAlignment&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.overlayText&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs._preset&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs._preset&amp;quot;:null,&amp;quot;_id&amp;quot;:&amp;quot;00000197-3694-dd34-a3ff-7fd7023a0000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;809caec9-30e2-3666-8b71-b32ddbffc288&amp;quot;}”&gt;Patrick Rebeschini&lt;/a&gt;&lt;/td&gt;&lt;td colspan=”1″ rowspan=”1″ class=”xl65″ width=”162″ style=”border-top:none;width:121pt”&gt;University of Oxford&lt;/td&gt;&lt;td colspan=”1″ rowspan=”1″ class=”xl65″ width=”174″ style=”border-top:none;border-left:none;width:130pt”&gt;Optimal Regularization for LLM Alignment&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=”1″ rowspan=”1″&gt;&lt;a href=”https://www.amazon.science/research-awards/recipients/jose-renau” data-cms-id=”00000196-f3a0-dd94-ad97-f7bf5d880000″ data-cms-href=”https://www.amazon.science/research-awards/recipients/jose-renau” 
link-data=”{&amp;quot;cms.site.owner&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000016e-17e7-d263-a5fe-fff724f30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;ae3387cc-b875-31b7-b82d-63fd8d758c20&amp;quot;},&amp;quot;cms.content.publishDate&amp;quot;:1748967379029,&amp;quot;cms.content.publishUser&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000017f-b709-d2ad-a97f-f7fd25e30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;6aa69ae1-35be-30dc-87e9-410da9e1cdcc&amp;quot;},&amp;quot;cms.content.updateDate&amp;quot;:1748967379029,&amp;quot;cms.content.updateUser&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000017f-b709-d2ad-a97f-f7fd25e30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;6aa69ae1-35be-30dc-87e9-410da9e1cdcc&amp;quot;},&amp;quot;rekognitionVideo.timeFrameMetadata&amp;quot;:[],&amp;quot;link&amp;quot;:{&amp;quot;rekognitionVideo.timeFrameMetadata&amp;quot;:[],&amp;quot;attributes&amp;quot;:[],&amp;quot;item&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;00000196-f3a0-dd94-ad97-f7bf5d880000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;07a8c4fb-2e5e-394d-8c44-6bb1ed9f87f6&amp;quot;},&amp;quot;_id&amp;quot;:&amp;quot;00000197-3694-dd34-a3ff-7fd72fbf0000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;c3f0009d-3dd9-3762-acac-88c3a292c6b2&amp;quot;},&amp;quot;linkText&amp;quot;:&amp;quot;Jose Renau&amp;quot;,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.enhancementAlignment&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.overlayText&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.enhancementAlignment&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.overlayText&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs._preset&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs._preset&amp;quot;:null,&amp;quot;_id&amp;quot;:&amp;quot;00000197-3694-dd34-a3ff-7fd72fb80000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;809caec9-30e2-3666-8b71-b32ddbffc288&amp;quot;}”&gt;Jose Renau&lt;/a&gt;&lt;/td&gt;&lt;td colspan=”1″ rowspan=”1″ class=”xl65″ width=”162″ style=”border-top:none;width:121pt”&gt;University of California, Santa Cruz&lt;/td&gt;&lt;td colspan=”1″ rowspan=”1″ class=”xl65″ width=”174″ style=”border-top:none;border-left:none;width:130pt”&gt;Verification Constrained Hardware Optimization using Intelligent Design Agentic Programming&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=”1″ rowspan=”1″&gt;&lt;a href=”https://www.amazon.science/research-awards/recipients/vilma-todri” data-cms-id=”00000196-f3c2-dd94-ad97-f7dffd220000″ data-cms-href=”https://www.amazon.science/research-awards/recipients/vilma-todri” 
link-data=”{&amp;quot;cms.site.owner&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000016e-17e7-d263-a5fe-fff724f30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;ae3387cc-b875-31b7-b82d-63fd8d758c20&amp;quot;},&amp;quot;cms.content.publishDate&amp;quot;:1748967390803,&amp;quot;cms.content.publishUser&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000017f-b709-d2ad-a97f-f7fd25e30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;6aa69ae1-35be-30dc-87e9-410da9e1cdcc&amp;quot;},&amp;quot;cms.content.updateDate&amp;quot;:1748967390803,&amp;quot;cms.content.updateUser&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000017f-b709-d2ad-a97f-f7fd25e30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;6aa69ae1-35be-30dc-87e9-410da9e1cdcc&amp;quot;},&amp;quot;rekognitionVideo.timeFrameMetadata&amp;quot;:[],&amp;quot;link&amp;quot;:{&amp;quot;rekognitionVideo.timeFrameMetadata&amp;quot;:[],&amp;quot;attributes&amp;quot;:[],&amp;quot;item&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;00000196-f3c2-dd94-ad97-f7dffd220000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;07a8c4fb-2e5e-394d-8c44-6bb1ed9f87f6&amp;quot;},&amp;quot;_id&amp;quot;:&amp;quot;00000197-3694-d23f-a3ff-f6bf5c2c0000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;c3f0009d-3dd9-3762-acac-88c3a292c6b2&amp;quot;},&amp;quot;linkText&amp;quot;:&amp;quot;Vilma Todri&amp;quot;,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.enhancementAlignment&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.overlayText&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.enhancementAlignment&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.overlayText&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs._preset&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs._preset&amp;quot;:null,&amp;quot;_id&amp;quot;:&amp;quot;00000197-3694-d23f-a3ff-f6bf5c260000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;809caec9-30e2-3666-8b71-b32ddbffc288&amp;quot;}”&gt;Vilma Todri&lt;/a&gt;&lt;/td&gt;&lt;td colspan=”1″ rowspan=”1″ class=”xl65″ width=”162″ style=”border-top:none;width:121pt”&gt;Emory University&lt;/td&gt;&lt;td colspan=”1″ rowspan=”1″ class=”xl65″ width=”174″ style=”border-top:none;border-left:none;width:130pt”&gt;Generative AI solutions for The Spillover Effect of Fraudulent Reviews on Product Recommendations&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=”1″ rowspan=”1″&gt;&lt;a href=”https://www.amazon.science/research-awards/recipients/aravindan-vijayaraghavan” data-cms-id=”00000196-f3a7-d69c-af96-ffa727d50000″ data-cms-href=”https://www.amazon.science/research-awards/recipients/aravindan-vijayaraghavan” 
link-data=”{&amp;quot;cms.site.owner&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000016e-17e7-d263-a5fe-fff724f30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;ae3387cc-b875-31b7-b82d-63fd8d758c20&amp;quot;},&amp;quot;cms.content.publishDate&amp;quot;:1748967402051,&amp;quot;cms.content.publishUser&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000017f-b709-d2ad-a97f-f7fd25e30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;6aa69ae1-35be-30dc-87e9-410da9e1cdcc&amp;quot;},&amp;quot;cms.content.updateDate&amp;quot;:1748967402051,&amp;quot;cms.content.updateUser&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000017f-b709-d2ad-a97f-f7fd25e30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;6aa69ae1-35be-30dc-87e9-410da9e1cdcc&amp;quot;},&amp;quot;rekognitionVideo.timeFrameMetadata&amp;quot;:[],&amp;quot;link&amp;quot;:{&amp;quot;rekognitionVideo.timeFrameMetadata&amp;quot;:[],&amp;quot;attributes&amp;quot;:[],&amp;quot;item&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;00000196-f3a7-d69c-af96-ffa727d50000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;07a8c4fb-2e5e-394d-8c44-6bb1ed9f87f6&amp;quot;},&amp;quot;_id&amp;quot;:&amp;quot;00000197-3694-d23f-a3ff-f6bf8aab0000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;c3f0009d-3dd9-3762-acac-88c3a292c6b2&amp;quot;},&amp;quot;linkText&amp;quot;:&amp;quot;Aravindan Vijayaraghavan&amp;quot;,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.enhancementAlignment&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.overlayText&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.enhancementAlignment&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.overlayText&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs._preset&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs._preset&amp;quot;:null,&amp;quot;_id&amp;quot;:&amp;quot;00000197-3694-d23f-a3ff-f6bf8aa50000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;809caec9-30e2-3666-8b71-b32ddbffc288&amp;quot;}”&gt;Aravindan Vijayaraghavan&lt;/a&gt;&lt;/td&gt;&lt;td colspan=”1″ rowspan=”1″ class=”xl65″ width=”162″ style=”border-top:none;width:121pt”&gt;Northwestern University&lt;/td&gt;&lt;td colspan=”1″ rowspan=”1″ class=”xl65″ width=”174″ style=”border-top:none;border-left:none;width:130pt”&gt;Human-Aligned Uncertainty Quantification in High Dimensions&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=”1″ rowspan=”1″&gt;&lt;a href=”https://www.amazon.science/research-awards/recipients/wei-yang” data-cms-id=”00000196-efd1-d41c-a3df-efdd570c0000″ data-cms-href=”https://www.amazon.science/research-awards/recipients/wei-yang” 
link-data=”{&amp;quot;cms.site.owner&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000016e-17e7-d263-a5fe-fff724f30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;ae3387cc-b875-31b7-b82d-63fd8d758c20&amp;quot;},&amp;quot;cms.content.publishDate&amp;quot;:1748967414018,&amp;quot;cms.content.publishUser&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000017f-b709-d2ad-a97f-f7fd25e30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;6aa69ae1-35be-30dc-87e9-410da9e1cdcc&amp;quot;},&amp;quot;cms.content.updateDate&amp;quot;:1748967414018,&amp;quot;cms.content.updateUser&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000017f-b709-d2ad-a97f-f7fd25e30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;6aa69ae1-35be-30dc-87e9-410da9e1cdcc&amp;quot;},&amp;quot;rekognitionVideo.timeFrameMetadata&amp;quot;:[],&amp;quot;link&amp;quot;:{&amp;quot;rekognitionVideo.timeFrameMetadata&amp;quot;:[],&amp;quot;attributes&amp;quot;:[],&amp;quot;item&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;00000196-efd1-d41c-a3df-efdd570c0000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;07a8c4fb-2e5e-394d-8c44-6bb1ed9f87f6&amp;quot;},&amp;quot;_id&amp;quot;:&amp;quot;00000197-3694-d23f-a3ff-f6bfb5480000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;c3f0009d-3dd9-3762-acac-88c3a292c6b2&amp;quot;},&amp;quot;linkText&amp;quot;:&amp;quot;Wei Yang&amp;quot;,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.enhancementAlignment&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.overlayText&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.enhancementAlignment&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.overlayText&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs._preset&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs._preset&amp;quot;:null,&amp;quot;_id&amp;quot;:&amp;quot;00000197-3694-d23f-a3ff-f6bfb5420000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;809caec9-30e2-3666-8b71-b32ddbffc288&amp;quot;}”&gt;Wei Yang&lt;/a&gt;&lt;/td&gt;&lt;td colspan=”1″ rowspan=”1″ class=”xl65″ width=”162″ style=”border-top:none;width:121pt”&gt;University of Texas at Dallas&lt;/td&gt;&lt;td colspan=”1″ rowspan=”1″ class=”xl65″ width=”174″ style=”border-top:none;border-left:none;width:130pt”&gt;Optimizing RISC-V Compilers with RISC-LLM and Syntax Parsing&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=”1″ rowspan=”1″&gt;&lt;a href=”https://www.amazon.science/research-awards/recipients/huaxiu-yao” data-cms-id=”00000196-eff5-d41c-a3df-effdd5fe0000″ data-cms-href=”https://www.amazon.science/research-awards/recipients/huaxiu-yao” 
link-data=”{&amp;quot;cms.site.owner&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000016e-17e7-d263-a5fe-fff724f30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;ae3387cc-b875-31b7-b82d-63fd8d758c20&amp;quot;},&amp;quot;cms.content.publishDate&amp;quot;:1748967432850,&amp;quot;cms.content.publishUser&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000017f-b709-d2ad-a97f-f7fd25e30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;6aa69ae1-35be-30dc-87e9-410da9e1cdcc&amp;quot;},&amp;quot;cms.content.updateDate&amp;quot;:1748967432850,&amp;quot;cms.content.updateUser&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000017f-b709-d2ad-a97f-f7fd25e30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;6aa69ae1-35be-30dc-87e9-410da9e1cdcc&amp;quot;},&amp;quot;rekognitionVideo.timeFrameMetadata&amp;quot;:[],&amp;quot;link&amp;quot;:{&amp;quot;rekognitionVideo.timeFrameMetadata&amp;quot;:[],&amp;quot;attributes&amp;quot;:[],&amp;quot;item&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;00000196-eff5-d41c-a3df-effdd5fe0000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;07a8c4fb-2e5e-394d-8c44-6bb1ed9f87f6&amp;quot;},&amp;quot;_id&amp;quot;:&amp;quot;00000197-3695-dd34-a3ff-7fd703150000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;c3f0009d-3dd9-3762-acac-88c3a292c6b2&amp;quot;},&amp;quot;linkText&amp;quot;:&amp;quot;Huaxiu Yao&amp;quot;,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.enhancementAlignment&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.overlayText&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.enhancementAlignment&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.overlayText&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs._preset&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs._preset&amp;quot;:null,&amp;quot;_id&amp;quot;:&amp;quot;00000197-3695-dd34-a3ff-7fd7030b0000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;809caec9-30e2-3666-8b71-b32ddbffc288&amp;quot;}”&gt;Huaxiu Yao&lt;/a&gt;&lt;/td&gt;&lt;td colspan=”1″ rowspan=”1″ class=”xl65″ width=”162″ style=”border-top:none;width:121pt”&gt;University of North Carolina at Chapel Hill&lt;/td&gt;&lt;td colspan=”1″ rowspan=”1″ class=”xl65″ width=”174″ style=”border-top:none;border-left:none;width:130pt”&gt;Aligning Long Videos and Language as Long-Horizon World Models&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=”1″ rowspan=”1″&gt;&lt;a href=”https://www.amazon.science/research-awards/recipients/amy-zhang” data-cms-id=”00000196-f3c4-dd94-ad97-f7df9fc20000″ data-cms-href=”https://www.amazon.science/research-awards/recipients/amy-zhang” 
link-data=”{&amp;quot;cms.site.owner&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000016e-17e7-d263-a5fe-fff724f30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;ae3387cc-b875-31b7-b82d-63fd8d758c20&amp;quot;},&amp;quot;cms.content.publishDate&amp;quot;:1748967448147,&amp;quot;cms.content.publishUser&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000017f-b709-d2ad-a97f-f7fd25e30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;6aa69ae1-35be-30dc-87e9-410da9e1cdcc&amp;quot;},&amp;quot;cms.content.updateDate&amp;quot;:1748967448147,&amp;quot;cms.content.updateUser&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000017f-b709-d2ad-a97f-f7fd25e30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;6aa69ae1-35be-30dc-87e9-410da9e1cdcc&amp;quot;},&amp;quot;rekognitionVideo.timeFrameMetadata&amp;quot;:[],&amp;quot;link&amp;quot;:{&amp;quot;rekognitionVideo.timeFrameMetadata&amp;quot;:[],&amp;quot;attributes&amp;quot;:[],&amp;quot;item&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;00000196-f3c4-dd94-ad97-f7df9fc20000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;07a8c4fb-2e5e-394d-8c44-6bb1ed9f87f6&amp;quot;},&amp;quot;_id&amp;quot;:&amp;quot;00000197-3695-d23f-a3ff-f6bf2eb10000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;c3f0009d-3dd9-3762-acac-88c3a292c6b2&amp;quot;},&amp;quot;linkText&amp;quot;:&amp;quot;Amy Zhang&amp;quot;,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.enhancementAlignment&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.overlayText&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.enhancementAlignment&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.overlayText&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs._preset&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs._preset&amp;quot;:null,&amp;quot;_id&amp;quot;:&amp;quot;00000197-3695-d23f-a3ff-f6bf2eab0000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;809caec9-30e2-3666-8b71-b32ddbffc288&amp;quot;}”&gt;Amy Zhang&lt;/a&gt;&lt;/td&gt;&lt;td colspan=”1″ rowspan=”1″ class=”xl65″ width=”162″ style=”border-top:none;width:121pt”&gt;University of Washington&lt;/td&gt;&lt;td colspan=”1″ rowspan=”1″ class=”xl65″ width=”174″ style=”border-top:none;border-left:none;width:130pt”&gt;Tools for Governing AI Agent Autonomy&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=”1″ rowspan=”1″&gt;&lt;a href=”https://www.amazon.science/research-awards/recipients/ruqi-zhang” data-cms-id=”00000196-f3b3-dd94-ad97-f7bf55460000″ data-cms-href=”https://www.amazon.science/research-awards/recipients/ruqi-zhang” 
link-data=”{&amp;quot;cms.site.owner&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000016e-17e7-d263-a5fe-fff724f30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;ae3387cc-b875-31b7-b82d-63fd8d758c20&amp;quot;},&amp;quot;cms.content.publishDate&amp;quot;:1748967462196,&amp;quot;cms.content.publishUser&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000017f-b709-d2ad-a97f-f7fd25e30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;6aa69ae1-35be-30dc-87e9-410da9e1cdcc&amp;quot;},&amp;quot;cms.content.updateDate&amp;quot;:1748967462196,&amp;quot;cms.content.updateUser&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000017f-b709-d2ad-a97f-f7fd25e30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;6aa69ae1-35be-30dc-87e9-410da9e1cdcc&amp;quot;},&amp;quot;rekognitionVideo.timeFrameMetadata&amp;quot;:[],&amp;quot;link&amp;quot;:{&amp;quot;rekognitionVideo.timeFrameMetadata&amp;quot;:[],&amp;quot;attributes&amp;quot;:[],&amp;quot;item&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;00000196-f3b3-dd94-ad97-f7bf55460000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;07a8c4fb-2e5e-394d-8c44-6bb1ed9f87f6&amp;quot;},&amp;quot;_id&amp;quot;:&amp;quot;00000197-3695-dd34-a3ff-7fd769aa0000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;c3f0009d-3dd9-3762-acac-88c3a292c6b2&amp;quot;},&amp;quot;linkText&amp;quot;:&amp;quot;Ruqi Zhang&amp;quot;,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.enhancementAlignment&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.overlayText&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.enhancementAlignment&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.overlayText&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs._preset&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs._preset&amp;quot;:null,&amp;quot;_id&amp;quot;:&amp;quot;00000197-3695-dd34-a3ff-7fd769a50000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;809caec9-30e2-3666-8b71-b32ddbffc288&amp;quot;}”&gt;Ruqi Zhang&lt;/a&gt;&lt;/td&gt;&lt;td colspan=”1″ rowspan=”1″ class=”xl65″ width=”162″ style=”border-top:none;width:121pt”&gt;Purdue University&lt;/td&gt;&lt;td colspan=”1″ rowspan=”1″ class=”xl65″ width=”174″ style=”border-top:none;border-left:none;width:130pt”&gt;Efficient Test-time Alignment for Large Language Models and Large Multimodal Models&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=”1″ rowspan=”1″&gt;&lt;a href=”https://www.amazon.science/research-awards/recipients/zheng-zhang” data-cms-id=”00000196-efcf-d41c-a3df-efcfa8a40000″ data-cms-href=”https://www.amazon.science/research-awards/recipients/zheng-zhang” 
link-data=”{&amp;quot;cms.site.owner&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000016e-17e7-d263-a5fe-fff724f30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;ae3387cc-b875-31b7-b82d-63fd8d758c20&amp;quot;},&amp;quot;cms.content.publishDate&amp;quot;:1748967475461,&amp;quot;cms.content.publishUser&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000017f-b709-d2ad-a97f-f7fd25e30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;6aa69ae1-35be-30dc-87e9-410da9e1cdcc&amp;quot;},&amp;quot;cms.content.updateDate&amp;quot;:1748967475461,&amp;quot;cms.content.updateUser&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000017f-b709-d2ad-a97f-f7fd25e30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;6aa69ae1-35be-30dc-87e9-410da9e1cdcc&amp;quot;},&amp;quot;rekognitionVideo.timeFrameMetadata&amp;quot;:[],&amp;quot;link&amp;quot;:{&amp;quot;rekognitionVideo.timeFrameMetadata&amp;quot;:[],&amp;quot;attributes&amp;quot;:[],&amp;quot;item&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;00000196-efcf-d41c-a3df-efcfa8a40000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;07a8c4fb-2e5e-394d-8c44-6bb1ed9f87f6&amp;quot;},&amp;quot;_id&amp;quot;:&amp;quot;00000197-3695-d23f-a3ff-f6bfa1a80000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;c3f0009d-3dd9-3762-acac-88c3a292c6b2&amp;quot;},&amp;quot;linkText&amp;quot;:&amp;quot;Zheng Zhang&amp;quot;,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.enhancementAlignment&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.overlayText&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.enhancementAlignment&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.overlayText&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs._preset&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs._preset&amp;quot;:null,&amp;quot;_id&amp;quot;:&amp;quot;00000197-3695-d23f-a3ff-f6bfa19f0000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;809caec9-30e2-3666-8b71-b32ddbffc288&amp;quot;}”&gt;Zheng Zhang&lt;/a&gt;&lt;/td&gt;&lt;td colspan=”1″ rowspan=”1″&gt;Rutgers University-New Brunswick&lt;/td&gt;&lt;td colspan=”1″ rowspan=”1″&gt;AlphaQC: An AI-powered Quantum Circuit Optimizer and Denoiser&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;

AWS Cryptography

Recipient | University | Research title
Alexandra Boldyreva | Georgia Institute of Technology | Quantifying Information Leakage in Searchable Encryption Protocols
Maria Eichlseder | Graz University of Technology, Austria | SALAD: Systematic Analysis of Lightweight Ascon-based Designs
Venkatesan Guruswami | University of California, Berkeley | Obfuscation, Proof Systems, and Secure Computation: A Research Program on Cryptography at the Simons Institute for the Theory of Computing
Joseph Jaeger | Georgia Institute of Technology | Analyzing Chat Encryption for Group Messaging
Aayush Jain | Carnegie Mellon University | Large Scale Multiparty Silent Preprocessing for MPC from LPN
Huijia Lin | University of Washington | Large Scale Multiparty Silent Preprocessing for MPC from LPN
Hamed Nemati | KTH Royal Institute of Technology | Trustworthy Automatic Verification of Side-Channel Countermeasures for Binary Cryptographic Programs using the HoIBA library
Karl Palmskog | KTH Royal Institute of Technology | Trustworthy Automatic Verification of Side-Channel Countermeasures for Binary Cryptographic Programs using the HoIBA library
Chris Peikert | University of Michigan, Ann Arbor | Practical Third-Generation FHE and Bootstrapping
Dimitrios Skarlatos | Carnegie Mellon University | Scale-Out FHE LLMs on GPUs
link-data=”{&amp;quot;cms.site.owner&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000016e-17e7-d263-a5fe-fff724f30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;ae3387cc-b875-31b7-b82d-63fd8d758c20&amp;quot;},&amp;quot;cms.content.publishDate&amp;quot;:1748968085679,&amp;quot;cms.content.publishUser&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000017f-b709-d2ad-a97f-f7fd25e30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;6aa69ae1-35be-30dc-87e9-410da9e1cdcc&amp;quot;},&amp;quot;cms.content.updateDate&amp;quot;:1748968085679,&amp;quot;cms.content.updateUser&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000017f-b709-d2ad-a97f-f7fd25e30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;6aa69ae1-35be-30dc-87e9-410da9e1cdcc&amp;quot;},&amp;quot;rekognitionVideo.timeFrameMetadata&amp;quot;:[],&amp;quot;link&amp;quot;:{&amp;quot;rekognitionVideo.timeFrameMetadata&amp;quot;:[],&amp;quot;attributes&amp;quot;:[],&amp;quot;item&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;00000196-f51a-d5f3-af9e-f5bf81dc0000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;07a8c4fb-2e5e-394d-8c44-6bb1ed9f87f6&amp;quot;},&amp;quot;_id&amp;quot;:&amp;quot;00000197-369e-d23f-a3ff-f6bff7370000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;c3f0009d-3dd9-3762-acac-88c3a292c6b2&amp;quot;},&amp;quot;linkText&amp;quot;:&amp;quot;Vinod Vaikuntanathan&amp;quot;,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.enhancementAlignment&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.overlayText&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.enhancementAlignment&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.overlayText&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs._preset&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs._preset&amp;quot;:null,&amp;quot;_id&amp;quot;:&amp;quot;00000197-369e-d23f-a3ff-f6bff7320000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;809caec9-30e2-3666-8b71-b32ddbffc288&amp;quot;}”&gt;Vinod Vaikuntanathan&lt;/a&gt;&lt;/td&gt;&lt;td colspan=”1″ rowspan=”1″ class=”xl65″ width=”171″ style=”border-top:none;width:128pt”&gt;Massachusetts Institute of Technology&lt;/td&gt;&lt;td colspan=”1″ rowspan=”1″ class=”xl65″ width=”202″ style=”border-top:none;border-left:none;width:151pt”&gt;Can Quantum Computers (Really) Factor?&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=”1″ rowspan=”1″&gt;&lt;a href=”https://www.amazon.science/research-awards/recipients/daniel-wichs” data-cms-id=”00000196-f512-d5f3-af9e-f5b7dc450000″ data-cms-href=”https://www.amazon.science/research-awards/recipients/daniel-wichs” 
link-data=”{&amp;quot;cms.site.owner&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000016e-17e7-d263-a5fe-fff724f30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;ae3387cc-b875-31b7-b82d-63fd8d758c20&amp;quot;},&amp;quot;cms.content.publishDate&amp;quot;:1748968101847,&amp;quot;cms.content.publishUser&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000017f-b709-d2ad-a97f-f7fd25e30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;6aa69ae1-35be-30dc-87e9-410da9e1cdcc&amp;quot;},&amp;quot;cms.content.updateDate&amp;quot;:1748968101847,&amp;quot;cms.content.updateUser&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000017f-b709-d2ad-a97f-f7fd25e30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;6aa69ae1-35be-30dc-87e9-410da9e1cdcc&amp;quot;},&amp;quot;rekognitionVideo.timeFrameMetadata&amp;quot;:[],&amp;quot;link&amp;quot;:{&amp;quot;rekognitionVideo.timeFrameMetadata&amp;quot;:[],&amp;quot;attributes&amp;quot;:[],&amp;quot;item&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;00000196-f512-d5f3-af9e-f5b7dc450000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;07a8c4fb-2e5e-394d-8c44-6bb1ed9f87f6&amp;quot;},&amp;quot;_id&amp;quot;:&amp;quot;00000197-369f-d23f-a3ff-f6bf2a1a0000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;c3f0009d-3dd9-3762-acac-88c3a292c6b2&amp;quot;},&amp;quot;linkText&amp;quot;:&amp;quot;Daniel Wichs&amp;quot;,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.enhancementAlignment&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.overlayText&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.enhancementAlignment&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.overlayText&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs._preset&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs._preset&amp;quot;:null,&amp;quot;_id&amp;quot;:&amp;quot;00000197-369f-d23f-a3ff-f6bf2a120000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;809caec9-30e2-3666-8b71-b32ddbffc288&amp;quot;}”&gt;Daniel Wichs&lt;/a&gt;&lt;/td&gt;&lt;td colspan=”1″ rowspan=”1″ class=”xl65″ width=”171″ style=”border-top:none;width:128pt”&gt;Northeastern University&lt;/td&gt;&lt;td colspan=”1″ rowspan=”1″ class=”xl65″ width=”202″ style=”border-top:none;border-left:none;width:151pt”&gt;Obfuscation, Proof Systems, and Secure Computation: A Research Program on Cryptography at the Simons Institute for the Theory of Computing&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=”1″ rowspan=”1″&gt;&lt;a href=”https://www.amazon.science/research-awards/recipients/david-wu” data-cms-id=”00000196-f51c-d957-a396-ff5e295c0000″ data-cms-href=”https://www.amazon.science/research-awards/recipients/david-wu” 
link-data=”{&amp;quot;cms.site.owner&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000016e-17e7-d263-a5fe-fff724f30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;ae3387cc-b875-31b7-b82d-63fd8d758c20&amp;quot;},&amp;quot;cms.content.publishDate&amp;quot;:1748968115140,&amp;quot;cms.content.publishUser&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000017f-b709-d2ad-a97f-f7fd25e30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;6aa69ae1-35be-30dc-87e9-410da9e1cdcc&amp;quot;},&amp;quot;cms.content.updateDate&amp;quot;:1748968115140,&amp;quot;cms.content.updateUser&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;0000017f-b709-d2ad-a97f-f7fd25e30000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;6aa69ae1-35be-30dc-87e9-410da9e1cdcc&amp;quot;},&amp;quot;rekognitionVideo.timeFrameMetadata&amp;quot;:[],&amp;quot;link&amp;quot;:{&amp;quot;rekognitionVideo.timeFrameMetadata&amp;quot;:[],&amp;quot;attributes&amp;quot;:[],&amp;quot;item&amp;quot;:{&amp;quot;_ref&amp;quot;:&amp;quot;00000196-f51c-d957-a396-ff5e295c0000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;07a8c4fb-2e5e-394d-8c44-6bb1ed9f87f6&amp;quot;},&amp;quot;_id&amp;quot;:&amp;quot;00000197-369f-d23f-a3ff-f6bf66700000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;c3f0009d-3dd9-3762-acac-88c3a292c6b2&amp;quot;},&amp;quot;linkText&amp;quot;:&amp;quot;David Wu&amp;quot;,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.enhancementAlignment&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.overlayText&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.enhancementAlignment&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.overlayText&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs._preset&amp;quot;:null,&amp;quot;theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs._preset&amp;quot;:null,&amp;quot;_id&amp;quot;:&amp;quot;00000197-369f-d23f-a3ff-f6bf666a0000&amp;quot;,&amp;quot;_type&amp;quot;:&amp;quot;809caec9-30e2-3666-8b71-b32ddbffc288&amp;quot;}”&gt;David Wu&lt;/a&gt;&lt;/td&gt;&lt;td colspan=”1″ rowspan=”1″ class=”xl65″ width=”171″ style=”border-top:none;width:128pt”&gt;University Of Texas At Austin&lt;/td&gt;&lt;td colspan=”1″ rowspan=”1″ class=”xl65″ width=”202″ style=”border-top:none;border-left:none;width:151pt”&gt;Fast Private Information Retrieval and More using Homomorphic Encryption&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;

Sustainability

Recipient | University | Research title
Meeyoung Cha | Max Planck Institute | Forest-Blossom (Flossom): A New Framework for Sustaining Forest Biodiversity Through Outcome-Driven Remote Sensing Monitoring
Jingrui He | University of Illinois at Urbana-Champaign | Foundation Model Enabled Earth's Ecosystem Monitoring
Pedro Lopes | University of Chicago | AI-powered Tools that Enable Engineers to Make & Re-make Sustainable Hardware
Cheng Yaw Low | Max Planck Institute | Forest-Blossom (Flossom): A New Framework for Sustaining Forest Biodiversity Through Outcome-Driven Remote Sensing Monitoring

