How 1X Technologies’ Robots Are Learning to Lend a Helping Hand

Humans learn the norms, values and behaviors of society from each other — and Bernt Børnich, founder and CEO of 1X Technologies, thinks robots should learn like this, too.

“For robots to be truly intelligent and show nuances like being careful around your pet, holding the door open for an elderly person and generally behaving like we want robots to behave, they have to live and learn among us,” Børnich told the AI Podcast.

1X Technologies is committed to building fully autonomous humanoid robots, with a focus on safety, affordability and adaptability.

Børnich explained how 1X Technologies uses a combination of reinforcement learning, expert demonstrations and real-world data to enable its robots to continuously learn and adapt to new situations.

NEO, the company’s upcoming robot, can perform household tasks like vacuuming, folding laundry, tidying and retrieving items. It’s built with operational safety at its core, using tendon-driven mechanisms inspired by the human musculoskeletal system to achieve low energy consumption.

Børnich highlights the potential for robots to enhance human productivity by helping handle mundane tasks, freeing people up to focus more on interpersonal connections and creative activities.

Learn more about the latest in physical AI and robotics at NVIDIA GTC Paris, taking place June 10-12, and register to attend the event’s humanoid-related sessions.

Time Stamps

05:18 – 1X Technologies’ approach to robot safety.

11:36 – How world models enable robots to search backwards from the goal.

16:51 – How robots can free humans up for more meaningful activities.

22:29 – NEO answers the door so Børnich can interview a candidate.

You Might Also Like… 

How World Foundation Models Will Advance Physical AI With NVIDIA’s Ming-Yu Liu

AI models that can accurately simulate and predict outcomes in physical, real-world environments will enable the next generation of physical AI systems. Ming-Yu Liu, vice president of research at NVIDIA and an IEEE Fellow, explains the significance of world foundation models — powerful neural networks that can simulate physical environments.

Roboflow Helps Unlock Computer Vision for Every Kind of AI Builder

Roboflow’s mission is to make the world programmable through computer vision. By simplifying computer vision development, the company helps bridge the gap between AI and people looking to harness it. Cofounder and CEO Joseph Nelson discusses how Roboflow empowers users in manufacturing, healthcare and automotive to solve complex problems with visual AI.

Imbue CEO Kanjun Qiu on Transforming AI Agents Into Personal Collaborators

Kanjun Qiu, CEO of Imbue, explores the emerging era where individuals can create and use their own AI agents. Drawing a parallel to the PC revolution of the late 1970s and ‘80s, Qiu discusses how modern AI systems are evolving to work collaboratively with users, enhancing their capabilities rather than just automating tasks.

Read More

NVIDIA Blackwell Delivers Breakthrough Performance in Latest MLPerf Training Results

NVIDIA is working with companies worldwide to build out AI factories — speeding the training and deployment of next-generation AI applications that use the latest advancements in training and inference.

The NVIDIA Blackwell architecture is built to meet the heightened performance requirements of these new applications. In the latest round of MLPerf Training — the 12th since the benchmark’s introduction in 2018 — the NVIDIA AI platform delivered the highest performance at scale on every benchmark and powered every result submitted on the benchmark’s toughest large language model (LLM)-focused test: Llama 3.1 405B pretraining.

The NVIDIA platform was the only one that submitted results on every MLPerf Training v5.0 benchmark — underscoring its exceptional performance and versatility across a wide array of AI workloads, spanning LLMs, recommendation systems, multimodal LLMs, object detection and graph neural networks.

The at-scale submissions used two AI supercomputers powered by the NVIDIA Blackwell platform: Tyche, built using NVIDIA GB200 NVL72 rack-scale systems, and Nyx, based on NVIDIA DGX B200 systems. In addition, NVIDIA collaborated with CoreWeave and IBM to submit GB200 NVL72 results using a total of 2,496 Blackwell GPUs and 1,248 NVIDIA Grace CPUs.

On the new Llama 3.1 405B pretraining benchmark, Blackwell delivered 2.2x greater performance compared with previous-generation architecture at the same scale.

On the Llama 2 70B LoRA fine-tuning benchmark, NVIDIA DGX B200 systems, powered by eight Blackwell GPUs, delivered 2.5x more performance compared with a submission using the same number of GPUs in the prior round.

These performance leaps highlight advancements in the Blackwell architecture, including high-density liquid-cooled racks, 13.4TB of coherent memory per rack, fifth-generation NVIDIA NVLink and NVIDIA NVLink Switch interconnect technologies for scale-up and NVIDIA Quantum-2 InfiniBand networking for scale-out. Plus, innovations in the NVIDIA NeMo Framework software stack raise the bar for next-generation multimodal LLM training, critical for bringing agentic AI applications to market.

These agentic AI-powered applications will one day run in AI factories — the engines of the agentic AI economy. These new applications will produce tokens and valuable intelligence that can be applied to almost every industry and academic domain.

The NVIDIA data center platform includes GPUs, CPUs, high-speed fabrics and networking, as well as a vast array of software like NVIDIA CUDA-X libraries, the NeMo Framework, NVIDIA TensorRT-LLM and NVIDIA Dynamo. This highly tuned ensemble of hardware and software technologies empowers organizations to train and deploy models more quickly, dramatically accelerating time to value.

The NVIDIA partner ecosystem participated extensively in this MLPerf round. Beyond the submission with CoreWeave and IBM, other compelling submissions were from ASUS, Cisco, Dell Technologies, Giga Computing, Google Cloud, Hewlett Packard Enterprise, Lambda, Lenovo, Nebius, Oracle Cloud Infrastructure, Quanta Cloud Technology, ScitiX and Supermicro.

Learn more about MLPerf benchmarks.

Read More

Build Responsible AI Products with your own Yellow Teaming LLM

The tools we use to build AI are evolving fast, with PyTorch at the heart of many advances. But unless we evolve the way we approach building AI systems, we risk amplifying harm as fast as we’re scaling up performance. Building AI responsibly means designing systems that not only perform well but do so fairly, safely, and transparently—like making sure an AI hiring tool doesn’t favor one demographic over another.

One useful approach to developing responsible AI systems is Yellow Teaming: a proactive exercise that surfaces potential unintended consequences before deployment. Yellow Teaming helps companies stand out in a crowded market by making more thoughtful, impact-aware design choices that lead to an overall better product.

In this blog, we show how you can quickly create a PyTorch-based LLM Yellow Teaming assistant running on AWS Graviton4 with a reusable system prompt. We also give you an example to show you how to use your new assistant to explore unintended business-critical consequences of feature design and ultimately build better products.

Let’s get started.

What Is Yellow Teaming?

You may already be aware of the more popular term Red Teaming in cybersecurity, which involves simulating how adversaries might attack your product and fixing vulnerabilities before launch. Other color-coded approaches exist (like Blue Teams that defend against attacks), but Yellow Teaming is distinct in focusing on thoughtful design and implementation from the start of the product’s lifecycle. Red Teaming practices have already been adapted to the AI domain. Yellow Teaming principles are now becoming an important part of AI development as well.

The practice of Yellow Teaming asks a set of probing questions to help reveal the broader, unintended impacts of your product on your business, your users, and society at large. This application of Yellow Teaming, and the rationale behind it, are explained eloquently in the Development in Progress essay by The Consilience Project. A closely related practice is also offered in the Minimizing Harmful Consequences module of the Center for Humane Technology’s free course.

Why Does Yellow Teaming Matter?

The central idea is that by analyzing the consequences of your product decisions with a wide view, you can design better products that create positive feedback loops for your company’s bottom line and your users’ well-being. For example, it helps you avoid building a chatbot that unintentionally reinforces bias.

Traditional product development practices often solve for narrowly defined success metrics. Creating specific product measurables is good for focus and accountability, but it can lead to over-optimization on those metrics while ignoring other signals that matter to your company. Consider, for instance, an app with AI-driven recommendations that boosts engagement in the short term but makes people feel worse and fails to retain users over time.

Narrow product optimization tends to cause unmeasured negative effects. These include users getting burnt out or frustrated when using your product, reputational harm or less overall engagement with your company, and society fracturing from lack of trust and meaningful communication.

In many cases, what looks like product success on paper is actually harming your users, your company, and your long-term goals.

How to Implement Yellow Teaming Practices

Yellow Teaming is straightforward and powerful. Pick a product you are building, and systematically evaluate the various consequences for your users, your business, and society when it is adopted at scale. Start with direct consequences, then move to second- and third-order consequences by asking ‘what happens as a result of the previous effects?’ You should think through these consequences across multiple axes (a small checklist sketch follows the list below):

  1. Good and bad
  2. Short-term and long-term
  3. Intended and unintended
  4. Your company and your users
  5. A single user and groups of users
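
As a concrete starting point, here is a minimal Python sketch that turns these axes into a brainstorming checklist you can work through in a workshop or feed to an LLM one question at a time. Every name in it is illustrative rather than taken from an existing tool.

from itertools import product

# The five axes from the list above; all names here are illustrative.
AXES = {
    "valence": ["good", "bad"],
    "horizon": ["short-term", "long-term"],
    "intent": ["intended", "unintended"],
    "stakeholder": ["your company", "your users"],
    "scope": ["a single user", "groups of users"],
}

def consequence_prompts(product_name: str):
    """Yield one brainstorming question per combination of axis values."""
    keys = list(AXES)
    for combo in product(*AXES.values()):
        profile = ", ".join(f"{k}: {v}" for k, v in zip(keys, combo))
        yield (f"Imagine {product_name} adopted at scale. "
               f"What consequences fit this profile? ({profile})")

# Example: print the first three of the 32 generated questions.
for i, question in enumerate(consequence_prompts("a chat-to-song app")):
    print(question)
    if i == 2:
        break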

These types of questions help foster productive brainstorming:

  • What kinds of behaviors will this feature incentivize in users?
  • What affordances does this technology provide (what can users now do that they couldn’t before, even if unintended)?
  • Will this improve or degrade trust in our platform?
  • What social groups might benefit—or be left behind?

Yellow Teaming is based on complex systems thinking and externality analysis, fields that have traditionally felt far removed from engineering workflows. But by incorporating a lightweight Yellow Teaming assistant into your ideation process, it can become an intuitive, high-ROI part of product development.

Building Your PyTorch YellowTeamGPT

The good news is that you don’t need a PhD in philosophy or a panel of advisors to Yellow Team your AI project. You just need to be willing to act and, in this implementation of Yellow Teaming, use a good LLM with the right prompt. There are several advantages to running your LLM locally. The biggest is that you can safely feed in confidential product plans without worrying about your data being leaked. Another benefit is that the smaller model is not perfect and makes mistakes, forcing us as users to apply critical thinking to every output, and putting us in the right headspace to analyze non-obvious product consequences.

Here is how you can set up a PyTorch-based 8-billion-parameter Llama 3.1 model on your Graviton instance. First, create an r8g.4xlarge instance running Ubuntu 24.04 with at least 50 GB of storage, then follow these three steps:

1. Set up your machine with the torchchat repo and other requirements:

sudo apt-get update && sudo apt install gcc g++ build-essential python3-pip python3-venv google-perftools -y

git clone https://github.com/pytorch/torchchat.git && cd torchchat

python3 -m venv .venv && source .venv/bin/activate

./install/install_requirements.sh

2. Download and export the model from Hugging Face (HF) by entering your HF access token (note the max sequence length parameter, which you can increase to enable longer conversations with a linear increase in memory usage):

pip install -U "huggingface_hub[cli]"

huggingface-cli login

python torchchat.py export llama3.1 --output-dso-path exportedModels/llama3.1.so --device cpu --max-seq-length 8192

3. Run the model with Arm CPU optimizations and a maximum of 700 tokens per response:

LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libtcmalloc.so.4 TORCHINDUCTOR_CPP_WRAPPER=1 TORCHINDUCTOR_FREEZING=1 OMP_NUM_THREADS=16 python torchchat.py generate llama3.1 --dso-path exportedModels/llama3.1.so --device cpu --max-new-tokens 700 --chat

For more details on these commands and additional code snippets to add a UI to this chatbot, review this Arm Learning Path.
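
If you relaunch the assistant often, a small Python wrapper can set the environment variables from step 3 and start the chat for you. This is only a convenience sketch built around the exact command above, not an official torchchat interface, and it assumes you run it from the torchchat checkout.

import os
import subprocess

# Environment variables and flags copied from step 3.
env = dict(os.environ)
env.update({
    "LD_PRELOAD": "/usr/lib/aarch64-linux-gnu/libtcmalloc.so.4",
    "TORCHINDUCTOR_CPP_WRAPPER": "1",
    "TORCHINDUCTOR_FREEZING": "1",
    "OMP_NUM_THREADS": "16",
})

cmd = [
    "python", "torchchat.py", "generate", "llama3.1",
    "--dso-path", "exportedModels/llama3.1.so",
    "--device", "cpu",
    "--max-new-tokens", "700",
    "--chat",
]

# Launch the interactive chat session with the tuned environment.
subprocess.run(cmd, env=env, check=True)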

You can then enter a custom system prompt. Below is a simple prompt that turns your local LLM into a Yellow Teaming assistant. Feel free to review and tweak it to get the most out of it for your specific needs. Here’s what it does:
  1. Gathers key product details: What you’re building, how it makes money, who your users are.
  2. Analyzes direct and indirect consequences: YellowTeamGPT presents one at a time, considering non-obvious impacts to your business, users, and beyond (you’ll likely start to think of more impacts on your own).
  3. Iterates with you: You are in control, telling YellowTeamGPT to continue listing general direct consequences, identifying specific company risks, moving to 2nd-order effects, and even brainstorming features to make your product better.

Here is the YellowTeamGPT system prompt for you to copy. If directly copying, make sure to copy as one line into your terminal or the new lines may cause issues.

You are an expert in complex systems thinking and AI product design, called YellowTeamGPT. You help technologists build better products that users love, and lower company risk. You do this by helping the user evaluate their product design decisions via the Yellow Teaming methodology, which identifies the consequences of design decisions on their business, their users, and society.

You will request from the user information about their product under development. Once you have enough information, you will analyze the product’s consequences that arise if deployed at scale. Structure your thinking to first review direct consequences, then 2nd order consequences that follow from the identified direct effects (by asking ‘what might happen next as a result?’). Consider consequences that impact the company, users, and society; are short and long term; are across categories like truth and understanding, human well-being, capability growth, economics, and more.

You are here to constructively challenge users, not reinforce their existing ideas. Play devil’s advocate to help users think in ways they are not currently.

You will output in this format: For each identified consequence, tie the impact to product quality, and prompt the user with a question that helps them design the product better to mitigate that consequence (or turn a negative impact into a positive one). List one consequence at a time and ask the user to continue listing them or explore that consequence further.
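
Because the prompt has to be pasted as a single line, you can also flatten a saved copy with a couple of lines of Python before pasting it into the terminal. The filename below is just an example.

# Collapse a multi-line system prompt into one line for the terminal.
with open("yellow_team_prompt.txt") as f:
    one_line = " ".join(f.read().split())

print(one_line)  # copy this output into the chat as the system prompt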

Example Yellow Teaming

Give your LLM the provided system prompt and hit enter. Next, your YellowTeamGPT assistant will ask for some product details. Here is a hypothetical example product I used:

I’m building an app that turns a group chat conversation into a catchy pop song. Targeting any user, like WhatsApp users. Key functionality is importing a group chat conversation and outputting a tune with lyrics and beat to match. It is an app on any smartphone. Ideally, millions of users. Would make money by targeted advertising of the users.

You’ll notice, as YellowTeamGPT thinks and generates its reply, that it is notably slower than ChatGPT or other popular GPTs. Like the model’s occasional inaccuracy, its slower speed can be seen as a benefit. The point of this exercise is to slow down, think through non-obvious product impacts, and brainstorm enhancements that create positive value across the systems your product touches. While YellowTeamGPT is ‘thinking,’ you should be too.

And below are snippets of my conversation. First, it starts with one direct consequence:

I then instruct it to continue to another consequence:

I ask to explore the second-order effects of having misinformation spread at scale from this app:

Finally, I ask for help brainstorming product features to mitigate this harm. It generates a few interesting concepts that are not product-ready, but easily spark further ideation:

Using YellowTeamGPT for this use case, we were able to rapidly explore product impacts we may not have considered. We could then brainstorm features solving previously unconsidered problems, leading to an improved product experience that also mitigates the risk of reputational harm to our hypothetical company.

Integrating Yellow Teaming Into Your Practices

Anywhere you’re making decisions that shape your product’s features and the user experience, Yellow Teaming fits. Here are a few examples of where you can leverage your new YellowTeamGPT:

  • New product ideation sessions to expand your thinking.
  • Feature planning docs to stress-test your specs.
  • Code review workflows for flagging potential misuse.
  • Sprint retrospectives to reflect on design choices at scale.
  • Product pitch decks to show responsible AI due diligence.

It can be as formal or informal as you want. The more you and your team think about unintended, Nth-order product consequences across multiple axes, the better your product will be. By incorporating Yellow Teaming into your work, you don’t just do the right thing; you build products that:

  • Users engage with and trust more
  • Mitigate harmful impacts
  • Minimize company risk
  • Create lasting business value

Let’s stop thinking of responsible AI practices as something to check off a list and start treating them as what they really are: a competitive edge that creates positive outcomes for your company, for your users, and for our shared society.

Read More

NVIDIA RTX Blackwell GPUs Accelerate Professional-Grade Video Editing

4:2:2 cameras — capable of capturing double the color information compared with most standard cameras — are becoming widely available for consumers. At the same time, generative AI video models are rapidly increasing in functionality and quality, making new tools and workflows possible.

NVIDIA RTX GPUs based on the NVIDIA Blackwell architecture include dedicated hardware to encode and decode 4:2:2 video, and come with fifth-generation Tensor Cores designed to accelerate AI and deep learning workloads.

GeForce RTX 50 Series and NVIDIA RTX PRO Blackwell Series are primed to meet this demand, powering generative AI, new AI features and state-of-the-art video editing workflows for quicker cuts and faster exports.

4:2:2 Goes Mainstream

4:2:2 10-bit compatible video cameras are on the rise.

These cameras, traditionally reserved for professional use due to their high cost, are becoming more affordable, with major manufacturers now offering models priced under $600.

4:2:2 cameras can capture double the color information compared with standard 4:2:0 cameras while only increasing raw file sizes by 30%.

4:2:2 video cameras are on the rise, thanks to more affordable prices. Creators have more camera options than ever at lower entry points.

Standard cameras typically use 4:2:0 8-bit color compression, capable of capturing only a fraction of color information. While 4:2:0 is acceptable for video playback on browsers, professional video editors demand cameras that capture 4:2:2 color accuracy and fidelity, while keeping file sizes reasonable.

The downside of 4:2:2 is that the additional color information requires more computational power for playback, often leading to stuttering streams. As a result, many editors have had to create proxies before editing — a time-consuming process that requires additional storage and lowers fidelity while editing.

The GeForce RTX 50 Series adds hardware acceleration for 4:2:2 encode and decode, helping solve this computational challenge. RTX 50 Series GPUs boast a 10x acceleration in 4:2:2 encoding and can decode up to 8K 75 frames per second — equivalent to 10x 4K 30fps streams per decoder.

The most popular video editing apps, including Blackmagic Design’s DaVinci Resolve, CapCut and Wondershare Filmora, support NVIDIA hardware acceleration for 4:2:2 encode and decode. Adobe Premiere Pro offers decode support.

Combining 4:2:2 support with NVIDIA hardware increases creative possibilities. 10-bit 4:2:2 retains more color information than 8-bit 4:2:0, resulting in more accurate color representations and better color grading results for video editors.

4:2:2 offers more accurate color representation for better color grading results.

The extra color data from 4:2:2 support allows for increased flexibility during color correction and grading for more detailed adjustments. Improved keying enables cleaner and more accurate extractions of subjects from background, as well as sharper edges for smaller keyed objects.

4:2:2 enables cleaner text in video content.

4:2:2 reduces file sizes without significantly impacting picture quality, offering an optimal balance between quality and storage.

Generative AI-Powered Video Editing

Generative AI models are enabling video editors to generate filler video, extend clips, modify video styles and apply advanced visual effects with speed and ease, drastically reducing production times.

Popular models like WAN or LTX Video can generate higher-quality video with greater prompt accuracy and faster load times.

GeForce RTX and NVIDIA RTX PRO GPUs based on NVIDIA Blackwell enable these large, complex models to run quickly and on device, with support thanks to NVIDIA CUDA optimizations for PyTorch. Plus, the fifth-generation Tensor Cores in these GPUs offer support for FP4 quantization, allowing developers and enthusiasts to improve performance by over 2x and halve the VRAM needed.
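
To see why FP4 roughly halves the memory footprint, here is a rough back-of-the-envelope sketch in Python. It counts weight storage only, ignores activations and overhead, and the 14-billion-parameter figure is an arbitrary illustration rather than a model named in this post.

# Rough weight-memory estimate: parameters x bits per parameter.
def weight_gib(params_billion: float, bits_per_param: int) -> float:
    return params_billion * 1e9 * bits_per_param / 8 / 2**30

for name, bits in [("FP16", 16), ("FP8", 8), ("FP4", 4)]:
    print(f"14B weights in {name}: {weight_gib(14, bits):.1f} GiB")
# FP4 needs half the memory of FP8 and a quarter of FP16.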

Cutting-Edge Video Editing AI Features

Modern video editing apps provide an impressive array of advanced AI features — accelerated by GeForce RTX and NVIDIA RTX PRO GPUs.

DaVinci Resolve Studio 20, now in general release, adds new AI effects and integrates NVIDIA TensorRT to optimize AI performance. One of the new features, UltraNR Noise Reduction, is an AI-driven noise reduction mode that intelligently targets and reduces digital noise in video footage to maintain image clarity while minimizing softening. UltraNR Noise Reduction runs up to 75% faster on the GeForce RTX 5090 GPU than the previous generation.

Magic Mask is another AI-powered feature in DaVinci Resolve that enables users to quickly and accurately select and track objects, people or features within a scene, simplifying the process of creating masks and effects. Magic Mask v2 adds a paint brush to further adjust masking selections for more accurate and faster workflows.

Topaz Video AI Pro video enhancement software uses AI models like Gaia and Artemis to intelligently increase video resolution to 4K, 8K and even 16K — adding detail and sharpness while minimizing artifacts and noise. The software also benefits from TensorRT acceleration.

Topaz Starlight mini, the first local desktop diffusion model for video enhancement, can enhance footage — from tricky 8/16mm film to de-interlaced mini-DV video — that may otherwise be challenging for traditional AI models to handle. The model delivers exceptional quality at the cost of intensive compute requirements, meaning it can only run locally on RTX GPUs.

Adobe Premiere Pro recently released several new AI features, such as Adobe Media Intelligence, which uses AI to analyze footage and apply semantic tags to clips. This lets users more easily and quickly find specific footage by describing its content, including objects, locations, camera angles and even transcribed spoken words. Media Intelligence runs 30% faster on the GeForce RTX 5090 Laptop GPU compared with the GeForce RTX 4090 Laptop GPU.

Adobe’s Enhance Speech feature improves the quality of recorded speech by filtering out unwanted noise and making the audio sound clearer. Enhance Speech runs 7x faster on GeForce RTX 5090 Laptop GPUs compared with the MacBook Pro M4 Max.

Cut Like a Pro

GeForce RTX and NVIDIA RTX PRO GPUs are built to deliver the computational power needed for advanced video editing workflows.

These GPUs contain powerful NVIDIA hardware decoders (NVDEC) to unlock smooth playback and scrubbing of high-resolution video footage and multi-stream videos without the need for proxies. NVDEC is supported in Adobe Premiere Pro, CapCut, DaVinci Resolve, Vegas Pro and Wondershare Filmora.

Creative apps can use the additional encoders in GeForce RTX 5080 and 5090 GPUs, as well as in RTX PRO 6000, 5000, 4500 and 4000 Blackwell GPUs, which now feature support for 4:2:2.

Creators can use the RTX 5080 and 5090, for example, to import 5x 8K30 or 20x 4K30 streams at once, or import 10x 4K60 to do multi-camera editing and review multiple camera angles without slowdown. With the RTX PRO 6000, this can be boosted to up to 10x 8K30 or 40x 4K30 streams.

GeForce RTX and NVIDIA RTX PRO GPU and Laptop GPU encoders and decoders.

NVIDIA CUDA cores accelerate video and image processing effects such as motion tracking, sharpening, upsampling, transition effects and other computationally intensive tasks. They also accelerate rendering times, enable real-time previews while working with high-resolution video footage and speed up AI features, such as automatic color correction, object removal and noise reduction.

When it’s time to export, video editors that use the GeForce RTX 50 Series ninth-generation NVIDIA video encoder can get a 5% improvement in video quality on HEVC and AV1 encoding (BD-BR), resulting in higher-quality exports at the same bitrates.

Plus, a new Ultra High Quality (UHQ) mode available in the latest Blackwell encoder boosts quality by an additional 5% for HEVC and AV1 and is backwards-compatible with the GeForce RTX 40 Series.

DaVinci Resolve, CapCut and Filmora also support multi-encoder encoding, either via split encoding — where an input frame is divided into three parts, each processed by a different NVENC encoder — or simultaneous scene encoding, in which a video is split by groups of pictures that are each sent to an encoder to batch the operation for up to 2.5x faster export performance.

Tune in to NVIDIA founder and CEO Jensen Huang’s keynote at NVIDIA GTC Paris at VivaTech on June 11. Check out full-day workshops on June 10 and two days of technical sessions, training and certifications.

Stay tuned for more RTX- and AI-powered advances in content creation.

Each week, the RTX AI Garage blog series features community-driven AI innovations and content for those looking to learn more about NIM microservices and AI Blueprints, as well as building AI agents, creative workflows, digital humans, productivity apps and more on AI PCs and workstations. 

Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter.

Follow NVIDIA Workstation on LinkedIn and X.

See notice regarding software product information.

Read More

Analyzing the Effect of Linguistic Similarity on Cross-Lingual Transfer: Tasks and Input Representations Matter

Cross-lingual transfer is a popular approach to increase the amount of training data for NLP tasks in a low-resource context. However, the best strategy to decide which cross-lingual data to include is unclear. Prior research often focuses on a small set of languages from a few language families or a single task. It is still an open question how these findings extend to a wider variety of languages and tasks. In this work, we contribute to this question by analyzing cross-lingual transfer for 263 languages from a wide variety of language families. Moreover, we include three popular NLP tasks… (Apple Machine Learning Research)

70 Amazon Research Award recipients announced

Awardees, who represent 44 universities in 10 countries, have access to Amazon public datasets, along with AWS AI/ML services and tools.

June 03, 01:06 PM

Amazon Research Awards (ARA) provides unrestricted funds and AWS Promotional Credits to academic researchers investigating various research topics in multiple disciplines. This cycle, ARA received many excellent research proposals from across the world and today is publicly announcing 70 award recipients who represent 44 universities in 10 countries.

This announcement includes awards funded under five calls for proposals during the fall 2024 cycle: AI for Information Security, Automated Reasoning, AWS AI, AWS Cryptography, and Sustainability. Proposals were reviewed for the quality of their scientific content and their potential to impact both the research community and society. Additionally, Amazon encourages the publication of research results, presentations of research at Amazon offices worldwide, and the release of related code under open-source licenses.

Recipients have access to more than 700 Amazon public datasets and can utilize AWS AI/ML services and tools through their AWS Promotional Credits. Recipients also are assigned an Amazon research contact who offers consultation and advice, along with opportunities to participate in Amazon events and training sessions.

“Automated Reasoning is an important area of research for Amazon, with potential applications across various features and applications to help improve security, reliability, and performance for our customers. Through the ARA program, we collaborate with leading academic researchers to explore challenges in this field,” said Robert Jones, senior principal scientist with the Cloud Automated Reasoning Group. “We were again impressed by the exceptional response to our Automated Reasoning call for proposals this year, receiving numerous high-quality submissions. Congratulations to the recipients! We’re excited to support their work and partner with them as they develop new science and technology in this important area.”

“At Amazon, we believe that solving the world’s toughest sustainability challenges benefits from both breakthrough scientific research and open and bold collaboration. Through programs like the Amazon Research Awards program, we aim to support academic research that could contribute to our understanding of these complex issues,” said Kommy Weldemariam, Director of Science and Innovation Sustainability. “The selected proposals represent innovative projects that we hope will help advance knowledge in this field, potentially benefiting customers, communities, and the environment.”

ARA funds proposals throughout the year in a variety of research areas. Applicants are encouraged to visit the ARA call for proposals page for more information or send an email to be notified of future open calls.

The tables below list, in alphabetical order by last name, fall 2024 cycle call-for-proposal recipients, sorted by research area.

AI for Information Security

Recipient | University | Research title
Christopher Amato | Northeastern University | Multi-Agent Reinforcement Learning Cyber Defense for Securing Cloud Computing Platforms
Bernd Bischl | Ludwig Maximilian University of Munich | Improving Generative and Foundation Models Reliability via Uncertainty-awareness
Alina Oprea | Northeastern University | Multi-Agent Reinforcement Learning Cyber Defense for Securing Cloud Computing Platforms
Roberto Perdisci | University of Georgia | ContextADBench: A Comprehensive Benchmark Suite for Contextual Anomaly Detection

Automated Reasoning

Recipient | University | Research title
Nada Amin | Harvard University | LLM-Augmented Semi-Automated Proofs for Interactive Verification
Suguman Bansal | Georgia Institute of Technology | Certified Inductive Generalization in Reinforcement Learning
Ioana Boureanu | University of Surrey | Phoebe+: An Automated-Reasoning Tool for Provable Privacy in Cryptographic Systems
Omar Haider Chowdhury | Stony Brook University | Restricter: An Automatic Tool for Authoring Amazon Cedar Access Control Policies with the Principle of Least Privilege
Stefan Ciobaca | Alexandru Ioan Cuza University | An Interactive Proof Mode for Dafny
João Ferreira | INESC-ID | Polyglot Automated Program Repair for Infrastructure as Code
Mirco Giacobbe | University of Birmingham | Neural Software Verification
Tobias Grosser | University of Cambridge | Synthesis-based Symbolic BitVector Simplification for Lean
Ronghui Gu | Columbia University | Scaling Formal Verification of Security Properties for Unmodified System Software
Alexey Ignatiev | Monash University | Huub: Next-Gen Lazy Clause Generation
Kenneth McMillan | University of Texas at Austin | Synthesis of Auxiliary Variables and Invariants for Distributed Protocol Verification
Alexandra Mendes | University of Porto | Overcoming Barriers to the Adoption of Verification-Aware Languages
Jason Nieh | Columbia University |
width=”175″ style=”border-top:none;width:131pt”>Columbia University</td><td colspan=”1″ rowspan=”1″ class=”xl65″ width=”285″ style=”border-top:none;border-left:none;width:214pt”>Scaling Formal Verification of Security Properties for Unmodified System Software</td></tr><tr><td colspan=”1″ rowspan=”1″><a href=”https://www.amazon.science/research-awards/recipients/rohan-padhye” data-cms-id=”00000188-9255-dbd2-a1db-fad5092b0000″ data-cms-href=”https://www.amazon.science/research-awards/recipients/rohan-padhye” link-data=”{"cms.site.owner":{"_ref":"0000016e-17e7-d263-a5fe-fff724f30000","_type":"ae3387cc-b875-31b7-b82d-63fd8d758c20"},"cms.content.publishDate":1748966191602,"cms.content.publishUser":{"_ref":"0000017f-b709-d2ad-a97f-f7fd25e30000","_type":"6aa69ae1-35be-30dc-87e9-410da9e1cdcc"},"cms.content.updateDate":1748966191602,"cms.content.updateUser":{"_ref":"0000017f-b709-d2ad-a97f-f7fd25e30000","_type":"6aa69ae1-35be-30dc-87e9-410da9e1cdcc"},"rekognitionVideo.timeFrameMetadata":[],"link":{"rekognitionVideo.timeFrameMetadata":[],"attributes":[],"item":{"_ref":"00000188-9255-dbd2-a1db-fad5092b0000","_type":"07a8c4fb-2e5e-394d-8c44-6bb1ed9f87f6"},"_id":"00000197-3682-d23f-a3ff-f6ab09f60000","_type":"c3f0009d-3dd9-3762-acac-88c3a292c6b2"},"linkText":"Rohan Padhye","theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.enhancementAlignment":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.overlayText":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.enhancementAlignment":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.overlayText":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs._preset":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs._preset":null,"_id":"00000197-3682-d23f-a3ff-f6ab09f10000","_type":"809caec9-30e2-3666-8b71-b32ddbffc288"}”>Rohan Padhye</a></td><td colspan=”1″ rowspan=”1″ class=”xl65″ width=”175″ style=”border-top:none;width:131pt”>Carnegie Mellon University</td><td colspan=”1″ rowspan=”1″ class=”xl65″ width=”285″ style=”border-top:none;border-left:none;width:214pt”>Automated Synthesis and Evaluation of Property-Based Tests</td></tr><tr><td colspan=”1″ rowspan=”1″><a href=”https://www.amazon.science/research-awards/recipients/fortunat-rajaona” data-cms-id=”00000196-f463-d69c-af96-fc6734b50000″ data-cms-href=”https://www.amazon.science/research-awards/recipients/fortunat-rajaona” link-data=”{"cms.site.owner":{"_ref":"0000016e-17e7-d263-a5fe-fff724f30000","_type":"ae3387cc-b875-31b7-b82d-63fd8d758c20"},"cms.content.publishDate":1748966204266,"cms.content.publishUser":{"_ref":"0000017f-b709-d2ad-a97f-f7fd25e30000","_type":"6aa69ae1-35be-30dc-87e9-410da9e1cdcc"},"cms.content.updateDate":1748966204266,"cms.content.updateUser":{"_ref":"0000017f-b709-d2ad-a97f-f7fd25e30000","_type":"6aa69ae1-35be-30dc-87e9-410da9e1cdcc"},"rekognitionVideo.timeFrameMetadata":[],"link":{"rekognitionVideo.timeFrameMetadata":[],"attributes":[],"item":{"_ref":"00000196-f463-d69c-af96-fc6734b50000","_type":"07a8c4fb-2e5e-394d-8c44-6bb1ed9f87f6"},"_id":"00000197-3682-d23f-a3ff-f6ab3d8b0000","_type":"c3f0009d-3dd9-3762-acac-88c3a292c6b2"},"linkText":"Fortunat 
Rajaona","theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.enhancementAlignment":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.overlayText":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.enhancementAlignment":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.overlayText":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs._preset":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs._preset":null,"_id":"00000197-3682-d23f-a3ff-f6ab3d850000","_type":"809caec9-30e2-3666-8b71-b32ddbffc288"}”>Fortunat Rajaona</a></td><td colspan=”1″ rowspan=”1″ class=”xl65″ width=”175″ style=”border-top:none;width:131pt”>University of Surrey</td><td colspan=”1″ rowspan=”1″ class=”xl65″ width=”285″ style=”border-top:none;border-left:none;width:214pt”>Phoebe+: An Automated-Reasoning Tool for Provable Privacy in Cryptographic Systems</td></tr><tr><td colspan=”1″ rowspan=”1″><a href=”https://www.amazon.science/research-awards/recipients/subhajit-roy” data-cms-id=”00000196-f502-d957-a396-ff5e4a290000″ data-cms-href=”https://www.amazon.science/research-awards/recipients/subhajit-roy” link-data=”{"cms.site.owner":{"_ref":"0000016e-17e7-d263-a5fe-fff724f30000","_type":"ae3387cc-b875-31b7-b82d-63fd8d758c20"},"cms.content.publishDate":1748966222672,"cms.content.publishUser":{"_ref":"0000017f-b709-d2ad-a97f-f7fd25e30000","_type":"6aa69ae1-35be-30dc-87e9-410da9e1cdcc"},"cms.content.updateDate":1748966222672,"cms.content.updateUser":{"_ref":"0000017f-b709-d2ad-a97f-f7fd25e30000","_type":"6aa69ae1-35be-30dc-87e9-410da9e1cdcc"},"rekognitionVideo.timeFrameMetadata":[],"link":{"rekognitionVideo.timeFrameMetadata":[],"attributes":[],"item":{"_ref":"00000196-f502-d957-a396-ff5e4a290000","_type":"07a8c4fb-2e5e-394d-8c44-6bb1ed9f87f6"},"_id":"00000197-3682-da03-a5d7-ffcf6ee80000","_type":"c3f0009d-3dd9-3762-acac-88c3a292c6b2"},"linkText":"Subhajit Roy","theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.enhancementAlignment":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.overlayText":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.enhancementAlignment":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.overlayText":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs._preset":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs._preset":null,"_id":"00000197-3682-da03-a5d7-ffcf6edf0000","_type":"809caec9-30e2-3666-8b71-b32ddbffc288"}”>Subhajit Roy</a></td><td colspan=”1″ rowspan=”1″ class=”xl65″ width=”175″ style=”border-top:none;width:131pt”>Indian Institute of Technology Kanpur</td><td colspan=”1″ rowspan=”1″ class=”xl65″ width=”285″ style=”border-top:none;border-left:none;width:214pt”>Theorem Proving Modulo LLM</td></tr><tr><td colspan=”1″ rowspan=”1″><a href=”https://www.amazon.science/research-awards/recipients/gagandeep-singh” data-cms-id=”00000196-f46c-d69c-af96-fc6efdf30000″ data-cms-href=”https://www.amazon.science/research-awards/recipients/gagandeep-singh” 
link-data=”{"cms.site.owner":{"_ref":"0000016e-17e7-d263-a5fe-fff724f30000","_type":"ae3387cc-b875-31b7-b82d-63fd8d758c20"},"cms.content.publishDate":1748966236127,"cms.content.publishUser":{"_ref":"0000017f-b709-d2ad-a97f-f7fd25e30000","_type":"6aa69ae1-35be-30dc-87e9-410da9e1cdcc"},"cms.content.updateDate":1748966236127,"cms.content.updateUser":{"_ref":"0000017f-b709-d2ad-a97f-f7fd25e30000","_type":"6aa69ae1-35be-30dc-87e9-410da9e1cdcc"},"rekognitionVideo.timeFrameMetadata":[],"link":{"rekognitionVideo.timeFrameMetadata":[],"attributes":[],"item":{"_ref":"00000196-f46c-d69c-af96-fc6efdf30000","_type":"07a8c4fb-2e5e-394d-8c44-6bb1ed9f87f6"},"_id":"00000197-3682-da03-a5d7-ffcfba800000","_type":"c3f0009d-3dd9-3762-acac-88c3a292c6b2"},"linkText":"Gagandeep Singh","theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.enhancementAlignment":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.overlayText":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.enhancementAlignment":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.overlayText":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs._preset":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs._preset":null,"_id":"00000197-3682-da03-a5d7-ffcfba770000","_type":"809caec9-30e2-3666-8b71-b32ddbffc288"}”>Gagandeep Singh</a></td><td colspan=”1″ rowspan=”1″ class=”xl65″ width=”175″ style=”border-top:none;width:131pt”>University of Illinois At UrbanaChampaign</td><td colspan=”1″ rowspan=”1″ class=”xl65″ width=”285″ style=”border-top:none;border-left:none;width:214pt”>Trustworthy LLM Systems using Formal Contracts</td></tr><tr><td colspan=”1″ rowspan=”1″><a href=”https://www.amazon.science/research-awards/recipients/scott-stoller” data-cms-id=”00000196-f46a-dd94-ad97-f4ff16e70000″ data-cms-href=”https://www.amazon.science/research-awards/recipients/scott-stoller” link-data=”{"cms.site.owner":{"_ref":"0000016e-17e7-d263-a5fe-fff724f30000","_type":"ae3387cc-b875-31b7-b82d-63fd8d758c20"},"cms.content.publishDate":1748966247968,"cms.content.publishUser":{"_ref":"0000017f-b709-d2ad-a97f-f7fd25e30000","_type":"6aa69ae1-35be-30dc-87e9-410da9e1cdcc"},"cms.content.updateDate":1748966247968,"cms.content.updateUser":{"_ref":"0000017f-b709-d2ad-a97f-f7fd25e30000","_type":"6aa69ae1-35be-30dc-87e9-410da9e1cdcc"},"rekognitionVideo.timeFrameMetadata":[],"link":{"rekognitionVideo.timeFrameMetadata":[],"attributes":[],"item":{"_ref":"00000196-f46a-dd94-ad97-f4ff16e70000","_type":"07a8c4fb-2e5e-394d-8c44-6bb1ed9f87f6"},"_id":"00000197-3682-dd34-a3ff-7fd3eb0c0000","_type":"c3f0009d-3dd9-3762-acac-88c3a292c6b2"},"linkText":"Scott Stoller","theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.enhancementAlignment":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.overlayText":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.enhancementAlignment":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.overlayText":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs._preset":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs._preset":null,"_id":"00000197-3682-dd34-a3ff-7fd3eb060000","_type":"809caec9-30e2-3666-8b71-b32ddbffc288"}”>Scott Stoller</a></td><td colspan=”1″ rowspan=”1″ class=”xl65″ 
width=”175″ style=”border-top:none;width:131pt”>Stony Brook University</td><td colspan=”1″ rowspan=”1″ class=”xl65″ width=”285″ style=”border-top:none;border-left:none;width:214pt”>Restricter: An Automatic Tool for Authoring Amazon Cedar Access Control Policies with the Principle of Least Privilege</td></tr><tr><td colspan=”1″ rowspan=”1″ height=”20″ class=”xl66″ width=”135″ style=”height:15.0pt;width:101pt”><a href=”https://www.amazon.science/research-awards/recipients/peter-stuckey” data-cms-id=”00000196-f476-dd94-ad97-f4fff4600000″ data-cms-href=”https://www.amazon.science/research-awards/recipients/peter-stuckey” link-data=”{"cms.site.owner":{"_ref":"0000016e-17e7-d263-a5fe-fff724f30000","_type":"ae3387cc-b875-31b7-b82d-63fd8d758c20"},"cms.content.publishDate":1748966264947,"cms.content.publishUser":{"_ref":"0000017f-b709-d2ad-a97f-f7fd25e30000","_type":"6aa69ae1-35be-30dc-87e9-410da9e1cdcc"},"cms.content.updateDate":1748966264947,"cms.content.updateUser":{"_ref":"0000017f-b709-d2ad-a97f-f7fd25e30000","_type":"6aa69ae1-35be-30dc-87e9-410da9e1cdcc"},"rekognitionVideo.timeFrameMetadata":[],"link":{"rekognitionVideo.timeFrameMetadata":[],"attributes":[],"item":{"_ref":"00000196-f476-dd94-ad97-f4fff4600000","_type":"07a8c4fb-2e5e-394d-8c44-6bb1ed9f87f6"},"_id":"00000197-3683-df40-ad97-b69f18300000","_type":"c3f0009d-3dd9-3762-acac-88c3a292c6b2"},"linkText":"Peter Stuckey","theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.enhancementAlignment":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.overlayText":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.enhancementAlignment":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.overlayText":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs._preset":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs._preset":null,"_id":"00000197-3683-df40-ad97-b69f18290000","_type":"809caec9-30e2-3666-8b71-b32ddbffc288"}”>Peter Stuckey</a></td><td colspan=”1″ rowspan=”1″ class=”xl65″ width=”175″ style=”border-top:none;width:131pt”>Monash University</td><td colspan=”1″ rowspan=”1″ class=”xl65″ width=”285″ style=”border-top:none;border-left:none;width:214pt”>Huub: Next-Gen Lazy Clause Generation</td></tr><tr><td colspan=”1″ rowspan=”1″><a href=”https://www.amazon.science/research-awards/recipients/yulei-sui” data-cms-id=”00000196-f465-dd94-ad97-f4fff04e0000″ data-cms-href=”https://www.amazon.science/research-awards/recipients/yulei-sui” link-data=”{"cms.site.owner":{"_ref":"0000016e-17e7-d263-a5fe-fff724f30000","_type":"ae3387cc-b875-31b7-b82d-63fd8d758c20"},"cms.content.publishDate":1748966283536,"cms.content.publishUser":{"_ref":"0000017f-b709-d2ad-a97f-f7fd25e30000","_type":"6aa69ae1-35be-30dc-87e9-410da9e1cdcc"},"cms.content.updateDate":1748966283536,"cms.content.updateUser":{"_ref":"0000017f-b709-d2ad-a97f-f7fd25e30000","_type":"6aa69ae1-35be-30dc-87e9-410da9e1cdcc"},"rekognitionVideo.timeFrameMetadata":[],"link":{"rekognitionVideo.timeFrameMetadata":[],"attributes":[],"item":{"_ref":"00000196-f465-dd94-ad97-f4fff04e0000","_type":"07a8c4fb-2e5e-394d-8c44-6bb1ed9f87f6"},"_id":"00000197-3683-d23f-a3ff-f6ab5b9f0000","_type":"c3f0009d-3dd9-3762-acac-88c3a292c6b2"},"linkText":"Yulei 
Sui","theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.enhancementAlignment":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.overlayText":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.enhancementAlignment":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.overlayText":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs._preset":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs._preset":null,"_id":"00000197-3683-d23f-a3ff-f6ab5b970000","_type":"809caec9-30e2-3666-8b71-b32ddbffc288"}”>Yulei Sui</a></td><td colspan=”1″ rowspan=”1″ class=”xl65″ width=”175″ style=”border-top:none;width:131pt”>University of New South Wales</td><td colspan=”1″ rowspan=”1″ class=”xl65″ width=”285″ style=”border-top:none;border-left:none;width:214pt”>Path-Sensitive Typestate Analysis through Sparse Abstract Execution</td></tr><tr><td colspan=”1″ rowspan=”1″><a href=”https://www.amazon.science/research-awards/recipients/nikos-vasilakis” data-cms-id=”00000196-f4f7-d957-a396-feff12da0000″ data-cms-href=”https://www.amazon.science/research-awards/recipients/nikos-vasilakis” link-data=”{"cms.site.owner":{"_ref":"0000016e-17e7-d263-a5fe-fff724f30000","_type":"ae3387cc-b875-31b7-b82d-63fd8d758c20"},"cms.content.publishDate":1748966296140,"cms.content.publishUser":{"_ref":"0000017f-b709-d2ad-a97f-f7fd25e30000","_type":"6aa69ae1-35be-30dc-87e9-410da9e1cdcc"},"cms.content.updateDate":1748966296140,"cms.content.updateUser":{"_ref":"0000017f-b709-d2ad-a97f-f7fd25e30000","_type":"6aa69ae1-35be-30dc-87e9-410da9e1cdcc"},"rekognitionVideo.timeFrameMetadata":[],"link":{"rekognitionVideo.timeFrameMetadata":[],"attributes":[],"item":{"_ref":"00000196-f4f7-d957-a396-feff12da0000","_type":"07a8c4fb-2e5e-394d-8c44-6bb1ed9f87f6"},"_id":"00000197-3683-d5b3-a9f7-be83a51e0000","_type":"c3f0009d-3dd9-3762-acac-88c3a292c6b2"},"linkText":"Nikos Vasilakis","theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.enhancementAlignment":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.overlayText":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.enhancementAlignment":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.overlayText":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs._preset":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs._preset":null,"_id":"00000197-3683-d5b3-a9f7-be83a5150000","_type":"809caec9-30e2-3666-8b71-b32ddbffc288"}”>Nikos Vasilakis</a></td><td colspan=”1″ rowspan=”1″ class=”xl65″ width=”175″ style=”border-top:none;width:131pt”>Brown University</td><td colspan=”1″ rowspan=”1″ class=”xl65″ width=”285″ style=”border-top:none;border-left:none;width:214pt”>Semantics-Driven Static Analysis for the Unix/Linux Shell</td></tr><tr><td colspan=”1″ rowspan=”1″><a href=”https://www.amazon.science/research-awards/recipients/ping-wang” data-cms-id=”00000196-f46b-d69c-af96-fc6f73960000″ data-cms-href=”https://www.amazon.science/research-awards/recipients/ping-wang” 
link-data=”{"cms.site.owner":{"_ref":"0000016e-17e7-d263-a5fe-fff724f30000","_type":"ae3387cc-b875-31b7-b82d-63fd8d758c20"},"cms.content.publishDate":1748966308787,"cms.content.publishUser":{"_ref":"0000017f-b709-d2ad-a97f-f7fd25e30000","_type":"6aa69ae1-35be-30dc-87e9-410da9e1cdcc"},"cms.content.updateDate":1748966308787,"cms.content.updateUser":{"_ref":"0000017f-b709-d2ad-a97f-f7fd25e30000","_type":"6aa69ae1-35be-30dc-87e9-410da9e1cdcc"},"rekognitionVideo.timeFrameMetadata":[],"link":{"rekognitionVideo.timeFrameMetadata":[],"attributes":[],"item":{"_ref":"00000196-f46b-d69c-af96-fc6f73960000","_type":"07a8c4fb-2e5e-394d-8c44-6bb1ed9f87f6"},"_id":"00000197-3683-da03-a5d7-ffcfd5430000","_type":"c3f0009d-3dd9-3762-acac-88c3a292c6b2"},"linkText":"Ping Wang","theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.enhancementAlignment":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.overlayText":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.enhancementAlignment":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.overlayText":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs._preset":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs._preset":null,"_id":"00000197-3683-da03-a5d7-ffcfd53a0000","_type":"809caec9-30e2-3666-8b71-b32ddbffc288"}”>Ping Wang</a></td><td colspan=”1″ rowspan=”1″ class=”xl65″ width=”175″ style=”border-top:none;width:131pt”>Stevens Institute of Technology</td><td colspan=”1″ rowspan=”1″ class=”xl65″ width=”285″ style=”border-top:none;border-left:none;width:214pt”>Leveraging Large Language Models for Reasoning Augmented Searching on Domain-specific NoSQL Database</td></tr><tr><td colspan=”1″ rowspan=”1″><a href=”https://www.amazon.science/research-awards/recipients/john-wawrzynek” data-cms-id=”00000196-f475-dd94-ad97-f4ff402b0000″ data-cms-href=”https://www.amazon.science/research-awards/recipients/john-wawrzynek” link-data=”{"cms.site.owner":{"_ref":"0000016e-17e7-d263-a5fe-fff724f30000","_type":"ae3387cc-b875-31b7-b82d-63fd8d758c20"},"cms.content.publishDate":1748966318795,"cms.content.publishUser":{"_ref":"0000017f-b709-d2ad-a97f-f7fd25e30000","_type":"6aa69ae1-35be-30dc-87e9-410da9e1cdcc"},"cms.content.updateDate":1748966318795,"cms.content.updateUser":{"_ref":"0000017f-b709-d2ad-a97f-f7fd25e30000","_type":"6aa69ae1-35be-30dc-87e9-410da9e1cdcc"},"rekognitionVideo.timeFrameMetadata":[],"link":{"rekognitionVideo.timeFrameMetadata":[],"attributes":[],"item":{"_ref":"00000196-f475-dd94-ad97-f4ff402b0000","_type":"07a8c4fb-2e5e-394d-8c44-6bb1ed9f87f6"},"_id":"00000197-3684-d23f-a3ff-f6af03750000","_type":"c3f0009d-3dd9-3762-acac-88c3a292c6b2"},"linkText":"John Wawrzynek","theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.enhancementAlignment":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.overlayText":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.enhancementAlignment":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.overlayText":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs._preset":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs._preset":null,"_id":"00000197-3684-d23f-a3ff-f6af036f0000","_type":"809caec9-30e2-3666-8b71-b32ddbffc288"}”>John Wawrzynek</a></td><td 
colspan=”1″ rowspan=”1″ class=”xl65″ width=”175″ style=”border-top:none;width:131pt”>University of California, Berkeley</td><td colspan=”1″ rowspan=”1″ class=”xl65″ width=”285″ style=”border-top:none;border-left:none;width:214pt”>GPU-Accelerated High-Throughput SAT Sampling</td></tr></tbody>

AWS AI

Recipient | University | Research title
Panagiotis Adamopoulos | Emory University | Generative AI solutions for The Spillover Effect of Fraudulent Reviews on Product Recommendations
Vikram Adve | University of Illinois at Urbana-Champaign | Fellini: Differentiable ML Compiler for Full-Graph Optimization for LLM Models
Frances Arnold | California Institute of Technology | Closed-loop Generative Machine Learning for De Novo Enzyme Discovery and Optimization
Yonatan Bisk | Carnegie Mellon University | Useful, Safe, and Robust Multiturn Interactions with LLMs
Shiyu Chang | University of California, Santa Barbara | Cut the Crap: Advancing the Efficient Communication of Multi-Agent Systems via Spatial-Temporal Topology Design and KV Cache Sharing
Yuxin Chen | University of Pennsylvania | Provable Acceleration of Diffusion Models for Modern Generative AI
Tianlong Chen | University of North Carolina at Chapel Hill | Cut the Crap: Advancing the Efficient Communication of Multi-Agent Systems via Spatial-Temporal Topology Design and KV Cache Sharing
Mingyu Ding | University of North Carolina at Chapel Hill | Aligning Long Videos and Language as Long-Horizon World Models
Nikhil Garg | Cornell University | Market Design for Responsible Multi-agent LLMs
Jessica Hullman | Northwestern University | Human-Aligned Uncertainty Quantification in High Dimensions
Christopher Jermaine | Rice University | Fast, Trusted AI Using the EINSUMMABLE Compiler
Yunzhu Li | Columbia University | Physics-Informed Foundation Models Through Embodied Interactions
Pattie Maes | Massachusetts Institute of Technology | Understanding How LLM Agents Deviate from Human Choices
Sasa Misailovic | University of Illinois at Urbana-Champaign | Fellini: Differentiable ML Compiler for Full-Graph Optimization for LLM Models
Kristina Monakhova | Cornell University | Trustworthy extreme imaging for science using interpretable uncertainty quantification
Todd Mowry | Carnegie Mellon University | Efficient LLM Serving on Trainium via Kernel Generation
Min-hwan Oh | Seoul National University | Mutually Beneficial Interplay Between Selection Fairness and Context Diversity in Contextual Bandits
Patrick Rebeschini | University of Oxford | Optimal Regularization for LLM Alignment
link-data=”{"cms.site.owner":{"_ref":"0000016e-17e7-d263-a5fe-fff724f30000","_type":"ae3387cc-b875-31b7-b82d-63fd8d758c20"},"cms.content.publishDate":1748967379029,"cms.content.publishUser":{"_ref":"0000017f-b709-d2ad-a97f-f7fd25e30000","_type":"6aa69ae1-35be-30dc-87e9-410da9e1cdcc"},"cms.content.updateDate":1748967379029,"cms.content.updateUser":{"_ref":"0000017f-b709-d2ad-a97f-f7fd25e30000","_type":"6aa69ae1-35be-30dc-87e9-410da9e1cdcc"},"rekognitionVideo.timeFrameMetadata":[],"link":{"rekognitionVideo.timeFrameMetadata":[],"attributes":[],"item":{"_ref":"00000196-f3a0-dd94-ad97-f7bf5d880000","_type":"07a8c4fb-2e5e-394d-8c44-6bb1ed9f87f6"},"_id":"00000197-3694-dd34-a3ff-7fd72fbf0000","_type":"c3f0009d-3dd9-3762-acac-88c3a292c6b2"},"linkText":"Jose Renau","theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.enhancementAlignment":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.overlayText":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.enhancementAlignment":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.overlayText":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs._preset":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs._preset":null,"_id":"00000197-3694-dd34-a3ff-7fd72fb80000","_type":"809caec9-30e2-3666-8b71-b32ddbffc288"}”>Jose Renau</a></td><td colspan=”1″ rowspan=”1″ class=”xl65″ width=”162″ style=”border-top:none;width:121pt”>University of California, Santa Cruz</td><td colspan=”1″ rowspan=”1″ class=”xl65″ width=”174″ style=”border-top:none;border-left:none;width:130pt”>Verification Constrained Hardware Optimization using Intelligent Design Agentic Programming</td></tr><tr><td colspan=”1″ rowspan=”1″><a href=”https://www.amazon.science/research-awards/recipients/vilma-todri” data-cms-id=”00000196-f3c2-dd94-ad97-f7dffd220000″ data-cms-href=”https://www.amazon.science/research-awards/recipients/vilma-todri” link-data=”{"cms.site.owner":{"_ref":"0000016e-17e7-d263-a5fe-fff724f30000","_type":"ae3387cc-b875-31b7-b82d-63fd8d758c20"},"cms.content.publishDate":1748967390803,"cms.content.publishUser":{"_ref":"0000017f-b709-d2ad-a97f-f7fd25e30000","_type":"6aa69ae1-35be-30dc-87e9-410da9e1cdcc"},"cms.content.updateDate":1748967390803,"cms.content.updateUser":{"_ref":"0000017f-b709-d2ad-a97f-f7fd25e30000","_type":"6aa69ae1-35be-30dc-87e9-410da9e1cdcc"},"rekognitionVideo.timeFrameMetadata":[],"link":{"rekognitionVideo.timeFrameMetadata":[],"attributes":[],"item":{"_ref":"00000196-f3c2-dd94-ad97-f7dffd220000","_type":"07a8c4fb-2e5e-394d-8c44-6bb1ed9f87f6"},"_id":"00000197-3694-d23f-a3ff-f6bf5c2c0000","_type":"c3f0009d-3dd9-3762-acac-88c3a292c6b2"},"linkText":"Vilma Todri","theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.enhancementAlignment":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.overlayText":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.enhancementAlignment":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.overlayText":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs._preset":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs._preset":null,"_id":"00000197-3694-d23f-a3ff-f6bf5c260000","_type":"809caec9-30e2-3666-8b71-b32ddbffc288"}”>Vilma Todri</a></td><td colspan=”1″ 
rowspan=”1″ class=”xl65″ width=”162″ style=”border-top:none;width:121pt”>Emory University</td><td colspan=”1″ rowspan=”1″ class=”xl65″ width=”174″ style=”border-top:none;border-left:none;width:130pt”>Generative AI solutions for The Spillover Effect of Fraudulent Reviews on Product Recommendations</td></tr><tr><td colspan=”1″ rowspan=”1″><a href=”https://www.amazon.science/research-awards/recipients/aravindan-vijayaraghavan” data-cms-id=”00000196-f3a7-d69c-af96-ffa727d50000″ data-cms-href=”https://www.amazon.science/research-awards/recipients/aravindan-vijayaraghavan” link-data=”{"cms.site.owner":{"_ref":"0000016e-17e7-d263-a5fe-fff724f30000","_type":"ae3387cc-b875-31b7-b82d-63fd8d758c20"},"cms.content.publishDate":1748967402051,"cms.content.publishUser":{"_ref":"0000017f-b709-d2ad-a97f-f7fd25e30000","_type":"6aa69ae1-35be-30dc-87e9-410da9e1cdcc"},"cms.content.updateDate":1748967402051,"cms.content.updateUser":{"_ref":"0000017f-b709-d2ad-a97f-f7fd25e30000","_type":"6aa69ae1-35be-30dc-87e9-410da9e1cdcc"},"rekognitionVideo.timeFrameMetadata":[],"link":{"rekognitionVideo.timeFrameMetadata":[],"attributes":[],"item":{"_ref":"00000196-f3a7-d69c-af96-ffa727d50000","_type":"07a8c4fb-2e5e-394d-8c44-6bb1ed9f87f6"},"_id":"00000197-3694-d23f-a3ff-f6bf8aab0000","_type":"c3f0009d-3dd9-3762-acac-88c3a292c6b2"},"linkText":"Aravindan Vijayaraghavan","theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.enhancementAlignment":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.overlayText":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.enhancementAlignment":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.overlayText":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs._preset":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs._preset":null,"_id":"00000197-3694-d23f-a3ff-f6bf8aa50000","_type":"809caec9-30e2-3666-8b71-b32ddbffc288"}”>Aravindan Vijayaraghavan</a></td><td colspan=”1″ rowspan=”1″ class=”xl65″ width=”162″ style=”border-top:none;width:121pt”>Northwestern University</td><td colspan=”1″ rowspan=”1″ class=”xl65″ width=”174″ style=”border-top:none;border-left:none;width:130pt”>Human-Aligned Uncertainty Quantification in High Dimensions</td></tr><tr><td colspan=”1″ rowspan=”1″><a href=”https://www.amazon.science/research-awards/recipients/wei-yang” data-cms-id=”00000196-efd1-d41c-a3df-efdd570c0000″ data-cms-href=”https://www.amazon.science/research-awards/recipients/wei-yang” link-data=”{"cms.site.owner":{"_ref":"0000016e-17e7-d263-a5fe-fff724f30000","_type":"ae3387cc-b875-31b7-b82d-63fd8d758c20"},"cms.content.publishDate":1748967414018,"cms.content.publishUser":{"_ref":"0000017f-b709-d2ad-a97f-f7fd25e30000","_type":"6aa69ae1-35be-30dc-87e9-410da9e1cdcc"},"cms.content.updateDate":1748967414018,"cms.content.updateUser":{"_ref":"0000017f-b709-d2ad-a97f-f7fd25e30000","_type":"6aa69ae1-35be-30dc-87e9-410da9e1cdcc"},"rekognitionVideo.timeFrameMetadata":[],"link":{"rekognitionVideo.timeFrameMetadata":[],"attributes":[],"item":{"_ref":"00000196-efd1-d41c-a3df-efdd570c0000","_type":"07a8c4fb-2e5e-394d-8c44-6bb1ed9f87f6"},"_id":"00000197-3694-d23f-a3ff-f6bfb5480000","_type":"c3f0009d-3dd9-3762-acac-88c3a292c6b2"},"linkText":"Wei 
Yang","theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.enhancementAlignment":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.overlayText":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.enhancementAlignment":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.overlayText":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs._preset":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs._preset":null,"_id":"00000197-3694-d23f-a3ff-f6bfb5420000","_type":"809caec9-30e2-3666-8b71-b32ddbffc288"}”>Wei Yang</a></td><td colspan=”1″ rowspan=”1″ class=”xl65″ width=”162″ style=”border-top:none;width:121pt”>University of Texas at Dallas</td><td colspan=”1″ rowspan=”1″ class=”xl65″ width=”174″ style=”border-top:none;border-left:none;width:130pt”>Optimizing RISC-V Compilers with RISC-LLM and Syntax Parsing</td></tr><tr><td colspan=”1″ rowspan=”1″><a href=”https://www.amazon.science/research-awards/recipients/huaxiu-yao” data-cms-id=”00000196-eff5-d41c-a3df-effdd5fe0000″ data-cms-href=”https://www.amazon.science/research-awards/recipients/huaxiu-yao” link-data=”{"cms.site.owner":{"_ref":"0000016e-17e7-d263-a5fe-fff724f30000","_type":"ae3387cc-b875-31b7-b82d-63fd8d758c20"},"cms.content.publishDate":1748967432850,"cms.content.publishUser":{"_ref":"0000017f-b709-d2ad-a97f-f7fd25e30000","_type":"6aa69ae1-35be-30dc-87e9-410da9e1cdcc"},"cms.content.updateDate":1748967432850,"cms.content.updateUser":{"_ref":"0000017f-b709-d2ad-a97f-f7fd25e30000","_type":"6aa69ae1-35be-30dc-87e9-410da9e1cdcc"},"rekognitionVideo.timeFrameMetadata":[],"link":{"rekognitionVideo.timeFrameMetadata":[],"attributes":[],"item":{"_ref":"00000196-eff5-d41c-a3df-effdd5fe0000","_type":"07a8c4fb-2e5e-394d-8c44-6bb1ed9f87f6"},"_id":"00000197-3695-dd34-a3ff-7fd703150000","_type":"c3f0009d-3dd9-3762-acac-88c3a292c6b2"},"linkText":"Huaxiu Yao","theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.enhancementAlignment":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.overlayText":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.enhancementAlignment":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.overlayText":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs._preset":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs._preset":null,"_id":"00000197-3695-dd34-a3ff-7fd7030b0000","_type":"809caec9-30e2-3666-8b71-b32ddbffc288"}”>Huaxiu Yao</a></td><td colspan=”1″ rowspan=”1″ class=”xl65″ width=”162″ style=”border-top:none;width:121pt”>University of North Carolina at Chapel Hill</td><td colspan=”1″ rowspan=”1″ class=”xl65″ width=”174″ style=”border-top:none;border-left:none;width:130pt”>Aligning Long Videos and Language as Long-Horizon World Models</td></tr><tr><td colspan=”1″ rowspan=”1″><a href=”https://www.amazon.science/research-awards/recipients/amy-zhang” data-cms-id=”00000196-f3c4-dd94-ad97-f7df9fc20000″ data-cms-href=”https://www.amazon.science/research-awards/recipients/amy-zhang” 
link-data=”{"cms.site.owner":{"_ref":"0000016e-17e7-d263-a5fe-fff724f30000","_type":"ae3387cc-b875-31b7-b82d-63fd8d758c20"},"cms.content.publishDate":1748967448147,"cms.content.publishUser":{"_ref":"0000017f-b709-d2ad-a97f-f7fd25e30000","_type":"6aa69ae1-35be-30dc-87e9-410da9e1cdcc"},"cms.content.updateDate":1748967448147,"cms.content.updateUser":{"_ref":"0000017f-b709-d2ad-a97f-f7fd25e30000","_type":"6aa69ae1-35be-30dc-87e9-410da9e1cdcc"},"rekognitionVideo.timeFrameMetadata":[],"link":{"rekognitionVideo.timeFrameMetadata":[],"attributes":[],"item":{"_ref":"00000196-f3c4-dd94-ad97-f7df9fc20000","_type":"07a8c4fb-2e5e-394d-8c44-6bb1ed9f87f6"},"_id":"00000197-3695-d23f-a3ff-f6bf2eb10000","_type":"c3f0009d-3dd9-3762-acac-88c3a292c6b2"},"linkText":"Amy Zhang","theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.enhancementAlignment":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.overlayText":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.enhancementAlignment":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.overlayText":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs._preset":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs._preset":null,"_id":"00000197-3695-d23f-a3ff-f6bf2eab0000","_type":"809caec9-30e2-3666-8b71-b32ddbffc288"}”>Amy Zhang</a></td><td colspan=”1″ rowspan=”1″ class=”xl65″ width=”162″ style=”border-top:none;width:121pt”>University of Washington</td><td colspan=”1″ rowspan=”1″ class=”xl65″ width=”174″ style=”border-top:none;border-left:none;width:130pt”>Tools for Governing AI Agent Autonomy</td></tr><tr><td colspan=”1″ rowspan=”1″><a href=”https://www.amazon.science/research-awards/recipients/ruqi-zhang” data-cms-id=”00000196-f3b3-dd94-ad97-f7bf55460000″ data-cms-href=”https://www.amazon.science/research-awards/recipients/ruqi-zhang” link-data=”{"cms.site.owner":{"_ref":"0000016e-17e7-d263-a5fe-fff724f30000","_type":"ae3387cc-b875-31b7-b82d-63fd8d758c20"},"cms.content.publishDate":1748967462196,"cms.content.publishUser":{"_ref":"0000017f-b709-d2ad-a97f-f7fd25e30000","_type":"6aa69ae1-35be-30dc-87e9-410da9e1cdcc"},"cms.content.updateDate":1748967462196,"cms.content.updateUser":{"_ref":"0000017f-b709-d2ad-a97f-f7fd25e30000","_type":"6aa69ae1-35be-30dc-87e9-410da9e1cdcc"},"rekognitionVideo.timeFrameMetadata":[],"link":{"rekognitionVideo.timeFrameMetadata":[],"attributes":[],"item":{"_ref":"00000196-f3b3-dd94-ad97-f7bf55460000","_type":"07a8c4fb-2e5e-394d-8c44-6bb1ed9f87f6"},"_id":"00000197-3695-dd34-a3ff-7fd769aa0000","_type":"c3f0009d-3dd9-3762-acac-88c3a292c6b2"},"linkText":"Ruqi Zhang","theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.enhancementAlignment":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.overlayText":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.enhancementAlignment":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.overlayText":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs._preset":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs._preset":null,"_id":"00000197-3695-dd34-a3ff-7fd769a50000","_type":"809caec9-30e2-3666-8b71-b32ddbffc288"}”>Ruqi Zhang</a></td><td colspan=”1″ rowspan=”1″ class=”xl65″ width=”162″ 
style=”border-top:none;width:121pt”>Purdue University</td><td colspan=”1″ rowspan=”1″ class=”xl65″ width=”174″ style=”border-top:none;border-left:none;width:130pt”>Efficient Test-time Alignment for Large Language Models and Large Multimodal Models</td></tr><tr><td colspan=”1″ rowspan=”1″><a href=”https://www.amazon.science/research-awards/recipients/zheng-zhang” data-cms-id=”00000196-efcf-d41c-a3df-efcfa8a40000″ data-cms-href=”https://www.amazon.science/research-awards/recipients/zheng-zhang” link-data=”{"cms.site.owner":{"_ref":"0000016e-17e7-d263-a5fe-fff724f30000","_type":"ae3387cc-b875-31b7-b82d-63fd8d758c20"},"cms.content.publishDate":1748967475461,"cms.content.publishUser":{"_ref":"0000017f-b709-d2ad-a97f-f7fd25e30000","_type":"6aa69ae1-35be-30dc-87e9-410da9e1cdcc"},"cms.content.updateDate":1748967475461,"cms.content.updateUser":{"_ref":"0000017f-b709-d2ad-a97f-f7fd25e30000","_type":"6aa69ae1-35be-30dc-87e9-410da9e1cdcc"},"rekognitionVideo.timeFrameMetadata":[],"link":{"rekognitionVideo.timeFrameMetadata":[],"attributes":[],"item":{"_ref":"00000196-efcf-d41c-a3df-efcfa8a40000","_type":"07a8c4fb-2e5e-394d-8c44-6bb1ed9f87f6"},"_id":"00000197-3695-d23f-a3ff-f6bfa1a80000","_type":"c3f0009d-3dd9-3762-acac-88c3a292c6b2"},"linkText":"Zheng Zhang","theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.enhancementAlignment":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs.overlayText":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.enhancementAlignment":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs.overlayText":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.hbs._preset":null,"theme.0000016e-17e8-d263-a5fe-fff8347d0000.:core:enhancement:Enhancement.amp.hbs._preset":null,"_id":"00000197-3695-d23f-a3ff-f6bfa19f0000","_type":"809caec9-30e2-3666-8b71-b32ddbffc288"}”>Zheng Zhang</a></td><td colspan=”1″ rowspan=”1″>Rutgers University-New Brunswick</td><td colspan=”1″ rowspan=”1″>AlphaQC: An AI-powered Quantum Circuit Optimizer and Denoiser</td></tr></tbody>

AWS Cryptography

Recipient | University | Research title
Alexandra Boldyreva | Georgia Institute of Technology | Quantifying Information Leakage in Searchable Encryption Protocols
Maria Eichlseder | Graz University of Technology, Austria | SALAD Systematic Analysis of Lightweight Ascon-based Designs
Venkatesan Guruswami | University of California, Berkeley | Obfuscation, Proof Systems, and Secure Computation: A Research Program on Cryptography at the Simons Institute for the Theory of Computing
Joseph Jaeger | Georgia Institute of Technology | Analyzing Chat Encryption for Group Messaging
Aayush Jain | Carnegie Mellon University | Large Scale Multiparty Silent Preprocessing for MPC from LPN
Huijia Lin | University of Washington | Large Scale Multiparty Silent Preprocessing for MPC from LPN
Hamed Nemati | KTH Royal Institute of Technology | Trustworthy Automatic Verification of Side-Channel Countermeasures for Binary Cryptographic Programs using the HoIBA library
Karl Palmskog | KTH Royal Institute of Technology | Trustworthy Automatic Verification of Side-Channel Countermeasures for Binary Cryptographic Programs using the HoIBA library
Chris Piekert | University of Michigan, Ann Arbor | Practical Third-Generation FHE and Bootstrapping
Dimitrios Skarlatos | Carnegie Mellon University | Scale-Out FHE LLMs on GPUs
Vinod Vaikuntanathan | Massachusetts Institute of Technology | Can Quantum Computers (Really) Factor?
Daniel Wichs | Northeastern University | Obfuscation, Proof Systems, and Secure Computation: A Research Program on Cryptography at the Simons Institute for the Theory of Computing
David Wu | University of Texas at Austin | Fast Private Information Retrieval and More using Homomorphic Encryption

Sustainability

Recipient | University | Research title
Meeyoung Cha | Max Planck Institute | Forest-Blossom (Flossom): A New Framework for Sustaining Forest Biodiversity Through Outcome-Driven Remote Sensing Monitoring
Jingrui He | University of Illinois at Urbana-Champaign | Foundation Model Enabled Earth's Ecosystem Monitoring
Pedro Lopes | University of Chicago | AI-powered Tools that Enable Engineers to Make & Re-make Sustainable Hardware
Cheng Yaw Low | Max Planck Institute | Forest-Blossom (Flossom): A New Framework for Sustaining Forest Biodiversity Through Outcome-Driven Remote Sensing Monitoring

Tags: Generative AI

Read More

Unlocking the power of Model Context Protocol (MCP) on AWS

Unlocking the power of Model Context Protocol (MCP) on AWS

We’ve witnessed remarkable advances in model capabilities as generative AI companies have invested in developing their offerings. Language models such as Anthropic’s Claude Opus 4, Anthropic’s Claude Sonnet 4, and Amazon Nova, all available through Amazon Bedrock, can reason, write, and generate responses with increasing sophistication. But even as these models grow more powerful, they can only work with the information available to them.

No matter how impressive a model might be, it’s confined to the data it was trained on or what’s manually provided in its context window. It’s like having the world’s best analyst locked in a room with incomplete files—brilliant, but isolated from your organization’s most current and relevant information.

This isolation creates three critical challenges for enterprises using generative AI:

  1. Information silos trap valuable data behind custom APIs and proprietary interfaces
  2. Integration complexity requires building and maintaining bespoke connectors and glue code for every data source or tool provided to the language model
  3. Scalability bottlenecks appear as organizations attempt to connect more models to more systems and tools

Sound familiar? If you’re an AI-focused developer, technical decision-maker, or solution architect working with Amazon Web Services (AWS) and language models, you’ve likely encountered these obstacles firsthand. Let’s explore how the Model Context Protocol (MCP) offers a path forward.

What is the MCP?

The MCP is an open standard that creates a universal language for AI systems to communicate with external data sources, tools, and services. Conceptually, MCP functions as a universal translator, enabling seamless dialogue between language models and the diverse systems where your valuable information resides.

Developed by Anthropic and released as an open source project, MCP addresses a fundamental challenge: how to provide AI models with consistent, secure access to the information they need, when they need it, regardless of where that information lives.

MCP deployment diagram showing client interaction with local and internet-based MCP servers

At its core, MCP implements a client-server architecture:

  • MCP clients are AI applications like Anthropic’s Claude Desktop or custom solutions built on Amazon Bedrock that need access to external data
  • MCP servers provide standardized access to specific data sources, whether that’s a GitHub repository, Slack workspace, or AWS service
  • Communication flow between clients and servers follows a well-defined protocol that can run locally or remotely

This architecture supports three essential primitives that form the foundation of MCP:

  1. Tools – Functions that models can call to retrieve information or perform actions
  2. Resources – Data that can be included in the model’s context such as database records, images, or file contents
  3. Prompts – Templates that guide how models interact with specific tools or resources

What makes MCP especially powerful is its ability to work across both local and remote implementations. You can run MCP servers directly on your development machine for testing or deploy them as distributed services across your AWS infrastructure for enterprise-scale applications.
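
To make these primitives concrete, the following is a minimal sketch of an MCP server written with the Python MCP SDK's FastMCP class (the same decorator pattern appears in the knowledge base server later in this post). The order-tracking tool, resource, and prompt are hypothetical examples invented for illustration, not part of any published server.

# Minimal MCP server sketch illustrating the three primitives.
# Assumes the Python MCP SDK is installed; all names below are illustrative.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP('demo-server')

@mcp.tool()
async def get_order_status(order_id: str) -> str:
    """Tool: a function the model can call to retrieve information."""
    # A real server would query an order system; this stub returns a fixed answer.
    return f'Order {order_id} is in transit.'

@mcp.resource('resource://orders/recent')
async def recent_orders() -> str:
    """Resource: data that can be included in the model's context."""
    return '[{"order_id": "1001", "status": "shipped"}]'

@mcp.prompt()
def summarize_order(order_id: str) -> str:
    """Prompt: a template that guides how the model uses the tool above."""
    return f'Check the status of order {order_id} and summarize it for the customer.'

if __name__ == '__main__':
    # Serve over stdio so a local MCP client can connect during development.
    mcp.run()

Running this script locally is enough for a client to discover and call get_order_status, which is exactly the local-first workflow described above.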

Solving the M×N integration problem

Before diving deeper into the AWS specific implementation details, it’s worth understanding the fundamental integration challenge MCP solves.

Imagine you’re building AI applications that need to access multiple data sources in your organization. Without a standardized protocol, you face what we call the “M×N problem”: for M different AI applications connecting to N different data sources, you need to build and maintain M×N custom integrations.

This creates an integration matrix that quickly becomes unmanageable as your organization adds more AI applications and data sources. Each new system requires multiple custom integrations, with development teams duplicating efforts across projects. MCP transforms this M×N problem into a simpler M+N one: you build M clients and N servers, requiring only M+N implementations. For example, three AI applications and three data sources need nine custom integrations without MCP, but only six MCP implementations. This reduction is shown in the following diagram.

Visualization showing how MCP reduces integration complexity from 9 to 6 implementations

This approach draws inspiration from other successful protocols that solved similar challenges:

  • APIs standardized how web applications interact with the backend
  • Language Server Protocol (LSP) standardizes how integrated development environments (IDEs) interact with language-specific tools for coding

In the same way that these protocols revolutionized their domains, MCP is poised to transform how AI applications interact with the diverse landscape of data sources in modern enterprises.

Why MCP matters for AWS users

For AWS customers, MCP represents a particularly compelling opportunity. AWS offers hundreds of services, each with its own APIs and data formats. By adopting MCP as a standardized protocol for AI interactions, you can:

  1. Streamline integration between Amazon Bedrock language models and AWS data services
  2. Use existing AWS security mechanisms such as AWS Identity and Access Management (IAM) for consistent access control
  3. Build composable, scalable AI solutions that align with AWS architectural best practices

MCP and the AWS service landscape

What makes MCP particularly powerful in the AWS context is how it can interface with the broader AWS service landscape. Imagine AI applications that can seamlessly access information from the many services and data sources that make up your AWS environment.

MCP servers act as consistent interfaces to these diverse data sources, providing language models with a unified access pattern regardless of the underlying AWS service architecture. This alleviates the need for custom integration code for each service and enables AI systems to work with your AWS resources in a way that respects your existing security boundaries and access controls.

In the remaining sections of this post, we explore how MCP works with AWS services, examine specific implementation examples, and provide guidance for technical decision-makers considering adopting MCP in their organizations.

How MCP works with AWS services, particularly Amazon Bedrock

Now that we’ve shown the fundamental value proposition of MCP, we dive into how it integrates with AWS services, with a special focus on Amazon Bedrock. This integration creates a powerful foundation for building context-aware AI applications that can securely access your organization’s data and tools.

Amazon Bedrock and language models

Amazon Bedrock represents the strategic commitment by AWS to make foundation models (FMs) accessible, secure, and enterprise-ready. It’s a fully managed service that provides a unified API across multiple leading language models, including:

  • Anthropic’s Claude
  • Meta’s Llama
  • Amazon Titan and Amazon Nova

What makes Amazon Bedrock particularly compelling for enterprise deployments is its integration with the broader AWS landscape. You can run FMs with the same security, compliance, and operational tools you already use for your AWS workloads. This includes IAM for access control and CloudWatch for monitoring.

At the heart of the versatility of Amazon Bedrock is the Converse API—the interface that enables multiturn conversations with language models. The Converse API includes built-in support for what AWS calls “tool use,” allowing models to:

  1. Recognize when they need information outside their training data
  2. Request that information from external systems using well-defined function calls
  3. Incorporate the returned data into their responses

This tool use capability in the Amazon Bedrock Converse API dovetails perfectly with MCP’s design, creating a natural integration point.
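
To make the integration point concrete, the following is a minimal sketch of a Converse API tool definition for the hypothetical query_sales_data tool used in the walkthrough below. The toolSpec structure follows the Amazon Bedrock Converse API format; the tool name, description, and schema are assumptions for illustration.

# Sketch of a Converse API tool definition for a hypothetical query_sales_data tool.
# In an MCP-backed application, entries like this are typically generated from the
# tool list each MCP server advertises rather than written by hand.
available_tools = [
    {
        'toolSpec': {
            'name': 'query_sales_data',
            'description': 'Retrieve sales figures for a given quarter and region.',
            'inputSchema': {
                'json': {
                    'type': 'object',
                    'properties': {
                        'quarter': {'type': 'string', 'description': 'Fiscal quarter, for example Q1'},
                        'region': {'type': 'string', 'description': 'Sales region name'},
                    },
                    'required': ['quarter', 'region'],
                }
            },
        }
    }
]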

MCP and Amazon Bedrock integration architecture

Integrating MCP with Amazon Bedrock involves creating a bridge between the model’s ability to request information (through the Converse API) and MCP’s standardized protocol for accessing external systems.
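
On the MCP side of that bridge, the application holds a client session to one or more MCP servers. The following is a minimal sketch using the Python MCP SDK over the stdio transport; the server command and script name are placeholders for whichever MCP server you run.

# Minimal MCP client session sketch (stdio transport, Python MCP SDK).
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Placeholder command: launch your MCP server as a local subprocess.
    server_params = StdioServerParameters(command='python', args=['sales_mcp_server.py'])
    async with stdio_client(server_params) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            # Discover the tools this server exposes; these can be translated into
            # Converse API toolSpec entries like available_tools shown earlier.
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, '-', tool.description)

asyncio.run(main())

The session object created here is the same session referenced in the tool-call snippet later in the walkthrough.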

Integration flow walkthrough

To help you understand how MCP and Amazon Bedrock work together in practice, we walk through a typical interaction flow, step-by-step:

  1. The user initiates a query through your application interface:

"What were our Q1 sales figures for the Northwest region?"

  2. Your application forwards the query to Amazon Bedrock through the Converse API:
   import boto3  # AWS SDK for Python, used to create the Bedrock runtime client
   
   # Initialize the Bedrock runtime client with your AWS credentials
   bedrock = boto3.client(service_name='bedrock-runtime', region_name='us-east-1')
   
   # Define the query from the user
   user_query = "What were our Q1 sales figures for the Northwest region?"
   
   # available_tools contains tool definitions that match MCP server capabilities
   # (see the toolConfig sketch earlier in this post); these are exposed to the
   # model through the Converse API
   
   # Call the Converse API with the user's query and available tools
   response = bedrock.converse(
       modelId="us.anthropic.claude-3-7-sonnet-20250219-v1:0",  # Specify which language model to use
       messages=[{"role": "user", "content": [{"text": user_query}]}],  # Format the user's message
       toolConfig={"tools": available_tools}  # Pass the tool definitions to the model
   )
  3. Amazon Bedrock processes the query and determines that it needs financial data that isn’t in its training data
  4. Amazon Bedrock returns a toolUse message, requesting access to a specific tool:
   {
     "role": "assistant",  // Indicates this message is from the model
     "content": [{
       "toolUse": {  // The model is requesting to use a tool
         "toolUseId": "tu_01234567",  // Unique identifier for this tool use request
         "name": "query_sales_data",  // Name of the tool the model wants to use
         "input": {  // Parameters for the tool call
           "quarter": "Q1",  // The model extracted this parameter from the user query
           "region": "Northwest"  // Another parameter extracted from the user query
         }
       }
     }]
   }
  5. Your MCP client application receives this toolUse message and translates it into an MCP protocol tool call (the orchestration sketch after this walkthrough shows this step in code)
  6. The MCP client routes the request to the appropriate MCP server (in this case, a server connected to your financial database)
  7. The MCP server executes the tool, retrieving the requested data from your systems:
   # Call the tool through the MCP protocol
   # session is the MCP client session established earlier
   result = await session.call_tool(
       "query_sales_data",  # The tool name from the toolUse message
       {
           "quarter": "Q1",  # Pass through the parameters from the toolUse message
           "region": "Northwest"
       }
   )
   # The MCP server handles authentication, data access, and result formatting
   # This abstracts away the complexity of accessing different data sources
  8. The tool results are returned through the MCP protocol to your client application
  9. Your application sends the results back to Amazon Bedrock as a toolResult message:
   {
     "role": "user",  // This is sent as if from the user, but contains tool results
     "content": [{
       "toolResult": {  // Indicates this is a result from a tool
         "toolUseId": "tu_01234567",  // Must match the ID from the original toolUse
         "content": [{
           "json": {  // Results are formatted as JSON
             "total_sales": 12450000,  // Numerical data accessible to the model
             "growth": 0.12,  // Percentage growth for analysis
             "top_products": ["Product A", "Product B", "Product C"]  // List data
           }
         }]
       }
     }]
   }
  10. Amazon Bedrock generates a final response incorporating the tool results:
“Based on the data I've retrieved, our Q1 sales figures for the Northwest region were $12.45 million, 
representing a 12% growth compared to the previous quarter. 
The top-performing products were Product A, Product B, and Product C.”
  11. Your application returns the final response to the user
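
Tying the steps above together, the following is a minimal sketch of the orchestration loop the application might run. It assumes the bedrock client, available_tools definitions, and MCP session from the earlier snippets, assumes the MCP tools return text content, and omits error handling and streaming.

# Orchestration loop sketch: send the query to the Converse API, satisfy any
# toolUse requests through the MCP session, and return the model's final answer.
async def answer_with_tools(user_query: str) -> str:
    messages = [{'role': 'user', 'content': [{'text': user_query}]}]
    while True:
        response = bedrock.converse(
            modelId='us.anthropic.claude-3-7-sonnet-20250219-v1:0',
            messages=messages,
            toolConfig={'tools': available_tools},
        )
        output_message = response['output']['message']
        messages.append(output_message)

        if response.get('stopReason') != 'tool_use':
            # No further tool calls: return the model's final text answer.
            return ''.join(block.get('text', '') for block in output_message['content'])

        # Satisfy every toolUse block in the model's message via the MCP server.
        tool_results = []
        for block in output_message['content']:
            if 'toolUse' not in block:
                continue
            tool_use = block['toolUse']
            result = await session.call_tool(tool_use['name'], tool_use['input'])
            tool_results.append({
                'toolResult': {
                    'toolUseId': tool_use['toolUseId'],
                    # Assumes the MCP tool returns text content blocks.
                    'content': [{'text': item.text} for item in result.content],
                }
            })
        # Send the tool results back to the model as the next user turn.
        messages.append({'role': 'user', 'content': tool_results})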

This entire process, illustrated in the following diagram, happens in seconds, giving users the impression of a seamless conversation with an AI that has direct access to their organization’s data. Behind the scenes, MCP is handling the complex work of securely routing requests to the right tools and data sources.

Streamlined sequence diagram showing core MCP message flow from user query to final response

In the next section, we explore a practical implementation example that shows how to connect an MCP server to Amazon Bedrock Knowledge Bases, providing a blueprint for your own implementations.

Practical implementation example: Amazon Bedrock Knowledge Bases integration

As discussed earlier, enterprise knowledge bases represent one of the most valuable applications of MCP on AWS. Now, we explore a concrete implementation of MCP that connects language models to Amazon Bedrock Knowledge Bases. The code for the MCP server can be found in the AWS Labs MCP repository on GitHub, and the code for the client in that repository’s samples directory. This example brings to life the “universal translator” concept we introduced earlier, demonstrating how MCP can transform the way AI systems interact with enterprise knowledge repositories.

Understanding the challenge

Enterprise knowledge bases contain vast repositories of information—from documentation and policies to technical guides and product specifications. Traditional search approaches are often inadequate when users ask natural language questions, failing to understand context or identify the most relevant content.

Amazon Bedrock Knowledge Bases provide vector search capabilities that improve upon traditional keyword search, but even this approach has limitations:

  1. Manual filter configuration requires predefined knowledge of metadata structures
  2. Query-result mismatch occurs when users don’t use the exact terminology in the knowledge base
  3. Relevance challenges arise when similar documents compete for attention
  4. Context switching between searching and reasoning disrupts user experience

The MCP server we explore addresses these challenges by creating an intelligent layer between language models and knowledge bases.

Architecture overview

At a high level, our MCP server for Amazon Bedrock Knowledge Bases follows a clean, well-organized architecture that builds upon the client-server pattern we outlined previously. The server exposes two key interfaces to language models:

  1. A knowledge bases resource that provides discovery capabilities for available knowledge bases
  2. A query tool that enables dynamic searching across these knowledge bases

Detailed MCP Bedrock architecture with intelligent query processing workflow and AWS service connections

Remember the M×N integration problem we discussed earlier? This implementation provides a tangible example of how MCP solves it – creating a standardized interface between a large language model and your Amazon Bedrock Knowledge Base repositories.

Knowledge base discovery resource

The server begins with a resource that enables language models to discover available knowledge bases:

@mcp.resource(uri='resource://knowledgebases', name='KnowledgeBases', mime_type='application/json')
async def knowledgebases_resource() -> str:
    """List all available Amazon Bedrock Knowledge Bases and their data sources.
 
    This resource returns a mapping of knowledge base IDs to their details, including:
    - name: The human-readable name of the knowledge base
    - data_sources: A list of data sources within the knowledge base, each with:
      - id: The unique identifier of the data source
      - name: The human-readable name of the data source
 
    ## Example response structure:
    ```json
    {
        "kb-12345": {
            "name": "Customer Support KB",
            "data_sources": [
                {"id": "ds-abc123", "name": "Technical Documentation"},
                {"id": "ds-def456", "name": "FAQs"}
            ]
        },
        "kb-67890": {
            "name": "Product Information KB",
            "data_sources": [
                {"id": "ds-ghi789", "name": "Product Specifications"}
            ]
        }
    }
    ```
 
    ## How to use this information:
    1. Extract the knowledge base IDs (like "kb-12345") for use with the QueryKnowledgeBases tool
    2. Note the data source IDs if you want to filter queries to specific data sources
    3. Use the names to determine which knowledge base and data source(s) are most relevant to the user's query
    """
    return json.dumps(await discover_knowledge_bases(kb_agent_mgmt_client, kb_inclusion_tag_key)) 

This resource serves as both documentation and a discovery mechanism that language models can use to identify available knowledge bases before querying them.
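
On the client side, this resource can be read before any query is issued. The following is a minimal sketch, assuming an initialized MCP ClientSession named session that is connected to this server (run inside an async function):

# Read the knowledge base discovery resource through an existing MCP session.
import json

from pydantic import AnyUrl

result = await session.read_resource(AnyUrl('resource://knowledgebases'))
# The resource arrives as text; parse the JSON mapping of knowledge base IDs to details.
knowledge_bases = json.loads(result.contents[0].text)
for kb_id, details in knowledge_bases.items():
    print(kb_id, '-', details['name'])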

Querying knowledge bases with the MCP tool

The core functionality of this MCP server resides in its QueryKnowledgeBases tool:

@mcp.tool(name='QueryKnowledgeBases')
async def query_knowledge_bases_tool(
    query: str = Field(
        ..., description='A natural language query to search the knowledge base with'
    ),
    knowledge_base_id: str = Field(
        ...,
        description='The knowledge base ID to query. It must be a valid ID from the resource://knowledgebases MCP resource',
    ),
    number_of_results: int = Field(
        10,
        description='The number of results to return. Use smaller values for focused results and larger values for broader coverage.',
    ),
    reranking: bool = Field(
        kb_reranking_enabled,
        description='Whether to rerank the results. Useful for improving relevance and sorting. Can be globally configured with BEDROCK_KB_RERANKING_ENABLED environment variable.',
    ),
    reranking_model_name: Literal['COHERE', 'AMAZON'] = Field(
        'AMAZON',
        description="The name of the reranking model to use. Options: 'COHERE', 'AMAZON'",
    ),
    data_source_ids: Optional[List[str]] = Field(
        None,
        description='The data source IDs to filter the knowledge base by. It must be a list of valid data source IDs from the resource://knowledgebases MCP resource',
    ),
) -> str:
    """Query an Amazon Bedrock Knowledge Base using natural language.
 
    ## Usage Requirements
    - You MUST first use the `resource://knowledgebases` resource to get valid knowledge base IDs
    - You can query different knowledge bases or make multiple queries to the same knowledge base
 
    ## Query Tips
    - Use clear, specific natural language queries for best results
    - You can use this tool MULTIPLE TIMES with different queries to gather comprehensive information
    - Break complex questions into multiple focused queries
    - Consider querying for factual information and explanations separately
     """
## Additional Implementation details …

What makes this tool powerful is its flexibility in querying knowledge bases with natural language. It supports several key features:

  1. Configurable result sizes – Adjust the number of results based on whether you need focused or comprehensive information
  2. Optional reranking – Improve relevance using language models (such as reranking models from Amazon or Cohere)
  3. Data source filtering – Target specific sections of the knowledge base when needed

Reranking is disabled by default in this implementation but can be quickly enabled through environment variables or direct parameter configuration.

Enhanced relevance with reranking

A notable feature of this implementation is the ability to rerank search results using language models available through Amazon Bedrock. This capability allows the system to rescore search results based on deeper semantic understanding:

import os

# Parse the reranking environment variable to determine the default behavior
kb_reranking_enabled_raw = os.getenv('BEDROCK_KB_RERANKING_ENABLED')
kb_reranking_enabled = False  # Default value is False (reranking off)
if kb_reranking_enabled_raw is not None:
    kb_reranking_enabled_raw = kb_reranking_enabled_raw.strip().lower()
    if kb_reranking_enabled_raw in ('true', '1', 'yes', 'on'):
        kb_reranking_enabled = True

Reranking is particularly valuable for queries where semantic similarity might not be enough to determine the most relevant content. For example, when answering a specific question, the most relevant document isn’t necessarily the one with the most keyword matches, but the one that directly addresses the question being asked.
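
For example, a client (or a model issuing a toolUse request) can invoke the tool with reranking turned on. The following is a minimal sketch using the MCP session from earlier; the knowledge base and data source IDs are hypothetical placeholders.

# Query a knowledge base through the QueryKnowledgeBases tool with reranking enabled.
result = await session.call_tool(
    'QueryKnowledgeBases',
    {
        'query': 'What is the quarterly IT security audit procedure?',
        'knowledge_base_id': 'kb-12345abcde',   # hypothetical knowledge base ID
        'number_of_results': 5,
        'reranking': True,
        'reranking_model_name': 'AMAZON',
        'data_source_ids': ['ds-abc123'],       # hypothetical data source ID
    },
)
print(result.content[0].text)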

Full interaction flow

This section walks through a complete interaction flow to show how all these components work together:

  1. The user asks a question to a language model such as Anthropic’s Claude through an application:
   "What's our quarterly IT security audit procedure?"
  2. The language model recognizes it needs to access the knowledge base and calls the MCP tool:
{
  "toolUse": {
    "name": "QueryKnowledgeBases",
    "input": {
      "query": "What's our quarterly IT security audit procedure?",
      "knowledge_base_id": "kb-12345abcde",
      "reranking": true
    }
  }
} 
  3. The MCP server processes the request by querying the knowledge base with the specified parameters
  4. The MCP server returns formatted results to the language model, including content, location, and relevance scores:
Applied metadata filters: department=IT, document_type=Procedure, category=Security Audit, frequency=Quarterly
 
   Results have been reranked to improve relevance.
 
   Result 1 [Score: 0.92]:
   Source: S3 - s3://company-docs/procedures/it/security/quarterly-audit-procedure-v2.pdf
   Content: The IT Security Quarterly Audit Procedure outlines the steps for conducting systematic evaluations of the organization's information security controls. This document provides guidance on scope, methodology, reporting requirements, and remediation tracking...
   Metadata:
     • department: IT
     • document_type: Procedure
     • category: Security Audit
     • frequency: Quarterly
     • last_updated: 2024-11-15
  5. The language model incorporates these results into its response to the user:
   Based on our IT Security Quarterly Audit Procedure, the process involves four main phases:
 
   1. Planning: Define scope, objectives, and schedule for the audit
   2. Execution: Conduct the technical assessments and control evaluations
   3. Reporting: Document findings, risks, and recommendations
   4. Remediation: Track and verify resolution of identified issues
 
   The procedure was last updated on November 15, 2024, and specifies that the Cybersecurity team leads the effort with support from IT Operations.

This interaction, illustrated in the following diagram, demonstrates the seamless fusion of language model capabilities with enterprise knowledge, enabled by the MCP protocol. The user doesn’t need to specify complex search parameters or know the structure of the knowledge base—the integration layer handles these details automatically.

End-to-end sequence diagram of IT security audit query processing through Bedrock Claude and MCP Server with metadata analysis

Looking ahead: The MCP journey continues

As we’ve explored throughout this post, the Model Context Protocol provides a powerful framework for connecting language models to your enterprise data and tools on AWS. But this is just the beginning of the journey.

The MCP landscape is rapidly evolving, with new capabilities and implementations emerging regularly. In future posts in this series, we’ll dive deeper into advanced MCP architectures and use cases, with a particular focus on remote MCP implementation.

The introduction of the new Streamable HTTP transport layer represents a significant advancement for MCP, enabling truly enterprise-scale deployments with features such as:

  • Stateless server options for simplified scaling
  • Session ID management for request routing
  • Robust authentication and authorization mechanisms for secure access control
  • Horizontal scaling across server nodes
  • Enhanced resilience and fault tolerance

These capabilities will be essential as organizations move from proof-of-concept implementations to production-grade MCP deployments that serve multiple teams and use cases.

We invite you to follow this blog post series as we continue to explore how MCP and AWS services can work together to create more powerful, context-aware AI applications for your organization.

Conclusion

As language models continue to transform how we interact with technology, the ability to connect these models to enterprise data and systems becomes increasingly critical. The Model Context Protocol (MCP) offers a standardized, secure, and scalable approach to integration.

Through MCP, AWS customers can:

  • Establish a standardized protocol for AI-data connections
  • Reduce development overhead and maintenance costs
  • Enforce consistent security and governance policies
  • Create more powerful, context-aware AI experiences

The Amazon Bedrock Knowledge Bases implementation we explored demonstrates how MCP can transform simple retrieval into intelligent discovery, adding value far beyond what either component could deliver independently.

Getting started

Ready to begin your MCP journey on AWS? Here are some resources to help you get started:

Learning resources:

Implementation steps:

  1. Identify a high-value use case where AI needs access to enterprise data
  2. Select the appropriate MCP servers for your data sources
  3. Set up a development environment with local MCP implementations
  4. Integrate with Amazon Bedrock using the patterns described in this post
  5. Deploy to production with appropriate security and scaling considerations

Remember that MCP offers a “start small, scale incrementally” approach. You can begin with a single server connecting to one data source, then expand your implementation as you validate the value and establish patterns for your organization.

We encourage you to try the MCP with AWS services today. Start with a simple implementation, perhaps connecting a language model to your documentation or code repositories, and experience firsthand the power of context-aware AI.

Share your experiences, challenges, and successes with the community. The open source nature of MCP means that your contributions—whether code, use cases, or feedback—can help shape the future of this important protocol.

In a world where AI capabilities are advancing rapidly, the difference between good and great implementations often comes down to context. With MCP and AWS, you have the tools to make sure your AI systems have the right context at the right time, unlocking their full potential for your organization.

This blog post is part of a series exploring the Model Context Protocol (MCP) on AWS. In our next installment, we’ll explore the world of agentic AI, demonstrating how to build autonomous agents using the open-source Strands Agents SDK with MCP to create intelligent systems that can reason, plan, and execute complex multi-step workflows. We’ll also cover advanced implementation patterns and remote MCP architectures, and look at additional use cases for MCP.


About the authors

Aditya Addepalli is a Delivery Consultant at AWS, where he works to lead, architect, and build applications directly with customers. With a strong passion for Applied AI, he builds bespoke solutions and contributes to the ecosystem while consistently keeping himself at the edge of technology. Outside of work, you can find him meeting new people, working out, playing video games and basketball, or feeding his curiosity through personal projects.

Elie Schoppik leads live education at Anthropic as their Head of Technical Training. He has spent over a decade in technical education, working with multiple coding schools and starting one of his own. With a background in consulting, education, and software engineering, Elie brings a practical approach to teaching Software Engineering and AI. He’s shared his insights at a variety of technical conferences as well as universities including MIT, Columbia, Wharton, and UC Berkeley.

Jawhny Cooke is a Senior Anthropic Specialist Solutions Architect for Generative AI at AWS. He specializes in integrating and deploying Anthropic models on AWS infrastructure. He partners with customers and AI providers to implement production-grade generative AI solutions through Amazon Bedrock, offering expert guidance on architecture design and system implementation to maximize the potential of these advanced models.

Kenton Blacutt is an AI Consultant within the GenAI Innovation Center. He works hands-on with customers helping them solve real-world business problems with cutting edge AWS technologies, especially Amazon Q and Bedrock. In his free time, he likes to travel, experiment with new AI techniques, and run an occasional marathon.

Mani Khanuja is a Principal Generative AI Specialist Solutions Architect, author of the book Applied Machine Learning and High-Performance Computing on AWS, and a member of the Board of Directors for the Women in Manufacturing Education Foundation. She leads machine learning projects in various domains such as computer vision, natural language processing, and generative AI. She speaks at internal and external conferences such as AWS re:Invent, Women in Manufacturing West, YouTube webinars, and GHC 23. In her free time, she likes to go for long runs along the beach.

Nicolai van der Smagt is a Senior Specialist Solutions Architect for Generative AI at AWS, focusing on third-party model integration and deployment. He collaborates with AWS’ biggest AI partners to bring their models to Amazon Bedrock, while helping customers architect and implement production-ready generative AI solutions with these models.

Read More