Introducing KBLaM: Bringing plug-and-play external knowledge to LLMs

KBLaM blog | A flowchart illustrating the process of handling a prompt using a language model. The process begins with documents being used to construct and summarize a knowledge base (KB) offline. The summarized KB is then encoded and fed into the main process. A prompt goes through a tokenizer, followed by rectangular attention, and then into the large language model (LLM). The LLM retrieves information from the encoded KB to generate an answer.

Large language models (LLMs) have demonstrated remarkable capabilities in reasoning, language understanding, and even creative tasks. Yet, a key challenge persists: how to efficiently integrate external knowledge.

Traditional methods such as fine-tuning and Retrieval-Augmented Generation (RAG) come with trade-offs—fine-tuning demands costly retraining, while RAG introduces separate retrieval modules that increase complexity and prevent seamless, end-to-end training. In-context learning, on the other hand, becomes increasingly inefficient as knowledge bases grow, facing quadratic computational scaling that hinders its ability to handle large repositories. A comparison of these approaches can be seen in Figure 1.

A new way to integrate knowledge

To address these challenges, we introduce the Knowledge Base-Augmented Language Model (KBLaM)—a novel approach that integrates structured knowledge bases into pre-trained LLMs. Instead of relying on external retrieval modules or costly fine-tuning, KBLaM encodes knowledge into continuous key-value vector pairs, efficiently embedding them within the model’s attention layers using a specialized rectangular attention mechanism, which implicitly performs retrieval in an integrated manner.

We use structured knowledge bases to represent the data, which lets us consolidate knowledge and take advantage of its structure. This design allows KBLaM to scale linearly with the size of the knowledge base while supporting dynamic updates without retraining, making it far more efficient than existing methods.

Scalable, efficient, and future-ready

At its core, KBLaM is designed to integrate structured knowledge into LLMs, making them more efficient and scalable. It achieves this by converting external knowledge bases—collections of facts structured as triples consisting of an entity, a property, and a value—into a format that LLMs can process naturally.  Such knowledge bases allow for consolidated, reliable sources of knowledge.

To create these knowledge bases, we first extract structured data in JSON format using small language models. We then apply Project Alexandria’s probabilistic clustering. Once we have this structured knowledge base, KBLaM follows a three-step pipeline:

  1. Knowledge Encoding: Each knowledge triple is mapped into a key-value vector pair using a pre-trained sentence encoder with lightweight linear adapters. The key vector, derived from the entity name and property, encodes “index information,” while the value vector captures the corresponding property value. This allows us to create continuous, learnable key-value representations.
  2. Integration with LLMs: These key-value pairs, or knowledge tokens, are injected into the model’s attention layers through a specialized rectangular attention structure. Unlike traditional transformer models—such as GPT-4, Phi, and Llama—which process all tokens equally and incur quadratic cost, rectangular attention lets the model attend over the knowledge with linear cost, as illustrated in Figure 2. Compared to standard attention mechanisms in generative language models, where each token attends to all preceding tokens, our approach introduces a more efficient structure. In this setup, language tokens (such as those from a user’s question) attend to all knowledge tokens. However, knowledge tokens do not attend to one another, nor do they attend back to the language tokens. This selective attention pattern significantly reduces computational cost while preserving the model’s ability to incorporate external knowledge effectively.

    This linear cost, which is crucial for the efficiency of KBLaM, effectively amounts to treating each fact independently—an assumption that holds for most facts. For example, the model’s name, KBLaM, and the fact that the research was conducted at Microsoft Research are very weakly correlated. This rectangular attention is implemented as an extension of standard attention; a minimal code sketch of this pattern is shown after Figure 2 below. During training, we keep the base model’s weights frozen, ensuring that when no knowledge tokens are provided, the model functions exactly as it did originally.

  3. Efficient Knowledge Retrieval: Through this rectangular attention, the model learns to dynamically retrieve relevant knowledge tokens during inference, eliminating the need for separate retrieval steps.
Figure 1: A diagram comparing KBLaM and existing approaches. With RAG, we take the user’s prompt, use it to retrieve relevant documents from an external corpus with a retriever module, and append a tokenized version of those documents to the context. This is relatively cheap but requires many components. In-context learning, on the other hand, simply puts the entire corpus into the context. This is simple, involving only one component, but expensive. Our method, KBLaM, builds a structured knowledge base from the documents in an offline process and includes the entire knowledge base in the context, using a novel variant of attention, rectangular attention, so that the cost is linear in the size of the knowledge base. The result is a system where retrieval requires only a single, trainable component that is also cheap.
Figure 1: KBLaM allows for attention over the entire knowledge base instead of having an external retriever.
Figure 2: A diagram illustrating rectangular attention. Unlike regular attention, the attention matrix is not square, as we remove the parts where the knowledge base would attend over itself. This allows for KBLaM to scale linearly with the number of items in its context.
Figure 2: By having the user’s question attend to the knowledge base, while treating facts in the knowledge base independently, KBLaM scales efficiently and linearly with the size of the knowledge base.
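
To make this mechanism concrete, the sketch below shows, in simplified form, how a knowledge triple could be turned into a key-value pair and how rectangular attention differs from standard self-attention. It is a minimal, single-head illustration under assumed shapes and interfaces (the `embed` callable, the separator format, and the adapter layout are placeholders), not the released KBLaM implementation.

```python
import torch
import torch.nn as nn

class KnowledgeEncoder(nn.Module):
    """Map an (entity, property, value) triple to one key-value pair.

    `embed` is assumed to be a frozen sentence encoder returning a (d_enc,)
    tensor; only the two linear adapters would be trained.
    """
    def __init__(self, embed, d_enc: int, d_model: int):
        super().__init__()
        self.embed = embed                              # frozen sentence encoder (assumed interface)
        self.key_adapter = nn.Linear(d_enc, d_model)    # encodes "index" information
        self.value_adapter = nn.Linear(d_enc, d_model)  # encodes the property value

    def forward(self, entity: str, prop: str, value: str):
        key = self.key_adapter(self.embed(f"{entity}; {prop}"))
        val = self.value_adapter(self.embed(value))
        return key, val

def rectangular_attention(q, k_lang, v_lang, k_kb, v_kb, causal_mask):
    """Single-head sketch: language queries attend to all knowledge tokens and to
    preceding language tokens; knowledge tokens are never used as queries, so the
    cost grows linearly with the number of facts.

    q, k_lang, v_lang: (n_lang, d); k_kb, v_kb: (n_kb, d);
    causal_mask: (n_lang, n_lang) additive mask of 0 / -inf values.
    """
    d = q.shape[-1]
    kb_scores = q @ k_kb.T / d**0.5                     # every prompt token sees every fact
    lang_scores = q @ k_lang.T / d**0.5 + causal_mask   # standard causal self-attention
    weights = torch.softmax(torch.cat([kb_scores, lang_scores], dim=-1), dim=-1)
    values = torch.cat([v_kb, v_lang], dim=0)           # (n_kb + n_lang, d)
    return weights @ values                             # (n_lang, d)
```

Because knowledge tokens never act as queries, the score matrix has shape (prompt tokens) × (knowledge tokens + prompt tokens) rather than being square, which is what yields linear scaling in the size of the knowledge base.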

Unlike RAG, which appends retrieved document chunks to prompts, KBLaM allows for direct integration of knowledge into the model. Compared to in-context learning,  KBLaM’s rectangular attention maintains a linear memory footprint, making it vastly more scalable for large knowledge bases. 

Its efficiency is a game-changer. While traditional in-context learning methods struggle with quadratic memory growth due to self-attention overhead, KBLaM’s linear overhead means we can store much more knowledge in the context. In practice, this means KBLaM can store and process over 10,000 knowledge triples—the equivalent of approximately 200,000 text tokens—on a single GPU, a feat that would be computationally prohibitive with conventional in-context learning. Results across a wide range of knowledge base sizes can be seen in Figure 3. Remarkably, it achieves this while extending a base model that has a context length of only 8K tokens. Additionally, KBLaM enables dynamic updates: modifying a single knowledge triple does not require retraining or re-computation of the entire knowledge base.
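
As a rough back-of-envelope illustration of that gap (the token counts below are assumptions based on the figures quoted above, not measured values):

```python
kb_triples = 10_000       # knowledge tokens in KBLaM: one key-value pair per triple
kb_text_tokens = 200_000  # the same facts written out as plain text for in-context learning
prompt_tokens = 200       # a typical user question

in_context_scores = (kb_text_tokens + prompt_tokens) ** 2    # square self-attention matrix
kblam_scores = prompt_tokens * (kb_triples + prompt_tokens)  # rectangular attention matrix

print(f"{in_context_scores:.1e} vs {kblam_scores:.1e}")      # ~4.0e10 vs ~2.0e6 attention scores
```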

Figure 3: Two graphs, showing time to first token, and memory usage for both KBLaM and RAG. KBLaM’s time to first token remains relatively constant across a large range of knowledge base sizes, with the time-to-first-token with 4096 triples in the context being lower than that of conventional RAG with 5 triples in the context. The memory usage is also much lower, with KBLaM with 512 triples having a similar memory usage to RAG at 5 triples.
Figure 3: KBLaM is much faster and uses much less memory than adding the equivalent number of triples in the context using conventional RAG-like approaches. In particular, time to first token with 4,096 triples in KBLaM’s context is lower than with just 5 triples in the context using a conventional approach.

Enhancing interpretability and reliability

Another major benefit of KBLaM is its interpretability. Unlike in-context learning, where knowledge injection is opaque, KBLaM’s attention weights provide clear insights into how the model utilizes knowledge tokens. Experiments show that KBLaM assigns high attention scores to relevant knowledge triples, effectively mimicking a soft retrieval process.

Furthermore, KBLaM enhances model reliability by learning through its training examples when not to answer a question if the necessary information is missing from the knowledge base. In particular, with knowledge bases larger than approximately 200 triples, we found that the model refuses to answer questions about information absent from the knowledge base more reliably than a model given the same information as text in context. This feature helps reduce hallucinations, a common problem in LLMs that rely on internal knowledge alone, making responses more accurate and trustworthy.

The future of knowledge-augmented AI

KBLaM represents a major step forward in integrating structured knowledge into LLMs. By offering a scalable, efficient, and interpretable alternative to existing techniques, it paves the way for AI systems that can stay up to date and provide reliable, knowledge-driven responses. In fields where accuracy and trust are critical—such as medicine, finance, and scientific research—this approach has the potential to transform how language models interact with real-world information.

As AI systems increasingly rely on dynamic knowledge rather than static model parameters, we hope KBLaM will serve as a bridge between raw computational power and real-world understanding.

However, there is still work to be done before it can be deployed at scale. Our current model has been trained primarily on factual question-answer pairs, and further research is needed to expand its capabilities across more complex reasoning tasks and diverse knowledge domains.

To accelerate progress, we are releasing KBLaM’s code and datasets to the research community, and we are planning integrations with the Hugging Face transformers library. By making these resources available, we hope to inspire further research and adoption of scalable, efficient knowledge augmentation for LLMs. The future of AI isn’t just about generating text—it’s about generating knowledge that is accurate, adaptable, and deeply integrated with the evolving world. KBLaM is a step in that direction.

The post Introducing KBLaM: Bringing plug-and-play external knowledge to LLMs appeared first on Microsoft Research.

Semantic Telemetry: Understanding how users interact with AI systems

Semantic Telemetry blog | diagram showing relationships between chat, LLM prompt, and labeled data

AI tools are proving useful across a range of applications, from helping to drive the new era of business transformation to helping artists craft songs. But which applications are providing the most value to users? We’ll dig into that question in a series of blog posts that introduce the Semantic Telemetry project at Microsoft Research. In this initial post, we will introduce a new data science approach that we will use to analyze topics and task complexity of Copilot in Bing usage.

Human-AI interactions can be iterative and complex, requiring a new data science approach to understanding user behavior in order to build and support increasingly high-value use cases. Imagine the following chat:

Example chat between user and AI

Here we see that chats can be complex and span multiple topics, such as event planning, team building, and logistics. Generative AI has ushered in a two-fold paradigm shift. First, LLMs give us a new thing to measure, that is, how people interact with AI systems. Second, they give us a new way to measure those interactions, that is, they give us the capability to understand and make inferences on these interactions, at scale. The Semantic Telemetry project has created new measures to classify human-AI interactions and understand user behavior, contributing to efforts in developing new approaches for measuring generative AI across various use cases.

Semantic Telemetry is a rethink of traditional telemetry—in which data is collected for understanding systems—designed for analyzing chat-based AI. We employ an innovative data science methodology that uses a large language model (LLM) to generate meaningful categorical labels, enabling us to gain insights into chat log data.

Flow chart illustrating the LLM classification process starting with chat input, then prompting LLM with chat using generated label taxonomy, and output is the labeled chat.
Figure 1: Prompting an LLM to classify a conversation based on LLM generated label taxonomy

This process begins with developing a set of classifications and definitions. We create these classifications by instructing an LLM to generate a short summary of the conversation, and then iteratively prompting the LLM to generate, update, and review classification labels on a batched set of summaries. This process is outlined in the paper: TnT-LLM: Text Mining at Scale with Large Language Models. We then prompt an LLM with these generated classifiers to label new unstructured (and unlabeled) chat log data.
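
As an illustration of that last labeling step, the sketch below prompts an LLM with a small taxonomy and asks for a single primary-topic label per chat. The `complete` callable stands in for whatever chat-completion API is used, and the example taxonomy is invented for exposition; it is not the taxonomy generated in the study.

```python
# Hypothetical labeling sketch; `complete(prompt) -> str` is any LLM completion call.
EXAMPLE_TAXONOMY = {
    "Technology": "Programming and scripting, computers and electronics, data, ML and AI",
    "Entertainment": "Games, sports and fitness, travel and tourism, small talk",
    "Health": "Symptoms, conditions, treatments, wellness",
}

def label_primary_topic(chat_text: str, complete) -> str:
    labels = "\n".join(f"- {name}: {desc}" for name, desc in EXAMPLE_TAXONOMY.items())
    prompt = (
        "You assign one label to a conversation between a user and an AI assistant.\n"
        f"Label taxonomy:\n{labels}\n\n"
        "Answer with only the single label that best describes the PRIMARY topic.\n\n"
        f"Conversation:\n{chat_text}\n\nLabel:"
    )
    return complete(prompt).strip()
```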

Description of LLM generated label taxonomy process

With this approach, we have analyzed how people interact with Copilot in Bing. In this blog, we examine insights into how people are using Copilot in Bing, including how that differs from traditional search engines. Note that all analyses were conducted on anonymous Copilot interactions containing no personal information.

Topics

To get a clear picture of how people are using Copilot in Bing, we need to first classify sessions into topical categories. To do this, we developed a topic classifier. We used the LLM classification approach described above to label the primary topic (domain) for the entire content of the chat. Although a single chat can cover multiple topics, for this analysis, we generated a single label for the primary topic of the conversation. We sampled five million anonymized Copilot in Bing chats during August and September 2024, and found that globally, 21% of all chats were about technology, with a high concentration of these chats in programming and scripting and computers and electronics.

Bubble chart showing topics based on percentage of sample. Primary topics shown are Technology (21%), Entertainment (12.8%), Health (11%), Language, Writing, & Editing (11.6%), Lifestyle (9.2%), Money (8.5%), History, Events, & Law (8.5%), Career (7.8%), Science (6.3%)
Figure 2: Top Copilot in Bing topics based on anonymized data (August-September 2024)
Bubble chart of Technology topic showing subtopics: Programming & scripting, Computers & electronics, Engineering & design, Data analysis, and ML & AI.
Figure 3: Frequent topic summaries in Technology
Bubble chart of Entertainment showing subtopics: Entertainment, Sports & fitness, Travel & tourism, Small talk & chatbot, and Gaming
Figure 4: Frequent topic summaries in Entertainment

Diving into the technology category, we find a lot of professional tasks in programming and scripting, where users request problem-specific assistance such as fixing a SQL query syntax error. In computers and electronics, we observe users getting help with tasks like adjusting screen brightness and troubleshooting internet connectivity issues. We can compare this with our second most common topic, entertainment, in which we see users seeking information related to personal activities like hiking and game nights.

We also note that top topics differ by platform. The figure below depicts topic popularity based on mobile and desktop usage. Mobile device users tend to use the chat for more personal tasks, such as getting help planting a garden or understanding medical symptoms, whereas desktop users conduct more professional tasks, like revising an email.

Sankey visual showing top topics for Desktop and Mobile users
Figure 5: Top topics for desktop users and mobile users

Search versus Copilot

Beyond analyzing topics, we compared Copilot in Bing usage to that of traditional search. Chat extends beyond traditional online search by enabling users to summarize, generate, compare, and analyze information. Human-AI interactions are conversational and more complex than traditional search (Figure 6).

Venn diagram showing differences between Bing Search and Copilot in Bing, with intersection in information lookup.
Figure 6: Bing Search Query compared to Copilot in Bing Conversation

A major differentiation between search and chat is the ability to ask more complex questions, but how can we measure this? We think of complexity as a scale ranging from simply asking chat to look up information to evaluating several ideas. We aim to understand the difficulty of a task if performed by a human without the assistance of AI. To achieve this, we developed the task complexity classifier, which assesses task difficulty using Anderson and Krathwohl’s Taxonomy of Learning Objectives. For our analysis, we have grouped the learning objectives into two categories: low complexity and high complexity. Any task more complicated than information lookup is classified as high complexity. Note that this would be very challenging to classify using traditional data science techniques.
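
A minimal sketch of the bucketing step described above, assuming the six Anderson and Krathwohl levels are first assigned by an LLM classifier; treating only the lookup-style "Remember" level as low complexity matches the description above, though the study's exact grouping may differ:

```python
BLOOM_LEVELS = ["Remember", "Understand", "Apply", "Analyze", "Evaluate", "Create"]
LOW_COMPLEXITY = {"Remember"}  # information lookup; everything else counts as high complexity

def complexity_bucket(level: str) -> str:
    if level not in BLOOM_LEVELS:
        raise ValueError(f"unknown level: {level!r}")
    return "low" if level in LOW_COMPLEXITY else "high"
```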

Description of task complexity and 6 categories of the Anderson and Krathwohl’s Taxonomy of Learning Objectives

Comparing low versus high complexity tasks, most chat interactions were categorized as high complexity (78.9%), meaning that they were more complex than looking up information. Programming and scripting, marketing and sales, and creative and professional writing are topics in which users engage in higher complexity tasks (Figure 7) such as learning a skill, troubleshooting a problem, or writing an article.

Highest and lowest complexity topics based on percent of high complexity chats
Figure 7: Most and least complex topics based on percentage of high complexity tasks.

Travel and tourism and history and culture scored lowest in complexity, with users looking up information like flight times and the latest news updates.

Demo of task complexity and topics on anonymous Copilot interactions

When should you use chat instead of search? A 2024 Microsoft Research study, The Use of Generative Search Engines for Knowledge Work and Complex Tasks, suggests that people are seeing value in technical, complex tasks such as web development and data analysis. Bing Search contained more lower-complexity queries focused on non-professional areas, like gaming and entertainment, travel and tourism, and fashion and beauty, while chat had a greater share of complex technical tasks (Figure 8).

Comparison of Bing Search and Copilot in Bing topics based on complexity and knowledge work. Copilot in Bing trends greater complexity and greater knowledge work than Bing Search.
Figure 8: Comparison of Bing Search and Copilot in Bing for anonymized sample data (May-June 2023)

Conclusion

LLMs have enabled a new era of high-quality human-AI interaction, and with it, the capability to analyze those same interactions with high fidelity, at scale, and in near real time. We are now able to obtain actionable insights from complex data that are not possible with traditional data science pattern-matching methods. LLM-generated classifications are pushing research into new directions that will ultimately improve user experience and satisfaction when using chat and other user-AI interaction tools.

This analysis indicates that Copilot in Bing is enabling users to do more complex work, specifically in areas such as technology. In our next post, we will explore how Copilot in Bing is supporting professional knowledge work and how we can use these measures as indicators for retention and engagement.


FOOTNOTE: This research was conducted at the time the feature Copilot in Bing was available as part of the Bing service; since October 2024 Copilot in Bing has been deprecated in favor of the standalone Microsoft Copilot service.

References:

  1. Krathwohl, D. R. (2002). A Revision of Bloom’s Taxonomy: An Overview. Theory Into Practice, 41(4), 212–218. https://doi.org/10.1207/s15430421tip4104_2

The post Semantic Telemetry: Understanding how users interact with AI systems appeared first on Microsoft Research.

The AI Revolution in Medicine, Revisited: An Introduction

Two years ago, OpenAI’s GPT-4 kick-started a new era in AI. In the months leading up to its public release, Peter Lee, president of Microsoft Research, cowrote a book full of optimism for the potential of advanced AI models to transform the world of healthcare. What has happened since? In this special podcast series, Lee revisits the book, exploring how patients, providers, and other medical professionals are experiencing and using generative AI today while examining what he and his coauthors got right—and what they didn’t foresee.

In this introduction to the series, Lee talks about his early encounters with GPT-4, when the AI model was still in secret development with OpenAI, and the range of emotions he cycled through as he came to understand the new technology better. The emergence of generative AI has created a “new world,” Lee says, one he is eager to investigate with the aim of discovering the technology’s impact so far and what it means for the future of healthcare and medicine.

Transcript

[MUSIC]

PETER LEE: This is The AI Revolution in Medicine, Revisited. I’m Peter Lee, president of Microsoft Research, and I’m pretty excited to introduce this series of conversations as part of the Microsoft Research Podcast.

About two years ago, with Carey Goldberg and Zak Kohane, we wrote a book, The AI Revolution in Medicine. This was a book that was intended to educate the world of healthcare and the world of medical research about this new thing that was emerging. This idea of generative AI. And we wrote the book in secret. In fact, the whole existence of what we now know of as OpenAI’s GPT-4 AI model hadn’t been publicly disclosed or revealed to the world. And so when we were working on this book, we had to make some guesses. What is this going to mean for healthcare? If you’re a doctor or a nurse, in what ways will AI impact your work? If you’re a patient, in what ways could AI change your experience as you try to navigate a complex healthcare system?

And so now it’s been about two years. Two years hence, what did we get right? What did we get wrong? What things have come along much faster than we ever would have dreamed of? What did we miss? And what things have turned out to be much harder than we ever could have realized? And so this series of conversations is going to talk to people in the real world. We’ll delve into exactly what’s happening in the clinic, the patient experience, how people are thinking about safety and regulatory matters, and what this all means for discovery and advancements of medical science. And even then, we’ll have guests that will allow us to look into the future—the AI advances that are happening now and what is going to happen next.


[MUSIC TRANSITIONS TO SERIES THEME] [MUSIC FADES]

So now, let me just take a step back here to talk about this book project. And I’d like to just read the first couple of sentences in Chapter 1, and Chapter 1 is entitled “First Contact.” And it starts with a quote. Quote, “I think that Zak and his mother deserve better than that,” unquote. “I was being scolded. And while I’ve been scolded plenty in my life, for the first time it wasn’t a person scolding me; it was an artificial intelligence system.” So that’s how we started this book, and I wanted to read that because, at least for me, it takes me back to the kind of awe and wonderment in those early days when in secret development, we had access from OpenAI to what we now know of as GPT-4.

And what was that quote about? Well, after getting access to GPT-4, I became very interested in what this might mean for healthcare. But I, not being a doctor, knew I needed help. So I had reached out to a good colleague of mine who is a doctor, a pediatric endocrinologist, and head of the bioinformatics department at Harvard Medical School, Dr. Isaac “Zak” Kohane. And I sought his help. And in our back-and-forth discussions, one of the things that Zak shared with me was an article that he wrote for a magazine where he talked about his use of machine learning in the care of his 90-year-old mother, his 90-year-old mother, who—like many 90-year-old people—was having some health issues.

And this article was very interesting. It really went into some detail about not only the machine learning technology that Zak had created in order to help manage his mother’s health but also the kind of emotional burden of doing this and in what ways technology was helping Zak cope with that. And so as I read that article, it touched me because at that time, I was struggling in a very similar way with my own father, who was at that time 89 years old and was also suffering from some very significant health issues. And, like Zak, I was feeling some pangs of guilt because my father was living in Southern California; I was way up in the Pacific Northwest, you know, just feeling guilty not being there, present for him, through his struggles. And reading that article a thought that occurred to me was, I wonder if in the future, AI could pretend to be me so that my father could always have a version of me to talk to. And I also had the thought in the other direction. Could AI someday capture enough of my father so that when and if he passes, I always have some memory of my father that I could interact with? A strange and bizarre thought, I admit, but a natural one, I think, for any human being that’s encountering this amazing AI technology for the first time. And so I ran an experiment. I used GPT-4 to read Zak’s article and then posed the question to GPT-4, “Based on this article, could you pretend to be Zak? I’ll pretend to be Zak’s mother, and let’s test whether it’s possible to have a mother-son conversation.”

To my surprise, GPT-4’s response at that time was to scold me, basically saying that this is wrong; that this has a lot of dangers and risks. You know, what if Zak’s mother really needs the real Zak. And in those early days of this encounter with AI, that was incredibly startling. It just really forces you to reexamine yourself, and it kicked off our writing in the book as really not only being about a technology that could help lead to better diagnoses, help reduce medical errors, reduce the amount of paperwork and clerical burden that doctors go through, could help demystify and help patients navigate a healthcare system, but it could actually be a technology that forces people to reexamine their relationships and reexamine what it really means for people to take care of other people.

And since then, of course, I’ve come to learn that many people have had similar experiences in their first encounters with AI. And in fact, I’ve come to think of this as, somewhat tongue in cheek, the nine stages of AI grief. And they actually relate to what we’ll try to address in this new series of conversations.

For me, the first time that Greg Brockman and Sam Altman presented what we now know of as OpenAI’s GPT-4 to me, they made some claims about what it could do. And my first reaction was one of skepticism, and it seemed that the claims that were being made just couldn’t be true. Then that, kind of, passed into, I would say, a period of annoyance because I started to see my colleagues here in Microsoft Research start to show some amazement about the technology. I actually was annoyed because I felt they were being duped by this technology. So that’s the second phase. And then, the third phase was concern and maybe even a little bit of frustration because it became clear that, as a company here at Microsoft, we were on the verge of making a big bet on this new technology. And that was concerning to me because of my fundamental skepticism. But then I got my hands on the technology myself. And that enters into a fourth stage, of amazement. You start to encounter things that just are fundamentally amazing. This leads to a period of intensity because I immediately surmised that, wow, this could really change everything and in very few areas other than healthcare would be more important areas of change. And that is stage five, a period of serious intensity where you’re just losing sleep and working so hard to try to imagine what this all could mean. Running as many experiments as you can; trying to lean on as much real expertise as possible. You then lead from there into a period of what I call chagrin because as amazing as the technology is, actually understanding how to harness it in real life is not easy. You finally get into this stage of what I would call enlightenment. [MUSIC] And I won’t claim to be enlightened. But it is, sort of, a combination of acceptance that we are in a new world today, that things are happening for real, and that there’s, sort of, no turning back. And at that point, I think we can really get down to work. And so as we think about really the ultimate purpose of this series of conversations that we’re about to have, it’s really to help people get to that stage of enlightenment, to really, kind of, roll up our sleeves, to sit down and think through all of the best knowledge and experience that we’ve gathered over the last two years, and chart the future of this AI revolution in medicine.

[MUSIC TRANSITIONS TO SERIES THEME]

Let’s get going.

[MUSIC FADES]

The post The AI Revolution in Medicine, Revisited: An Introduction appeared first on Microsoft Research.

Advancing biomedical discovery: Overcoming data challenges in precision medicine

white line icon of a medical paper and of a computer with a person in front of it on a blue and green gradient background

Introduction

Modern biomedical research is driven by the promise of precision medicine—tailored treatments for individual patients through the integration of diverse, large-scale datasets. Yet, the journey from raw data to actionable insights is fraught with challenges. Our team of researchers at Microsoft Research in the Health Futures group, in collaboration with the Perelman School of Medicine at the University of Pennsylvania, conducted an in-depth exploration of these challenges in a study published in Nature Scientific Reports. The goal of this research was to identify pain points in the biomedical data lifecycle and offer actionable recommendations that enable secure data sharing, improved interoperability, and robust analysis, and that foster collaboration across the biomedical research community.

Study at a glance

A deep understanding of the biomedical discovery process is crucial for advancing modern precision medicine initiatives. To explore this, our study involved in-depth, semi-structured interviews with biomedical research professionals spanning various roles including bench scientists, computational biologists, researchers, clinicians, and data curators. Participants provided detailed insights into their workflows, from data acquisition and curation to analysis and result dissemination. We used an inductive-deductive thematic analysis to identify key challenges occurring at each stage of the data lifecycle—from raw data collection to the communication of data-driven findings.

Some key challenges identified include:

  • Data procurement and validation: Researchers struggle to identify and secure the right datasets for their research questions, often battling inconsistent quality and manual data validation.
  • Computational hurdles: The integration of multiomic data requires navigating disparate computational environments and rapidly evolving toolsets, which can hinder reproducible analysis.
  • Data distribution and collaboration: The absence of a unified data workflow and secure sharing infrastructure often leads to bottlenecks when coordinating between stakeholders across university labs, pharmaceutical companies, clinical settings, and third-party vendors.

Main takeaways and recommendations:

  1. Establishing a unified biomedical data lifecycle 

    This study highlights the need for a unified process that spans all phases of biomedical discovery—from data gathering and curation to analysis and dissemination. Such a data jobs-to-be-done framework would streamline standardized quality checks, reduce manual errors such as metadata reformatting, and ensure that the flow of data across different research phases remains secure and consistent. This harmonization is essential to accelerate research and build more robust, reproducible models that propel precision medicine forward.

  2. Empowering stakeholder collaboration and secure data sharing 

    Effective biomedical discovery requires collaboration across multiple disciplines and institutions. A key takeaway from our interviews was the critical importance of collaboration and trust among stakeholders. Secure, user-friendly platforms that enable real-time data sharing and open communication among clinical trial managers, clinicians, computational scientists, and regulators can bridge the gap between isolated research silos. As a possible solution, implementing centralized cloud-based infrastructures and democratizing data access can dramatically reduce data handoff issues and accelerate scientific discovery.

  3. Adopting actionable recommendations to address data pain points 

    Based on the insights from this study, the authors propose a list of actionable recommendations such as:

    • Creating user-friendly platforms to transition from manual (bench-side) data collection to electronic systems.
    • Standardizing analysis workflows to facilitate reproducibility, including version control and the seamless integration of notebooks into larger workflows.
    • Leveraging emerging technologies such as generative AI and transformer models for automating data ingestion and processing of unstructured text.

If implemented, the recommendations from this study would help forge a reliable, scalable infrastructure for managing the complexity of biomedical data, ultimately advancing research and clinical outcomes.

Looking ahead

At Microsoft Research, we believe in the power of interdisciplinarity and innovation. This study not only identifies the critical pain points that have slowed biomedical discovery but also illustrates a clear path toward improved data integrity, interoperability, and collaboration. By uniting diverse stakeholders around a common, secure, and scalable data research lifecycle, we edge closer to realizing individualized therapeutics for every patient.

We encourage our colleagues, partners, and the broader research community to review the full study and consider these insights as key steps toward a more integrated biomedical data research infrastructure. The future of precision medicine depends on our ability to break down data silos and create a research data lifecycle that is both robust and responsive to the challenges of big data.

Explore the full paper in Nature Scientific Reports to see how these recommendations were derived, and consider how they might integrate into your work. Let’s reimagine biomedical discovery together—where every stakeholder contributes to a secure, interoperable, and innovative data ecosystem that transforms patient care.

We look forward to engaging with the community on these ideas as we continue to push the boundaries of biomedical discovery at Microsoft Research.

The post Advancing biomedical discovery: Overcoming data challenges in precision medicine appeared first on Microsoft Research.

Magma: A foundation model for multimodal AI agents across digital and physical worlds

Gradient background transitioning from blue on the left to pink on the right. In the center, a rectangular box with ‘MAGMA’ written in bold white letters. To the left, an icon of a globe representing Earth. To the right, an icon of a computer monitor displaying a globe. Arrows connect these three elements in a circular flow, indicating interaction or data exchange between Earth, MAGMA, and the computer.

Imagine an AI system capable of guiding a robot to manipulate physical objects as effortlessly as it navigates software menus. Such seamless integration of digital and physical tasks has long been the stuff of science fiction.  

Today, Microsoft researchers are bringing that vision closer to reality with Magma, a multimodal AI foundation model designed to process information and generate action proposals across both digital and physical environments. It aims to enable AI agents to interpret user interfaces and suggest actions like button clicks, while also orchestrating robotic movements and interactions in the physical world.

Built on the foundation model paradigm, Magma is pretrained on an expansive and diverse dataset, allowing it to generalize better across tasks and environments than smaller, task-specific models. As illustrated in Figure 1, Magma synthesizes visual and textual inputs to generate meaningful actions—whether executing a command in software or grabbing a tool in the physical world. This new model represents a significant step toward AI agents that can serve as versatile, general-purpose assistants. 

Figure 1: Magma is one of the first foundation models that is capable of interpreting and grounding multimodal inputs within both digital and physical environments. Given a described goal, Magma can formulate plans and execute actions to achieve it. By effectively transferring knowledge from freely available visual and language data, Magma bridges verbal, spatial and temporal intelligence to navigate complex tasks and settings.

Vision-Language-Action (VLA) models integrate visual perception, language comprehension, and action reasoning to enable AI systems to interpret images, process textual instructions, and propose actions. These models bridge the gap between multimodal understanding and real-world interaction. Typically pretrained on large VLA datasets, they acquire the ability to understand visual content, process language, and perceive and interact with the spatial world, allowing them to perform a wide range of tasks. However, due to the dramatic differences among digital and physical environments, separate VLA models are trained and used for different environments. As a result, these models struggle to generalize to new tasks and environments outside of their training data. Moreover, most of these models do not leverage pretrained vision-language (VL) models or diverse VL datasets, which hampers their understanding of VL relations and their generalizability.

Magma, to the best of our knowledge, is one of the first VLA foundation models that can adapt to new tasks in both digital and physical environments, which helps AI-powered assistants or robots understand their surroundings and suggest appropriate actions. For example, it could enable a home assistant robot to learn how to organize a new type of object it has never encountered or help a virtual assistant generate step-by-step user interface navigation instructions for an unfamiliar task. Through Magma, we demonstrate the advantages of pretraining a single VLA model for AI agents across multiple environments while still achieving state-of-the-art results on user interface navigation and robotic manipulation tasks, outperforming previous models that are tailored to these specific domains. On VL tasks, Magma also compares favorably to popular VL models that are trained on much larger datasets.

Building a foundation model that spans such different modalities has required us to rethink how we train and supervise AI agents. Magma introduces a novel training paradigm centered on two key innovations: Set-of-Mark (SoM) and Trace-of-Mark (ToM) annotations. These techniques, developed by Microsoft Research, imbue the model with a structured understanding of tasks in both user interface navigation and robotic manipulation domains.

  • Set-of-Mark (SoM): SoM is an annotated set of key objects or interface elements that are relevant to achieving a given goal. For example, if the task is to navigate a web page, the SoM includes the bounding boxes for all clickable user interface elements. In a physical task like setting a table, the SoM could include the plate, the cup, and the position of each item on the table. By providing SoM, we give Magma a high-level hint of “what needs attention”—the essential elements of the task—without yet specifying the order or method.
Figure 2: Set-of-Mark (SoM) for Action Grounding. Set-of-Mark prompting enables effective action grounding in images for UI screenshots (left), robot manipulation (middle), and human video (right) by having the model predict numeric marks for clickable buttons or robot arms in image space. These marks give Magma a high-level hint of “what needs attention”—the essential elements of the task.
  • Trace-of-Mark (ToM): In ToM, we extend the strategy of “overlaying marks” from static images to dynamic videos by incorporating tracing lines that follow object movements over time. While SoM highlights key objects or interface elements relevant to a task, ToM captures how these elements change or move throughout an interaction. For example, in a physical task like moving an object on a table, ToM might illustrate the motion of a hand placing the object and adjusting its position. By providing these temporal traces, ToM offers Magma a richer understanding of how actions unfold, complementing SoM’s focus on what needs attention. (A minimal illustration of both annotation formats follows Figure 3 below.)
Figure 3: Trace-of-Mark (ToM) for Action Planning. Trace-of-Mark supervisions for robot manipulation (left) and human action (right). It compels the model to comprehend temporal video dynamics and anticipate future states before acting, while using fewer tokens than next-frame prediction to capture longer temporal horizons and action-related dynamics without ambient distractions. 
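
To make the two annotation types concrete, here is an illustrative sketch of the data they could carry; the field names and structure are assumptions for exposition and do not reflect Magma’s actual training format.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Mark:
    mark_id: int                               # numeric mark the model learns to predict
    bbox: Tuple[float, float, float, float]    # (x0, y0, x1, y1) in image space
    label: str                                 # e.g. "Search button" or "robot gripper"

@dataclass
class SetOfMark:
    image_id: str
    marks: List[Mark]                          # "what needs attention" in a single frame

@dataclass
class TraceOfMark:
    video_id: str
    mark_id: int                               # which mark is being tracked
    trajectory: List[Tuple[float, float]]      # (x, y) mark centers over frames: how it moves
```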

Performance and evaluation

Zero-shot agentic intelligence

Table 1: Zero-shot evaluation on agentic intelligence. We report the results for pretrained Magma without any domain-specific finetuning. In this experiment, Magma is the only model that can conduct the full task spectrum.
Figure 4: Zero-shot evaluation on Google Robots and Bridge with SimplerEnv. Magma shows strong zero-shot cross-domain robustness and demonstrates impressive results in cross-embodiment manipulation simulation tasks.

Efficient finetuning

Table 2: Efficient finetuning on Mind2Web for web UI navigation.
Figure 5: Few-shot finetuning on Widow-X robot (left) and LIBERO (right). Magma achieves a significantly higher average success rate in all task suites. Additionally, removing SoM and ToM during pretraining has a negative impact on model performance.
Table 3: Without task-specific data, Magma performs competitively and even outperforms some state-of-the-art approaches such as Video-Llama2 and ShareGPT4Video on most benchmarks, despite using much less video instruction tuning data.

Relation to broader research

Magma is one component of a much larger vision within Microsoft Research for the future of agentic AI systems. Across various teams and projects at Microsoft, we are collectively exploring how AI systems can detect, analyze, and respond in the world to amplify human capabilities.

Earlier this month, we announced AutoGen v0.4, a fully reimagined open-source library for building advanced agentic AI systems. While AutoGen focuses on the structure and management of AI agents, Magma enhances those agents by empowering them with a new level of capability. Developers can already use AutoGen to set up an AI assistant that leverages a conventional LLM for planning and dialogue. Now, with Magma, if developers want to build agents that execute physical or user interface/browser tasks, that same assistant would call upon Magma to understand the environment, perform reasoning, and take a sequence of actions to complete the task.

The reasoning ability of Magma can be further developed by incorporating test-time search and reinforcement learning, as described in ExACT. ExACT shows an approach for teaching AI agents to explore more effectively, enabling them to intelligently navigate their environments, gather valuable information, evaluate options, and identify optimal decision-making and planning strategies.

At the application level, we are also exploring new user experiences (UX) powered by foundation models for the next generation of agentic AI systems. Data Formulator is a prime example. Announced late last year, Data Formulator is an AI-driven visualization tool developed by Microsoft Research that translates high-level analytical intents into rich visual representations by handling complex data transformations behind the scenes.

Looking ahead, the integration of reasoning, exploration and action capabilities will pave the way for highly capable, robust agentic AI systems.

Magma is available on Azure AI Foundry Labs as well as on HuggingFace with an MIT license. Please refer to the Magma project page for more technical details. We invite you to test and explore these cutting-edge agentic model innovations from Microsoft Research.

The post Magma: A foundation model for multimodal AI agents across digital and physical worlds appeared first on Microsoft Research.

Exploring the structural changes driving protein function with BioEmu-1

The image shows eight different 3D models of protein structures. Each model is color-coded with various segments in blue, green, orange, and other colors to highlight different parts of the protein.

From forming muscle fibers to protecting us from disease, proteins play an essential role in almost all biological processes in humans and other life forms alike. There has been extraordinary progress in recent years toward better understanding protein structures using deep learning, enabling the accurate prediction of protein structures from their amino acid sequences. However, predicting a single protein structure from its amino acid sequence is like looking at a single frame of a movie—it offers only a snapshot of a highly flexible molecule. Biomolecular Emulator-1 (BioEmu-1) is a deep-learning model that provides scientists with a glimpse into the rich world of different structures each protein can adopt, or structural ensembles, bringing us a step closer to understanding how proteins work. A deeper understanding of proteins enables us to design more effective drugs, as many medications work by influencing protein structures to boost their function or prevent them from causing harm.

One way to model different protein structures is through molecular dynamics (MD) simulations. These tools simulate how proteins move and deform over time and are widely used in academia and industry. However, in order to simulate functionally important changes in structure, MD simulations must be run for a long time. This is a computationally demanding task and significant effort has been put into accelerating simulations, going as far as designing custom computer architectures. Yet, even with these improvements, many proteins remain beyond what is currently possible to simulate and would require simulation times of years or even decades.

Enter BioEmu-1—a deep learning model that can generate thousands of protein structures per hour on a single graphics processing unit. Today, we are making BioEmu-1 open-source, following our preprint from last December, to empower protein scientists in studying structural ensembles with our model. It provides orders of magnitude greater computational efficiency compared to classical MD simulations, thereby opening the door to insights that have, until now, been out of reach. BioEmu-1 is featured in Azure AI Foundry Labs, a hub for developers, startups, and enterprises to explore groundbreaking innovations from research at Microsoft.

We have enabled this by training BioEmu-1 on three types of datasets: (1) AlphaFold Database (AFDB) structures, (2) an extensive MD simulation dataset, and (3) an experimental protein folding stability dataset. Training BioEmu-1 on the AFDB structures is like mapping distinct islands in a vast ocean of possible structures. When preparing this dataset, we clustered similar protein sequences so that BioEmu-1 can recognize that a protein sequence maps to multiple distinct structures. The MD simulation dataset helps BioEmu-1 predict physically plausible structural changes around these islands, mapping out the plethora of possible structures that a single protein can adopt. Finally, through fine-tuning on the protein folding stability dataset, BioEmu-1 learns to sample folded and unfolded structures with the right probabilities.

Figure 1: BioEmu-1 predicts diverse structures of LapD protein unseen during training. We sampled structures independently and reordered the samples to create a movie connecting two experimentally known structures.

Combining these advances, BioEmu-1 successfully generalizes to unseen protein sequences and predicts multiple structures. In Figure 1, we show that BioEmu-1 can predict structures of the LapD protein from Vibrio cholerae, the bacterium that causes cholera. BioEmu-1 predicts structures of LapD when it is bound and unbound with c-di-GMP molecules, both of which are experimentally known but not in the training set. Furthermore, our model offers a view on intermediate structures, which have never been experimentally observed, providing viable hypotheses about how this protein functions. Insights into how proteins function pave the way for further advancements in areas like drug development.

The figure compares Molecular Dynamics (MD) simulation and BioEmu-1, and shows that BioEmu-1 can emulate the equilibrium distribution 100,000 times faster than running a MD simulation to full convergence. The middle part of the figure shows that the 2D projections of the structure distributions obtained from MD simulation and BioEmu-1 are nearly identical. The bottom part of the figure shows three representative structures from the equilibrium distribution.
Figure 2: BioEmu-1 reproduces the D. E. Shaw research (DESRES) simulation of Protein G accurately with a fraction of the computational cost. On the top, we compare the distributions of structures obtained by extensive MD simulation (left) and independent sampling from BioEmu-1 (right). Three representative sample structures are shown at the bottom.

Moreover, BioEmu-1 reproduces MD equilibrium distributions accurately with a tiny fraction of the computational cost. In Figure 2, we compare 2D projections of the structural distribution of the D. E. Shaw Research (DESRES) simulation of Protein G and samples from BioEmu-1. BioEmu-1 reproduces the MD distribution accurately while requiring 10,000-100,000 times fewer GPU hours.

The left panel of the figure shows a scatter plot of the experimental folding free energies ΔG against those predicted by BioEmu-1. The plot shows a good correlation between the two. The right panel of the figure shows folded and unfolded structures of a protein.
Figure 3: BioEmu-1 accurately predicts protein stability. On the left, we plot the experimentally measured free energy differences ΔG against those predicted by BioEmu-1. On the right, we show a protein in folded and unfolded structures.

Furthermore, BioEmu-1 accurately predicts protein stability, which we measure by computing the folding free energies—a way to quantify the ratio between the folded and unfolded states of a protein. Protein stability is an important factor when designing proteins, e.g., for therapeutic purposes. Figure 3 shows the folding free energies predicted by BioEmu-1, obtained by sampling protein structures and counting folded versus unfolded protein structures, compared against experimental folding free energy measurements. We see that even on sequences that BioEmu-1 has never seen during training, the predicted free energy values correlate well with experimental values.
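
The folding free energy described here can be estimated directly from the sampled ensemble by counting folded versus unfolded structures; below is a minimal sketch, assuming the classification of each sample as folded or unfolded is done elsewhere.

```python
import math

R = 8.314e-3  # gas constant in kJ/(mol·K)

def folding_free_energy(n_folded: int, n_unfolded: int, temperature_k: float = 300.0) -> float:
    """Estimate ΔG_folding in kJ/mol from counts of folded vs. unfolded samples.

    ΔG = -R * T * ln(p_folded / p_unfolded); negative values mean folding is favored.
    """
    if n_folded == 0 or n_unfolded == 0:
        raise ValueError("need at least one sample in each state")
    return -R * temperature_k * math.log(n_folded / n_unfolded)

# Example: 900 folded vs. 100 unfolded samples at 300 K gives ΔG ≈ -5.5 kJ/mol.
```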

Professor Martin Steinegger of Seoul National University, who was not part of the study, says “With highly accurate structure prediction, protein dynamics is the next frontier in discovery. BioEmu marks a significant step in this direction by enabling blazing-fast sampling of the free-energy landscape of proteins through generative deep learning.”

We believe that BioEmu-1 is a first step toward generating the full ensemble of structures that a protein can adopt. In these early days, we are also aware of its limitations. With this open-source release, we hope scientists will start experimenting with BioEmu-1, helping us map out its potential and shortcomings so we can improve it in the future. We look forward to hearing how it performs on the proteins you care about.

Acknowledgements

BioEmu-1 is the result of a highly collaborative team effort at Microsoft Research AI for Science. The full author list: Sarah Lewis, Tim Hempel, José Jiménez-Luna, Michael Gastegger, Yu Xie, Andrew Y. K. Foong, Victor García Satorras, Osama Abdin, Bastiaan S. Veeling, Iryna Zaporozhets, Yaoyi Chen, Soojung Yang, Arne Schneuing, Jigyasa Nigam, Federico Barbero, Vincent Stimper, Andrew Campbell, Jason Yim, Marten Lienen, Yu Shi, Shuxin Zheng, Hannes Schulz, Usman Munir, Ryota Tomioka, Cecilia Clementi, Frank Noé

The post Exploring the structural changes driving protein function with BioEmu-1 appeared first on Microsoft Research.

Introducing Muse: Our first generative AI model designed for gameplay ideation

Today, the journal Nature is publishing our latest research, which introduces the first World and Human Action Model (WHAM). The WHAM, which we’ve named “Muse,” is a generative AI model of a video game that can generate game visuals, controller actions, or both.

The paper in Nature offers a detailed look at Muse, which was developed by the Microsoft Research Game Intelligence and Teachable AI Experiences (Tai X) teams in collaboration with Xbox Game Studios’ Ninja Theory. Simultaneously, to help other researchers explore these models and build on our work, we are open-sourcing the model weights and sample data and making the executable available for the WHAM Demonstrator—a concept prototype that provides a visual interface for interacting with WHAM models and multiple ways of prompting the models. Developers can learn and experiment with the weights, sample data, and WHAM Demonstrator on Azure AI Foundry.

In our research, we focus on exploring the capabilities that models like Muse need to effectively support human creatives. I’m incredibly proud of our teams and the milestone we have achieved, not only by showing the rich structure of the game world that a model like Muse can learn, as you see in the video demo below, but also, and even more importantly, by demonstrating how to develop research insights to support creative uses of generative AI models.

Generated gameplay examples

10-second video generated by Muse. The character Gizmo from the game Bleeding Edge attacks an enemy player, jumps forward, and then turns around.
10-second video generated by Muse. The character Daemon from the game Bleeding Edge destroys a Cannister and then collects the Power Cell within. Daemon then mounts their hoverboard and moves towards another set of Cannisters to destroy them.
10-second video generated by Muse. The character Gizmo from the game Bleeding Edge moves forward on a hoverboard towards a group of enemies.
10-second video generated by Muse. The character Zero Cool from the game Bleeding Edge moves up a set of stairs towards a group of enemies, then activates their ability to jump up to a higher platform.
10-second video generated by Muse. The character Nidhoggr from the game Bleeding Edge navigates through the game map.
10-second video generated by Muse. The character Makuto from the game Bleeding Edge is healed by an ally whilst dashing forwards.
10-second video generated by Muse. The character Miko from the game Bleeding Edge rides a hoverboard towards a group of Cannisters.
10-second video generated by Muse. The character Buttercup from the game Bleeding Edge attacks players from the opposing team.
10-second video generated by Muse. The character Makuto from the game Bleeding Edge flees from a fight with enemy players.
Example gameplay sequences generated by Muse (based on WHAM-1.6B) demonstrate that the model can generate complex gameplay sequences that remain consistent over several minutes. All examples shown here were generated by prompting the model with 10 initial frames (1 second) of human gameplay and the controller actions of the whole play sequence. Muse is used in “world model mode,” meaning that it predicts how the game will evolve from the initial prompt sequence. The more closely the generated gameplay sequence resembles the actual game, the more accurately Muse has captured the dynamics of that game.
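To make that “world model mode” setup concrete, here is a minimal sketch under stated assumptions: `muse` is a hypothetical autoregressive model with a `predict_next_frame` method (this is not the released API; the name, signature, and 10 fps frame rate are illustrative only). The model is prompted with 10 real frames and the controller actions of the whole sequence, and it fills in the remaining visuals.

```python
def world_model_rollout(muse, prompt_frames, actions, fps=10):
    """Generate game visuals in "world model mode".

    prompt_frames: 10 real frames (1 second at an assumed 10 fps) used as the visual prompt.
    actions: controller actions for the whole play sequence, one per frame.
    """
    assert len(prompt_frames) == fps, "prompt is 1 second of real gameplay"
    frames = list(prompt_frames)
    for t in range(len(frames), len(actions)):
        # Hypothetical call: condition on all frames so far plus actions up to step t.
        frames.append(muse.predict_next_frame(frames, actions[: t + 1]))
    return frames
```

The closer such a rollout stays to the real game, the more of the game’s dynamics the model has captured.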

What motivated this research?

As we release our research insights and model today, I keep thinking back to how this all started.  There was a key moment back in December 2022 that I remember clearly. I had recently returned from maternity leave, and while I was away the machine learning world had changed in fundamental ways. ChatGPT had been publicly released, and those who had tried it were in awe of OpenAI’s technical achievements and the model’s capabilities. It was a powerful demonstration of what transformer-based generative models could do when trained on large amounts of (text) data. Coming back from leave at that moment, the key question on my mind was, “What are the implications of this achievement for our team’s work at the intersection of artificial intelligence and video games?”

A new research opportunity enabled by data

In our team, we had access to a very different source of data. For years, we had collaborated with Xbox Game Studios’ Ninja Theory (based in Cambridge, UK, just like our research team) to collect gameplay data from Bleeding Edge, their 2020 Xbox game. Bleeding Edge is a 4-versus-4 game where all games are played online, and matches are recorded if the player agrees to the End User License Agreement (EULA). We worked closely with our colleagues at Ninja Theory and with Microsoft compliance teams to ensure that the data was collected ethically and used responsibly for research purposes.

“It’s been amazing to see the variety of ways Microsoft Research has used the Bleeding Edge environment and data to explore novel techniques in a rapidly moving AI industry,” said Gavin Costello, technical director at Ninja Theory. “From the hackathon that started it all, where we first integrated AI into Bleeding Edge, to building AI agents that could behave more like human players, to the World and Human Action Model being able to dream up entirely new sequences of Bleeding Edge gameplay under human guidance, it’s been eye-opening to see the potential this type of technology has.” 

Muse Training Data

Current Muse instances were trained on human gameplay data (visuals and controller actions) from the Xbox game Bleeding Edge – shown here at the 300×180 px resolution at which we train current models. Muse (using WHAM-1.6B) has been trained on more than 1 billion images and controller actions, corresponding to over 7 years of continuous human gameplay.
The Game Intelligence and Teachable AI Experiences teams playing the Bleeding Edge game together.

Until that point in late 2022, we had used Bleeding Edge as a platform for human-like navigation experiments, but we had not yet made meaningful use of the large amount of human player data we now had available. With the powerful demonstration of text models, the next question was clear: “What could we achieve if we trained a transformer-based model on large amounts of human gameplay data?”

Scaling up model training

As the team got to work, one of the key challenges was scaling up model training. We initially used a V100 cluster, where we proved out how to scale training to up to 100 GPUs; that eventually paved the way to training at scale on H100s. Key design decisions we made early on focused on how best to leverage insights from the large language model (LLM) community and included choices such as how to effectively represent controller actions and, especially, images.

The first sign that the hard work of scaling up training was paying off came in the form of a demo that thoroughly impressed me. Tim Pearce, at that time a researcher in Game Intelligence, had put together examples of what happened early versus later in training. You can see the demo here – it’s like watching the model learn. This led to our follow-up work showing how scaling laws emerge in these kinds of models.

Muse consistency over the course of training

Ground truth: human gameplay (left). Generated: game visuals from Muse with 206M parameters, conditioned on 1 second of real gameplay and 9 seconds of actions (right).

Capability | 10k training updates | 100k training updates | 1M training updates
Character recognizable | ✔ | ✔ | ✔
Basic movements and geometry | ✔ | ✔ | ✔
No degeneration over time | — | ✔ | ✔
Correct interaction with power cell | — | — | ✔
Models flying mechanic correctly | — | — | ✔
Comparing ground truth human gameplay (left) to visuals generated using Muse (using WHAM-206M) when prompted with 1 second of human gameplay (visuals and controller actions) and 9 seconds of controller actions from the ground truth. In this setting, if Muse can generate visuals that closely match the ground truth, then it has captured the game dynamics. We see that the quality of generated visuals improves visibly over the course of training. In early training (10k training updates) we see signs of life, but quality deteriorates quickly. After 100k training updates, the model is consistent over time but does not yet capture relatively less frequent aspects of the game dynamics, such as the flying mechanic. Consistency with the ground truth continues to improve with additional training, e.g., the flying mechanic is captured after 1M training updates.

Multidisciplinary collaboration: Involving users from the beginning

We had started to investigate how to evaluate these types of models early on. For example, we wanted to understand the representations learned, using linear probing (driven by Research Intern Gunshi Gupta and Senior Research Scientist Sergio Valcarcel Macua); to explore online evaluation (driven by Senior Research Scientist Raluca Georgescu); and to generate both visuals and actions, initially termed “full dreaming” (driven by Research Intern Tarun Gupta). But working through how to systematically evaluate Muse required a much broader set of insights. More importantly, we needed to understand how people might use these models in order to know how to evaluate them.

This was where the opportunity for multidisciplinary research became crucial. We had discussed aspects of this work with Senior Principal Research Manager Cecily Morrison and her Teachable AI Experiences team for several months. And we had already partnered on an engagement with game creatives (driven by Cecily, Design Researcher Linda Wen, and Principal Research Software Development Engineer Martin Grayson) to investigate how game creators would like to use generative AI capabilities in their creative practice.

“It was a great opportunity to join forces at this early stage to shape model capabilities to suit the needs of creatives right from the start, rather than try to retrofit an already developed technology,” Cecily said. 

Linda offered some valuable insights about how we approached the work: “We’ve seen how technology-driven AI innovation has disrupted the creative industry—often catching creators off guard and leaving many feeling excluded,” she said. “This is why we invited game creators to help us shape this technology from the start. Recognizing that most AI innovations are developed in the Global North, we also made it a priority to recruit game creators from underrepresented backgrounds and geographies. Our goal was to create a technology that benefits everyone—not just those already in positions of privilege.” 

Unlocking new creative use cases with the WHAM Demonstrator

Now, with the model’s emerging capabilities and user insights in mind, it was time to put all the pieces together. The teams joined forces during a Microsoft internal hackathon to explore new interaction paradigms and creative uses that Muse could unlock. As a result, we developed a prototype that we call the WHAM Demonstrator, which allows users to directly interface with the model.

“The Global Hackathon was the perfect opportunity for everyone to come together and build our first working prototype,” Martin said. “We wanted to develop an interface for the WHAM model that would allow us to explore its creative potential and start to test ideas and uses we had learned from our interviews with game developers.” 

WHAM Demonstrator

The WHAM Demonstrator provides a visual interface for interacting with World and Human Action Models like Muse.

In this example, the user is loading a visual as an initial prompt to the model, here a single promotional image for the game Bleeding Edge. They use Muse to generate multiple potential continuations from this starting point.
The user explores the generated sequences and can tweak them, for example using a game controller to direct the character. These features demonstrate how Muse’s capabilities can enable iteration as part of the creative process.

Identifying key capabilities and how to evaluate them

The hands-on experience of exploring Muse capabilities with the WHAM Demonstrator, and drawing on insights we gained from the user study, allowed us to systematically identify capabilities that game creatives would require to use generative models like Muse. This in turn allowed us to establish evaluation protocols for three key capabilities: consistency, diversity, and persistency. Consistency refers to a model’s ability to generate gameplay sequences that respect the dynamics of the game. For example, the character moves consistently with controller actions, does not walk through walls, and generally reflects the physics of the underlying game. Diversity refers to a model’s ability to generate a range of gameplay variants given the same initial prompt, covering a wide range of ways in which gameplay could evolve. Finally, persistency refers to a model’s ability to incorporate (or “persist”) user modifications into generated gameplay sequences, such as a character that is copy-pasted into a game visual. We give an overview of these capabilities below. 

Muse evaluation of consistency, diversity and persistency

Consistency

We evaluate consistency by prompting the model with ground truth gameplay sequences and controller actions, and letting the model generate game visuals. The videos shown here are generated using Muse (based on WHAM-1.6B) and demonstrate the model’s ability to generate consistent gameplay sequences of up to two minutes. In our paper, we also compare the generated visuals to the ground truth visuals using FVD (Fréchet Video Distance), an established metric in the video generation community.
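For readers who want a feel for the metric, here is a minimal sketch (not the paper’s evaluation code) of the Fréchet distance that underlies FVD, computed between feature sets extracted from real and generated videos. The video feature extractor (typically a pretrained video network) is assumed and not shown, and the function name is our own.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(real_feats: np.ndarray, gen_feats: np.ndarray) -> float:
    """Fréchet distance between Gaussians fit to (num_videos, feature_dim) feature arrays."""
    mu_r, mu_g = real_feats.mean(axis=0), gen_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_g = np.cov(gen_feats, rowvar=False)
    covmean = sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # drop tiny imaginary parts from numerical error
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))
```

Lower values mean the feature distribution of the generated videos is closer to that of real gameplay.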

Diversity

Muse (based on WHAM-1.6B) generated examples of behavioral and visual diversity, conditioned on the same initial 10 frames (1 second) of real gameplay. The three examples at the top show behavioral diversity (diverse camera movement, loitering near the spawn location, and navigating various paths to the middle jump pad). The three examples below show visual diversity (different hoverboards for the character). In the paper, we also quantitatively assess diversity using the Wasserstein distance, a measure of distance between two distributions, to compare the model-generated sequences to the diversity reflected in human gameplay recordings.
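As a toy illustration of that comparison (not the paper’s method), one can compute the one-dimensional Wasserstein distance between a scalar summary of generated sequences and the same summary of human recordings. The summary statistic and the numbers below are hypothetical.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Hypothetical per-sequence statistic, e.g., distance travelled in a 10-second clip.
human_play = np.array([12.1, 9.8, 15.3, 11.0, 13.7, 10.4])
muse_play = np.array([11.5, 10.2, 14.9, 12.3, 9.1, 13.0])

# Smaller distance => the generated behavior spreads out more like human behavior.
print(wasserstein_distance(human_play, muse_play))
```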

With our evaluation framework in place, and access to an H100 compute allocation, the team was able to further improve Muse instances, including higher resolution image encoders (our current models generate visuals at a resolution of 300×180 pixels, up from the 128×128 resolution of our earliest models) and larger models, and expand to all seven Bleeding Edge maps. To show some of the capabilities of the model we are publishing today, we have included videos of 2-minute-long generated gameplay sequences above, which give an impression of the consistency and diversity of gameplay sequences that the model can generate.

According to Senior Researcher Tabish Rashid: “Being handed an allocation of H100s was initially quite daunting, especially in the early stages figuring out how to make best use of it to scale to larger models with the new image encoders. After months of experimentation, it was immensely rewarding to finally see outputs from the model on a different map (not to knock the lovely greenery of Skygarden) and not have to squint so much at smaller images. I’m sure at this point many of us have watched so many videos from Muse that we’ve forgotten what the real game looks like.”

One of my favorite capabilities of the model is how it can be prompted with modifications of gameplay sequences and persist newly introduced elements. For example, in the demo below, we’ve added a character onto the original visual from the game. Prompting the model with the modified visual, we can see how the model “persists” the added character and generates plausible variants of how the gameplay sequence could have evolved from this modified starting point.

Persistency

Demonstrations of how Muse (based on WHAM-1.6B) can persist modifications. A visual is taken from the original gameplay data and an image of an additional character is edited into it. The generated gameplay sequence shows how the added character is incorporated into the ongoing gameplay.

Conclusion

Today, our team is excited to be publishing our work in Nature and simultaneously releasing Muse open weights, the WHAM Demonstrator, and sample data to the community.

I look forward to seeing the many ways in which the community will explore these models and build on our research. I cannot wait to see how these models and subsequent research will help shape and deepen our understanding of how generative AI models of human gameplay may support gameplay ideation and pave the way for future, novel, AI-based game experiences, including the use cases that our colleagues at Xbox have already started to explore.

The post Introducing Muse: Our first generative AI model designed for gameplay ideation appeared first on Microsoft Research.

Ideas: Quantum computing redefined with Chetan Nayak

Behind every emerging technology is a great idea propelling it forward. In the Microsoft Research Podcast series Ideas, members of the research community at Microsoft discuss the beliefs that animate their research, the experiences and thinkers that inform it, and the positive human impact it targets.

In this episode, host Gretchen Huizinga talks with Dr. Chetan Nayak, a technical fellow focused on quantum hardware at Microsoft. As a preteen, Nayak became engrossed in the world of scientific discovery, “accidentally exposed,” he says, to the theory of relativity, advanced mathematics, and the like while exploring the shelves of his local bookstores. In studying these big ideas, he began to develop his own understanding of the forces and phenomena at work around us and ultimately realized he could make his own unique contributions, which have since included advancing the field of quantum computing. Nayak examines the defining moments in the history of quantum computing; explains why we still need quantum computing, even with the rise of generative AI; and discusses how Microsoft Quantum is re-engineering the quantum computer with the creation of the world’s first topoconductor and first quantum processing unit (QPU) architecture with a topological core, called the Majorana 1.

Transcript

[TEASER] [MUSIC PLAYS UNDER DIALOGUE]

CHETAN NAYAK: People sometimes say, well, quantum computers are just going to be like classical computers but faster. And that’s not the case. So I really want to emphasize the fact that quantum computers are an entirely different modality of computing. You know, there are certain problems which quantum computers are not just faster at than classical computers but quantum computers can solve and classical computers have no chance of solving.

[TEASER ENDS]

GRETCHEN HUIZINGA: You’re listening to Ideas, a Microsoft Research Podcast that dives deep into the world of technology research and the profound questions behind the code. I’m Gretchen Huizinga. In this series, we’ll explore the technologies that are shaping our future and the big ideas that propel them forward.

[MUSIC FADES]

My guest today is Dr. Chetan Nayak, a technical fellow of Quantum Hardware at Microsoft Quantum. Under Chetan’s leadership, the Microsoft Quantum team has published a paper that demonstrates a fundamental operation for a scalable topological quantum computer. The team also announced the creation of the world’s first topoconductor—more on that later—and first QPU architecture with a topological core, called the Majorana 1. Chetan Nayak, I can’t wait to find out what all of this is … welcome to Ideas!


CHETAN NAYAK: Thank you. Thanks for having me. And I’m excited to tell you about this stuff.

HUIZINGA: Well, you have a huge list of accomplishments, accolades, and awards—little alliteration there. But I want to start by getting to know a bit more about you and what got you there. So specifically, what’s your “research origin story,” as it were? What big idea inspired you to study the smallest parts of the universe?

NAYAK: It’s a great question. I think if I really have to go back to the origin story, it starts when I was a kid, you know, probably a preteen. And, you know, I’d go to bookstores to … I know, I guess many of the people listening to this may not know what that is, [LAUGHTER] but there used to be these brick-and-mortar storefronts where they would sell books, physical books, …

HUIZINGA: Right.

NAYAK: … and I’d go to bookstores to, you know, to buy books to read, you know, fiction. But I would browse through them, and there’d be a nonfiction section. And often there’d be used books, you know, sometimes used textbooks or used popular science books. And I remember, even though they were bookstores, not libraries, I would spend a lot of time there leafing through books and got exposed to—accidentally exposed to—a lot of ideas that I wouldn’t otherwise have been. You know, just, sort of, you know, I maybe went there, you know, looking to pick up the next Lord of the Rings book, and while I was there, you know, wander into a book that was sort of explaining the theory of relativity to non-scientists. And I remember leafing through those books and actually reading about Einstein’s discoveries, you know, most famously E = mc2, but actually a lot of those books were explaining these thought experiments that Einstein did where he was thinking about, you know, if he were on a train that were traveling at the speed of light, what would light look like to him? [LAUGHTER] Would he catch up to it? You know, and all these incredible thought experiments that he did to try to figure out, you know, to really play around with the basic laws as they were currently understood, of physics, and by, you know, stretching and pulling them and going into extreme … taking them to extreme situations, you could either find the flaws in them or in some cases see what the next steps were. And that was, you know, really inspirational to me. I, you know, around the same time, also started leafing through various advanced math books and a little later picked up a book on calculus and started flipping through it, used book with, like, you know, the cover falling apart and the pages starting to fall out. But there was a lot of, you know, accidental discovery of topics through wandering through bookstores, actually. I also, you know, went to this great magnet high school in New York City called Stuyvesant High School, where I was surrounded by people who were really interested in science and math and technology. So I think, you know, for me, that origin story really starts, you know, maybe even earlier, but at least in my preteen years when, you know, I went through a process of learning new things and trying to understand them in my own way. And the more you do that, eventually you find maybe you’re understanding things in a little different way than anybody else ever did. And then pretty soon, you know, you’re discovering things that no one’s ever discovered before. So that’s, sort of, how it started.

HUIZINGA: Yeah. Well, I want to drill in a little bit there because you’ve brought to mind a couple of images. One is from a Harry Potter movie, And the Half-Blood Prince, where he discovers the potions handbook, but it’s all torn up and they were fighting about who didn’t get that book. And it turned out to be … so there’s you in a bookstore somewhere between the sci-fi and the non-fi, shall we call it. And you’re, kind of, melding the two together. And I love how you say, I was accidentally exposed. [LAUGHTER] Sounds kind of like radiation of some kind and you’ve turned into a scientist. A little bit more on that. This idea of quantum, because you’ve mentioned Albert Einstein, there’s quantum physics, quantum mechanics, now quantum computing. Do these all go together? I mean, what came out of what in that initial, sort of, exploration with you? Where did you start getting interested in the quantum of things?

NAYAK: Yeah, so I definitely started with relativity, not quantum. That was the first thing I heard about. And I would say in a lot of ways, that’s the easier one. I mean, those are the two big revolutions in physics in the 20th century, relativity and quantum theory, and quantum mechanics is by far, at least for me and for many people, the harder one to get your head around because it is so counterintuitive. Quantum mechanics in some sense, or quantum theory in some sense, for most of what we experience in the world is down many abstraction layers away from what we experience. What I find amazing is that the people who created, you know, discovered quantum mechanics, they had nothing but the equations to guide them. You know, they didn’t really understand what they were doing. They knew that there were some holes or gaps in the fundamental theory, and they kind of stumbled into these equations, and they gave the right answers, and they just had to follow it. I was actually just a few weeks ago, I was in Arosa, which is a small Swiss town in the Alps. That’s actually the town where Schrödinger discovered Schrödinger’s equation.

HUIZINGA: No!

NAYAK: Yeah, a hundred years ago, this summer …

HUIZINGA: Amazing!

NAYAK: So Schrödinger suffered tuberculosis, which eventually actually killed him much later in his life. And so he went into the mountains …

HUIZINGA: … for the cure.

NAYAK: … for his health, yeah, to a sanatorium to recover from tuberculosis. And while he was there in Arosa, he discovered his equation. And it’s a remarkable story because, you know, that equation, he didn’t even know what the equation meant. He just knew, well, particles are waves, and waves have wave equations. Because that’s ultimately Maxwell’s equation. You can derive wave equations for light waves and radio waves and microwaves, x-rays. And he said, you know, there has to be a wave equation for this thing and this wave equation needs to somehow correctly predict the energy levels in hydrogen.

HUIZINGA: Oh, my gosh.

NAYAK: And he, you know, worked out this equation and then solved it, which is for that time period not entirely trivial. And he got correctly the energy levels of hydrogen, which people had … the spectra, the different wavelengths of light that hydrogen emits. And lo and behold, it works. He had no idea why. No idea what it even meant. And, um, but knew that he was onto something. And then remarkably, other people were able to build on what he’d done, were able to say, no, there must be a grain of truth here, if not the whole story, and let’s build on this, and let’s make something that is richer and encompasses more and try to understand the connections between this and other things. And Heisenberg was, around the same time, developing his what’s called matrix mechanics, a different way of thinking about quantum mechanics, and then people realized the connections between those, like Dirac. So it’s a remarkable story how people, how scientists, took these things they understood, you know, imposed on it a certain level of mathematical consistency and a need for the math to predict things that you could observe, and once you had, sort of, the internal mathematical consistency and it was correctly explaining a couple of data points about the world, you could build this huge edifice based on that. And so that was really impressive to me as I learned that. And that’s 100 years ago! It was 1925.

HUIZINGA: Right. Well, let me …

NAYAK: And that’s quantum mechanics!

HUIZINGA: OK.

NAYAK: You’re probably going to say, well, how does quantum computing fit into this, you know? [LAUGHTER] Right? And that’s a much later development. People spent a long time just trying to understand quantum mechanics, extend it, use it to understand more things, to understand, you know, other particles. So it was initially introduced to understand the electron, but you could understand atoms, molecules, and subatomic things and quarks and positrons. So there was a rich, you know, decades of development and understanding, and then eventually it got combined with relativity, at least to some extent. So there was a lot to do there to really understand and build upon the early discoveries of quantum mechanics. One of those directions, which was kicked off by Feynman around, I think, 1982 and independently by a Russian mathematician named Yuri Manin was, OK, great, you know, today’s computers, again, is many abstraction layers away from anything quantum mechanical, and in fact, it’s sort of separated from the quantum world by many classical abstraction layers. But what if we built a technology that didn’t do that? Like, that’s a choice. It was a choice. It was a choice that was partially forced on us just because of the scale of the things we could build. But as computers get smaller and smaller and the way Moore’s law is heading, you know, at some point, you’re going to get very close to that point at which you cannot abstract away quantum mechanics, [LAUGHTER] where you must deal with quantum mechanics, and it’s part and parcel of everything. You are not in the fortunate case where, out of quantum theory has emerged the classical world that behaves the way we expect it to intuitively. And, you know, once we go past that, that potentially is really catastrophic and scary because, you know, you’re trying to make things smaller for the sake of, you know, Moore’s law and for making computers faster and potentially more energy efficient. But, you know, if you get down to this place where the momentum and position of things, of the electrons, you know, or of the currents that you’re relying on for computation, if they’re not simultaneously well-defined, how are you going to compute with that? It looks like this is all going to break down. And so it looks like a real crisis. But, you know, what they realized and what Feynman realized was actually it’s an opportunity. It’s actually not just a crisis. Because if you do it the right way, then actually it gives you way more computational power than you would otherwise have. And so rather than looking at it as a crisis, it’s an opportunity. And it’s an opportunity to do something that would be otherwise unimaginable.

HUIZINGA: Chetan, you mentioned a bunch of names there. I have to say I feel sorry for Dr. Schrödinger because most of what he’s known for to people outside your field is a cat, a mysterious cat in a box, meme after meme. But you’ve mentioned a number of really important scientists in the field of quantum everything. I wonder, who are your particular quantum heroes? Are there any particular, sort of, modern-day 21st-century or 20th-century people that have influenced you in such a way that it’s like, I really want to go deep here?

NAYAK: Well, definitely, you know, the one person I mentioned, Feynman, is later, so he’s the second wave, you could say, of, OK, so if the first wave is like Schrödinger and Heisenberg, and you could say Einstein was the leading edge of that first wave, and Planck. But … and the second wave, maybe you’d say is, is, I don’t know, if Dirac is first or second wave. You might say Dirac is second wave and potentially Landau, a great Russian physicist, second wave. Then maybe Feynman’s the third wave, I guess? I’m not sure if he’s second or third wave, but anyway, he’s post-war and was really instrumental in the founding of quantum computing as a field. He had a famous statement, which is, you know, in his lectures, “There’s always room at the bottom.” And, you know, what he was thinking about there was, you can go to these extreme conditions, like very low temperatures and in some cases very high magnetic fields, and new phenomena emerge when you go there, phenomena that you wouldn’t otherwise observe. And in a lot of ways, many of the early quantum theorists, to some extent, were extreme reductionists because, you know, they were really trying to understand smaller and smaller things and things that in some ways are more and more basic. At the same time, you know, some of them, if not all of them, at the same time held in their mind the idea that, you know, actually, more complex behaviors emerge out of simple constituents. Einstein famously, in his miracle year of 1905, one of the things he did was he discovered … he proposed the theory of Brownian motion, which is an emergent behavior that relies on underlying atomic theory, but it is several layers of abstraction away from the underlying atoms and molecules and it’s a macroscopic thing. So Schrödinger famously, among the other things, he’s the person who came up with the concept of entanglement …

HUIZINGA: Yes.

NAYAK: … in understanding his theory. And for that matter, Schrödinger’s cat is a way to understand the paradoxes that occur when the classical world emerges from quantum mechanics. So they were thinking a lot about how these really incredible, complicated things arise or emerge from very simple constituents. And I think Feynman is one those people who really bridged that as a post-war scientist because he was thinking a lot about quantum electrodynamics and the basic underlying theory of electrons and photons and how they interact. But he also thought a lot about liquid helium and ultimately about quantum computing. Motivation for him in quantum computing was, you have these complex systems with many underlying constituents and it’s really hard to solve the equation. The equations are basically unsolvable.

HUIZINGA: Right.

NAYAK: They’re complicated equations. You can’t just, sort of, solve them analytically. Schrödinger was able to do that with his equation because it was one electron, one proton, OK. But when you have, you know, for a typical solid, you’ll have Avogadro’s number of electrons and ions inside something like that, there’s no way you’re going to solve that. And what Feynman recognized, as others did, really, coming back to Schrödinger’s observation on entanglement, is you actually can’t even put it on a computer and solve a problem like that. And in fact, it’s not just that with Avogadro’s number you can’t; you can’t put it on a computer and solve it with a thousand, you know, [LAUGHTER] atoms, right? And actually, you aren’t even going to be able to do it with a hundred, right. And when I say you can’t do that on a computer, it’s not that, well, datacenters are getting bigger, and we’re going to have gigawatt datacenters, and then that’s the point at which we’ll be able to see—no, the fact is the amazing thing about quantum theory is if, you know, you go from, let’s say, you’re trying to solve a problem with 1,000 atoms in it. You know, if you go to 1,001, you’re doubling the size of the problem. As far as if you were to store it on a cloud, just to store the problem on the classical computer, just to store the answer, I should say, on a classical computer, you’d have to double the size. So there’s no chance of getting to 100, even if, you know, with all the buildout of datacenters that’s happening at this amazing pace, which is fantastic and is driving all these amazing advances in AI, that buildout is never going to lead to a classical computer that can even store the answer to a difficult quantum mechanical problem.

HUIZINGA: Yeah, so basically in answer to the “who are your quantum heroes,” you’ve kind of given us a little history of quantum computing, kind of, the leadup and the questions that prompted it. So we’ll get back to that in one second, because I want you to go a little bit further on where we are today. But before we do that, you’ve also alluded to something that’s super interesting to me, which is in light of all the recent advances and claims in AI, especially generative AI, that are making claims like we’ll be able to shorten the timeline on scientific discovery and things like that, why then, do we need quantum computing? Why do we need it?

NAYAK: Great question, so at least AI is … AI and machine learning, at least so far, is only as good as the training data that you have for it. So if you train AI on all the data we have, and if you train AI on problems we can solve, which at some level are classical, you will be able to solve classical problems. Now, protein folding is one of those problems where the solution is basically classical, very complicated and difficult to predict but basically classical, and there was a lot of data on it, right. And so it was clearly a big data problem that’s basically classical. As far as we know, there’s no classical way to simulate or mimic quantum systems at scale, that there’s a clean separation between the classical and quantum worlds. And so, you know, that the quantum theory is the fundamental theory of the world, and there is no hidden classical model that is lurking [LAUGHTER] in the background behind it, and people sometimes would call these things like hidden variable theories, you know, which Einstein actually really was hoping, late in his life, that there was. That there was, hiding behind quantum mechanics, some hidden classical theory that was just obscured from our view. We didn’t know enough about it, and the quantum thing was just our best approximation. If that’s true, then, yeah, maybe an AI can actually discover that classical theory that’s hiding behind the quantum world and therefore would be able to discover it and answer the problems we need to answer. But that’s almost certainly not the case. You know, there’s just so much experimental evidence about the correctness of quantum mechanics and quantum theory and many experiments that really, kind of, rule out many aspects of such a classical theory that I think we’re fairly confident there isn’t going to be some classical approximation or underlying theory hiding behind quantum mechanics. And therefore, an AI model, which at the end of the day is some kind of very large matrix—you know, a neural network is some very large classical model obeying some very classical rules about, you take inputs and you produce outputs through many layers—that that’s not going to produce, you know, a quantum theory. Now, on the other hand, if you have a quantum computer and you can use that quantum computer to train an AI model, then the AI model is learning—you’re teaching it quantum mechanics—and at least within a certain realm of quantum problems, it can interpolate what we’ve learned about quantum mechanics and quantum problems to solve new problems that, you know, you hadn’t already solved. Actually, you know, like I said, in the early days, I was reading these books and flipping through these bookstores, and I’d sometimes figure out my own ways to solve problems different from how it was in the books. And then eventually I ended up solving problems that hadn’t been solved. Well, that’s sort of what an AI does, right? It trains off of the internet or off of playing chess against itself many times. You know, it learns and then takes that and eventually by learning its own way to do things, you know, it learns things that we as humans haven’t discovered yet.

HUIZINGA: Yeah.

NAYAK: And it could probably do that with quantum mechanics if it were trained on quantum data. So, but without that, you know, the world is ultimately quantum mechanical. It’s not classical. And so something classical is not going to be a general-purpose substitute for quantum theory.

HUIZINGA: OK, Chetan, this is fascinating. And as you’ve talked about pretty well everything so far, that’s given us a really good, sort of, background on quantum history as we know it in our time. Talk a little bit about where we are now, particularly—and we’re going get into topology in a minute, topological stuff—but I want to know where you feel like the science is now, and be as concise as you can because I really want get to your cool work that we’re going to talk about. And this question includes, what’s a Majorana and why is it important?

NAYAK: Yeah. So … OK, unfortunately, it won’t be that concise an answer. OK, so, you know, early ’80s, ideas about quantum computing were put forward. But I think most people thought, A, this is going to be very difficult, you know, to do. And I think, B, it wasn’t clear that there was enough motivation. You know, I think Feynman said, yes, if you really want to simulate quantum systems, you need a quantum computer. And I think at that point, people weren’t really sure, is that the most pressing thing in the world? You know, simulating quantum systems? It’s great to understand more about physics, understand more about materials, understand more about chemistry, but we weren’t even at that stage, I think, there where, hey, that’s the limiting thing that’s limiting progress for society. And then, secondly, there was also this feeling that, you know, what you’re really doing is some kind of analog computing. You know, this doesn’t feel digital, and if it doesn’t feel digital, there’s this question about error correction and how reliable is it going to be. So Peter Shor actually, you know, did two amazing things, one of which is a little more famous in the general public but one of which is probably more important technically, is he did these two amazing things in the mid-’90s. He first came up with Shor’s algorithm, where he said, if you have a quantum computer, yeah, great for simulating quantum systems, but actually you can also factor large numbers. You can find the prime factors of large numbers, and the difficulty of that problem is the underlying security feature under RSA [encryption], and many of these public key cryptography systems rely on certain types of problems that are really hard. It’s easy to multiply two large primes together and get the output, and you can use that to encrypt data. But to decrypt it, you need to know those two numbers, and it’s hard to find those factors. What Peter Shor discovered is that ideally, a quantum computer, an ideal quantum computer, would be really good at this, OK. So that was the first discovery. And at that point, what seemed at the time an academic problem of simulating quantum systems, which seemed like in Feynman’s vision, that’s what quantum computers are for, that seemingly academic problem, all of a sudden, also, you know, it turns out there’s this very important both financially and … economically and national security-wise other application of a quantum computer. And a lot of people sat up and took notice at that point. So that’s huge. But then there’s a second thing that he, you know, discovered, which was quantum error correction. Because everyone, when he first discovered it, said, sure, ideally that’s how a quantum computer works. But quantum error correction, you know, this thing sounds like an analog system. How are you going to correct errors? This thing will never work because it’ll never operate perfectly. Schrödinger’s problem with the cat’s going to happen, is that you’re going to have entanglement. The thing is going to just end up being basically classical, and you’ll lose all the supposed gains you’re getting from quantum mechanics. And quantum error correction, that second discovery of Peter Shors, really, you know, suddenly made it look like, OK, at least in principle, this thing can happen. And people built on that. Peter Shor’s original quantum error correction, I would say, it was based on a lot of ideas from classical error correction. 
Because you have the same problem with classical communication and classical computing. Alexei Kitaev then came up with, you know, a new set of quantum error correction procedures, which really don’t rely in the same way on classical error correction. Or if they do, it’s more indirect and in many ways rely on ideas in topology and physics. And, you know, those ideas, which lead to quantum error correcting codes, but also ideas about what kind of underlying physical systems would have built-in hardware error protection, led to what we now call topological quantum computing and topological qubits, because it’s this idea that, you know, just like people went from the early days of computers from vacuum tubes to silicon, actually, initially germanium transistors and then silicon transistors, that similarly that you had to have the right underlying material in order to make qubits.

HUIZINGA: OK.

NAYAK: And that the right underlying material platform, just as for classical computing, it’s been silicon for decades and decades, it was going to be at one of these so-called topological states of matter. And that these would be states of matter whose defining feature, in a sense, would be that they protect quantum information from errors, at least to some extent. Nothing’s perfect, but, you know, in a controllable way so that you can make it better as needed and good enough that any subsequent error correction that you might call software-level error correction would not be so cumbersome and introduce so much overhead as to make a quantum computer impractical. I would say, you know, there were these … the field had a, I would say, a reboot or a rebirth in the mid-1990s, and pretty quickly those ideas, in addition to the applications and algorithms, you know, coalesced around error correction and what’s called fault tolerance. And many of those ideas came, you know, freely interchanged between ideas in topology and the physics of what are called topological phases and, you know, gave birth to this, I would say, to the set of ideas on which Microsoft’s program has been based, which is to look for the right material … create the right material and qubits based on it so that you can get to a quantum computer at scale. Because there’s a number of constraints there. And the work that we’re really excited about right now is about getting the right material and harnessing that material for qubits.

HUIZINGA: Well, let’s talk about that in the context of this paper that you’re publishing and some pretty big news in topology. You just published a paper in Nature that demonstrates—with receipts—a fundamental operation for a scalable topological quantum computer relying on, as I referred to before, Majorana zero modes. That’s super important. So tell us about this and why it’s important.

NAYAK: Yeah, great. So building on what I was just saying about having the right material, what we’re relying on is, to an extent, is superconductivity. So that’s one of the, you know, really cool, amazing things about the physical world. That many metals, including aluminum, for instance, when you cool them down, they’re able to carry electricity with no dissipation, OK. No energy loss associated with that. And that property, the remarkable … that property, what underlies it is that the electrons form up into pairs. These things called Cooper pairs. And those Cooper pairs, their wave functions kind of lock up and go in lockstep, and as a result, actually the number of them fluctuates wildly, you know, in any place locally. And that enables them to, you know, to move easily and carry current. But also, a fundamental feature, because they form pairs, is that there’s a big difference between an even and odd number of electrons. Because if there’s an odd electron, then actually there’s some electron that’s unpaired somewhere, and there’s an energy penalty associated, an energy cost to that. It turns out that that’s not always true. There’s actually a subclass of superconductors called topological superconductors, or topoconductors, as we call them, and topoconductors have this amazing property that actually they’re perfectly OK with an odd number of electrons! In fact, when there’s an odd number of electrons, there isn’t any unpaired electron floating around. But actually, topological superconductors, they don’t have that. That’s the remarkable thing about it. I’ve been warned not to say what I’m about to say, but I’ll just go ahead [LAUGHTER] and say it anyway. I guess that’s bad way to introduce something …

HUIZINGA: No, it’s actually really exciting!

NAYAK: OK, but since you brought up, you know, Harry Potter and the Half-Blood Prince, you know, Voldemort famously split his soul into seven or, I guess, technically eight, accidentally. [LAUGHTER] He split his soul into seven Horcruxes, so in some sense, there was no place where you could say, well, that’s where his soul is.

HUIZINGA: Oh, my gosh!

NAYAK: So Majorana zero modes do kind of the same thing! Like, there’s this unpaired electron potentially in the system, but you can’t find it anywhere. Because to an extent, you’ve actually figured out a way to split it and put it … you know, sometimes we say like you put it at the two ends of the system, but that’s sort of a mathematical construct. The reality is there is no place where that unpaired electron is!

HUIZINGA: That’s crazy. Tell me, before you go on, we’re talking about Majorana. I had to look it up. That’s a guy’s name, right? So do a little dive into what this whole Majorana zero mode is.

NAYAK: Yeah, so Majorana was an Italian physicist, or maybe technically Sicilian physicist. He was very active in the ’20s and ’30s and then just disappeared mysteriously around 1937, ’38, around that time. So no one knows exactly what happened to him. You know, but one of his last works, which I think may have only been published after he disappeared, he proposed this equation called the Majorana equation. And he was actually thinking about neutrinos at the time and particles, subatomic particles that carry no charge. And so, you know, he was thinking about something very, very different from quantum computing, actually, right. So Majorana—didn’t know anything about quantum computing, didn’t know anything about topological superconductors, maybe even didn’t know much about superconductivity at all—was thinking about subatomic particles, but he wrote down this equation for neutral objects, or some things that don’t carry any charge. And so when people started, you know, in the ’90s and 2000s looking at topological superconductors, they realized that there are these things called Majorana zero modes. So, as I said, and let me explain how they enter the story, so Majorana zero modes are … I just said that topological superconductors, there’s no place you can find that even or odd number of electrons. There’s no penalty. Now superconductors, they do have a penalty—and it’s called the energy gap—for breaking a pair. Even topological superconductors. You take a pair, a Cooper pair, you break it, you have to pay that energy cost, OK. And it’s, like, double the energy, in a sense, of having an unpaired electron because you’ve created two unpaired electrons and you break that pair. Now, somehow a topological superconductor has to accommodate that unpaired electron. It turns out the way it accommodates it is it can absorb or emit one of these at the ends of the wire. If you have a topological superconductor, a topoconductor wire, at the ends, it can absorb or emit one of these things. And once it goes into one end, then it’s totally delocalized over the system, and you can’t find it anywhere. You can say, oh, it got absorbed at this end, and you can look and there’s nothing you can tell. Nothing has changed about the other end. It’s now a global property of the whole thing that you actually need to somehow figure out, and I’ll come to this, somehow figure out how to connect the two ends and actually measure the whole thing collectively to see if there’s an even or odd number of electrons. Which is why it’s so great as a qubit because the reason it’s hard for Schrödinger’s cat to be both dead and alive is because you’re going to look at it, and then you look at it, photons are going to bounce off it and you’re going to know if it’s dead or alive. And the thing is, the thing that was slightly paradoxical is actually a person doesn’t have to perceive it. If there’s anything in the environment that, you know, if a photon bounces off, it’s sort of like if a tree falls in the forest …

HUIZINGA: I was just going to say that!

NAYAK: … it still makes a sound. I know! It still makes a sound in the sense that Schrödinger’s cat is still going to be dead or alive once a photon or an air molecule bounces off it because of the fact that it’s gotten entangled with, effectively, the rest of the universe … you know many other parts of the universe at that point. And so the fact that there is no place where you can go and point to that unpaired electron means it does that “even or oddness” which we call parity, whether something’s even or odd is parity. And, you know, these are wires with, you know, 100 million electrons in them. And it’s a difference between 100 million and 100 million and one. You know, because one’s an even or odd number. And that difference, you have to be able to, like, the environment can’t detect it. So it doesn’t get entangled with anything, and so it can actually be dead and alive at the same time, you know, unlike Schrödinger’s cat, and that’s what you need to make a qubit, is to create those superpositions. And so Majorana zero modes are these features of the system that actually don’t actually carry an electrical charge. But they are a place where a single unpaired electron can enter the system and then disappear. And so they are this remarkable thing where you can hide stuff. [LAUGHS]

HUIZINGA: So how does that relate to your paper and the discoveries that you’ve made here?

NAYAK: Yeah, so in an earlier paper … so now the difficulty is you have to actually make this thing. So, you know, you put a lot of problems up front, is that you’re saying, OK, the solution to our problem is we need this new material and we need to harness it for qubits, right. Great. Well, where are we going to get this material from, right? You might discover it in nature. Nature may hand it to you. But in many cases, it doesn’t. And that’s … this is one of those cases where we actually had to engineer the material. And so engineering the material is, it turns out to be a challenge. People had ideas early on that they could put some combination of semiconductors and superconductors. But, you know, for us to really make progress, we realized that, you know, it’s a very particular combination. And we had to develop—and we did develop—simulation capabilities, classical. Unfortunately, we don’t have a quantum computer, so we had to do this classically with classical computers. We had to classically simulate various kinds of materials combinations to find one, or find a class, that would get us into the topological phase. And it turned out lots of details mattered there, OK. It involves a semiconductor, which is indium arsenide. It’s not silicon, and it’s not the second most common semiconductor, which is gallium nitride, which is used in LED lights. It’s something called indium arsenide. It has some uses as an infrared detector, but it’s a different semiconductor. And we’re using it in a nonstandard way, putting it into contact with aluminum and getting, kind of, the best of both worlds of a superconductor and a semiconductor so that we can control it and get into this topological phase. And that’s a previously published paper in American Physical [Society] journal. But that’s great. So that enables … that shows that you can create this state of matter. Now we need to then build on it; we have to harness it, and we have to, as I said, we have to make one of these wires or, in many cases, multiple wires, qubits, et cetera, complex devices, and we need to figure out, how do we measure whether we have 100 million or 100 million and one electrons in one of these wires? And that was the problem we solved, which is we made a device where we took something called a quantum dot—you should think of [it] as a tiny little capacitor—and that quantum dot is coupled to the wire in such a way that the coupling … that an electron—it’s kind of remarkable—an electron can quantum mechanically tunnel from … you know, this is like an electron, you don’t know where it is at any given time. You know, its momentum and its position aren’t well defined. So it’s, you know, an electron whose, let’s say, energy is well defined … actually, there is some probability amplitude that it’s on the wire and not on the dot. Even though it should be on the dot, it actually can, kind of, leak out or quantum mechanically end up on the wire and come back. And because of that fact—the simple fact that its quantum mechanical wave function can actually have it be on the wire—it actually becomes sensitive to that even or oddness.

HUIZINGA: Interesting.

NAYAK: And that causes a small change in the capacitance of this tiny little parallel plate capacitor, effectively, that we have. And that tiny little change in capacitance, which, just to put it into numbers, is a femtofarad, OK. So that’s a decimal point followed by, you know, 15 zeros and a one … 14 zeros and a one. So that’s how tiny it is. That tiny change in the capacitance, if we put it into a larger resonant circuit, then that larger resonant circuit shows a small shift in its resonant frequency, which we can detect. And so what we demonstrated is we can detect the difference, that one electron difference, that even or oddness, which, again, is not a local property of anywhere in the wire, that we can nevertheless detect. And that’s, kind of, the fundamental thing you have to have if you want to be able to use these things for quantum information processing, you know, this parity, you have to be able to measure what that parity is, right. That’s a fundamental thing. Because ultimately, the information you need is classical information. You’re going to want to know the answer to some problem. It’s going to be a string of zeros and ones. You have to measure that. But moreover, the particular architecture we’re using, the basic operations for us are measurements of this type, which is a … it’s a very digital process. The process … I mentioned, sort of, how quantum computing looks a little analog in some ways, but it’s not really analog. Well, that’s very manifestly true in our architecture, that our operations are a succession of measurements that we turn on and off, but different kinds of measurements. And so what the paper shows is that we can do these measurements. We can do them fast. We can do them accurately.

HUIZINGA: OK.

NAYAK: And the additional, you know, announcements that we’re making, you know, right now are work that we’ve done extending and building on that with showing additional types of measurements, a scalable qubit design, and then building on that to multi-qubit arrays.

HUIZINGA: Right.

NAYAK: So that really unlocked our ability to do a number of things. And I think you can see the acceleration now with the announcements we have right now.

HUIZINGA: So, Chetan, you’ve just talked about the idea of living in a classical world and having to simulate quantum stuff.

NAYAK: Yup.

HUIZINGA: Tell us about the full stack here and how we go from, in your mind, from quantum computing at the bottom all the way to the top.

NAYAK: OK, so one thing to keep in mind is quantum computers are not a general-purpose accelerator for every problem. You know, so people sometimes say, well, quantum computers are just going to be like classical computers but faster. And that’s not the case. So I really want to emphasize the fact that quantum computers are an entirely different modality of computing. You know, there are certain problems which quantum computers are not just faster at than classical computers but quantum computers can solve and classical computers have no chance of solving. On the other hand, there are lots of things that classical computers are good at that quantum computers aren’t going to be good at, because it’s not going to give you any big scale up. Like a lot of big data problems where you have lots of classical data, you know, a quantum computer with, let’s say, let’s call it 1,000 qubits, and here I mean 1,000 logical qubits, and we’ll come back to what that means, but 1,000 error-corrected qubits can solve problems that you have no chance of solving with a classical computer, even with all the world’s computing power. But in fact, if it were 1,000 qubits, you would have to take every single atom in the entire universe, OK, and turn that into a transistor, and it still wouldn’t be big enough. You don’t have enough bytes, even if every single atom in the universe were a byte. So that’s how big these quantum problems are when you try to store them on a classical computer, just to store the answer, let’s say.

HUIZINGA: Yeah.

NAYAK: But conversely, if you have a lot of classical data, like all the data in the internet, which we train, you know, our AI models with, you can’t store that on 1,000 qubits, right. You actually can’t really store more than 1,000 bits of classical information on 1,000 qubits. So many things that we have big data in classically, we don’t have the ability to really, truly store within a quantum computer in a way that you can do anything with it. So we should definitely not view quantum computers as replacing classical computers. There’s lots of things that classical computers are already good at and we’re not trying to do those things. But there are many things that classical computers are not good at at all. A quantum computer we should think of as a complementary thing, an accelerator for those types of problems. It will have to work in collaboration with a classical computer that is going to do the classical steps, and the quantum computer will do the quantum steps. So that’s one thing to just keep in mind. When we talk about a quantum computer, it is part of a larger computing, you know, framework where there are many classical elements. It might be CPUs, it might be GPUs, might be custom ASICs for certain things, and then a quantum computer, you know, a quantum processor, as well. So …

HUIZINGA: Is that called a QPU?

NAYAK: A QPU is the quantum processing unit, exactly! So we’ll have CPUs, GPUs, and QPUs. And so that is, you know, at the lowest layer of that stack, is the underlying substrate, physical substrate. That’s our topoconductor. It’s the material which we build our QPUs. That’s the quantum processing unit. The quantum processing unit includes all of the qubits that we have in our architecture on a single chip. And that’s, kind of, one of the big key features, key design features, that the qubits be small and small and manufacturable on a single wafer. And then the QPU also has to enable that quantum world to talk to the classical world …

HUIZINGA: Right.

NAYAK: … because you have to send it, you know, instructions and you have to get back answers. And for us, that is turning on and off measurements because our instructions are a sequence of measurements. And then, we ultimately have to get back a string of zeros and ones. But that initially is these measurements where we’re getting, you know, phase shifts on microwaves, and … which are in turn telling us about small capacitance shifts, which are in turn telling us the parity of electrons in a wire.

HUIZINGA: Right.

NAYAK: So really, this is a quantum machine in which, you know, you have the qubits that are built on the quantum plane. You’ve then got this quantum-classical interface where the classical information is going in and out of the quantum processor. And then there’s a lot of classical processing that has to happen, both to enable error correction and to enable computations. And the whole thing has to be inside of a cryogenic environment. So it’s a very special environment in which we … in which, A, it’s kept cold because that’s what you need in order to have a topoconductor, and that’s also what you need in order just in general for the qubits to be very stable. So that … when we talk about the full stack, just on the hardware side, there are many layers to this. And then of course, you know, there is the classical firmware that takes instructions and turns them into the physical things that need to happen. And then, of course, we have algorithms and then ultimately applications.

HUIZINGA: Yeah, so I would say, Chetan, that people can probably go do their own little research on how you go from temperatures that are lower than deep space to the room you’re working in. And we don’t have time to unpack that on this show. And also, I was going to ask you what could possibly go wrong if you indeed got everything right. And you mentioned earlier about, you know, what happens in an AI world if we get everything right. If you put quantum and AI together, it’s an interesting question, what that world looks like. Can you just take a brief second to say that you’re thinking about what could happen to cryptography, to, you know, just all kinds of things that we might be wondering about in a post-quantum world?

NAYAK: Great question. So, you know, first of all, you know, one of the things I want to, kind of, emphasize is, ultimately, a lot of, you know, when we think about the potential for technology, often the limit comes down to physics. There are physics limits. You know, if you think about, like, interstellar travel and things like that, well, the speed of light is kind of a hard cutoff, [LAUGHTER] and actually, you’re not going to be able to go faster than the speed of light, and you have to bake that in. That ultimately, you know, if you think of a datacenter, ultimately, like there’s a certain amount of energy, and there’s a certain amount of cooling power you have. And you can say, well, this datacenter is 100 megawatts, and then in the future, we’ll have a gigawatt to use it. But ultimately, then that energy has to come from somewhere, and you’ve got some hard physical constraints. So similarly, you could ask, you know, with quantum computers, what are the hard physical constraints? What are the things that just … because you can’t make a perpetual motion machine; you can’t violate, I think, the laws of quantum mechanics. And I think in the early days, there was this concern that, you know, this idea relies on violating something. You’re doing something that’s not going to work. You know, I’d say the theory of quantum error correction, the theory of fault tolerance, you know, many of the algorithms have been developed, they really do show that there is no fundamental physical constraint saying that this isn’t going to happen, you know. That, you know, that somehow you would need to have either more power than you can really generate or you would need to go much colder than you can actually get. That, you know, there’s no physical, you know, no-go result. So that’s an important thing to keep in mind. Now, the thing is, some people might then be tempted to say, well, OK, now it’s just an engineering problem because we know this in principle can work, and we just have to figure out how to make it work. But the truth is, there isn’t any such, like, hard barrier where you say, well, oh, up until here, it’s fundamental physics, and then beyond this, it’s just an engineering problem. The reality is, you know, new difficulties and challenges arise every step along the way. And one person might call it an engineering or an implementation challenge, and one person may call it a fundamental, you know, barrier obstruction, and I think people will probably profitably disagree, you know, agree to disagree on, like, where that goes. I think for us, like, it was really crucial, you know, as we look out at the scale at which quantum computers are going to really make an impact. We’re going to need thousands, you know, hundreds to thousands of logical qubits. That is, error-corrected qubits. And when you look at what that means, that means really a million physical qubits. That is a very large scale in a world in which people have mostly learned what we know about these things from 10 to 100 qubits. To project out from that to a million, you know, it would surprise me if the solutions that are optimal for 10 to 100 qubits are the same solutions that are optimal for a million qubits, right.

HUIZINGA: Yeah.

NAYAK: And that has been a motivation for us, is let’s try to think, based on what we now know, of things that at least have a chance to work at that million qubit. Let’s not do anything that looks like it’s going to clearly hit a dead end before then.

HUIZINGA: Right.

NAYAK: Now, obviously in science, nothing is certain, and you learn new things along the way, but we didn’t want to start out with things that looked like they were not going to be, you know, work for a million qubits. That was the reason that we developed this new material, that we created this, engineered this new material, you know, these topoconductors, precisely because we said we need to have a material that can give us something where we can operate it fast and make it small and be able to control these things. So, you know, I think that’s one key thing. And, you know, what we’ve demonstrated now is that we can harness this; that we’ve got a qubit. And that’s why we have a lot of confidence that, you know, these are things that aren’t going to be decades away. That these things are going to be years away. And that was the basis for our interaction with DARPA [Defense Advanced Research Projects Agency]. We’ve just been … signed a contract with DARPA to go into the next phase of the DARPA US2QC program. And, you know, DARPA, the US government, wants to see a fault-tolerant quantum computer. And … because they do not want any surprises.

HUIZINGA: Right?!? [LAUGHS]

NAYAK: And, you know, there are people out there who said, you know, quantum computers are decades away; don’t worry about it. But I think the US government realizes they might be years, not decades away, and they want to get ahead of that. And so that’s why they’ve entered into this agreement with us and the contract with us.

HUIZINGA: Yeah.

NAYAK: And so that is, you know, the thing I just want to make sure that, you know, listeners to the podcast understand that we are, you know, the reason that we fundamentally re-engineered, re-architected, what we think a quantum computer should look like and what the qubit should be and even … going all the way down to the underlying materials was … which is high risk, right? I mean, there was no guarantee … there’s no guarantee that any of this is going to work, A. And, B, there was no guarantee we would even be able to do the things we’ve done so far. I mean, you know, that’s the nature of it. If you’re going to try to do something really different, you’re going to have to take risks. And we did take risks by really starting at, you know, the ground floor and trying to redesign and re-engineer these things. So that was a necessary part of this journey and the story, was for us to re-engineer these things in a high-risk way. What that leads to is, you know, potentially changing that timeline. And so in that context, it’s really important to make this transition to post-quantum crypto because, you know, the cryptography systems in use up until now are things that are not safe from quantum attacks if you have a utility-scale quantum computer. We do know that there are crypto systems which, at least as far as we know, appear to be safe from quantum attacks. That’s what’s called post-quantum cryptography. You know, they rely on different types of hard math problems, which quantum computers aren’t probably good at. And so, you know, and changing over to a new crypto standard isn’t something that happens at the flip of a switch.

HUIZINGA: No.

NAYAK: It’s something that takes time. You know, first, you know, the early part of that was based around the National Institute of Standards and Technology aligning around one or a few standard systems that people would implement, which they certified would be quantum safe and, you know, those processes have occurred. And so now is the time to switch over. Given that we know that we can do this and that it won’t happen overnight, now’s the time to make that switch.

HUIZINGA: And we’ve had several cryptographers on the show who’ve been working on this for years. It’s not like they’re just starting. They saw this coming even before you had some solidity in your work. But listen, I would love to talk to you for hours, but we’re coming to a close here. And as we close, I want to refer to a conversation you had with distinguished university professor Sankar Das Sarma. He suggested that with the emergence of Majorana zero modes, you had reached the end of the beginning and that you were now sort of embarking on the beginning of the end in this work. Well, maybe that’s a sort of romanticized vision of what it is. But could you give us a little bit of a hint on what are the next milestones on your road to a scalable, reliable quantum computer, and what’s on your research roadmap to reach them?

NAYAK: Yeah, so interestingly, we actually just also posted on the arXiv a paper that shows some aspects of our roadmap, kind of the more scientific aspects of our roadmap. And that roadmap is, kind of, continuously going from the scientific discovery phase through the engineering phase, OK. Again, as I said, it’s a matter of debate and even taste of what exactly you want to call scientific discovery versus engineering, but—which will be hotly debated, I’m sure—but it is definitely a continuum that’s going more towards … from one towards the other. And I would say, you know, at a high level, logical qubits, you know, error-corrected, reliable qubits, are, you know, the basis of quantum computation at scale, and developing, demonstrating, and building those logical qubits, and logical qubits at scale, is kind of a big thing that—for us and for the whole industry—is, I would say, sort of, the next level of quantum computing. Jason Zander wrote this blog where he talked about level one, level two, level three, where level one was this NISQ—noisy intermediate-scale quantum—era; level two is foundations of, you know, reliable and logical qubits; and level three is the, you know, at-scale logical qubits. I think we’re heading towards level two, and so in my mind, that’s sort of, you know, the next North Star is really around that. I think there will be a lot of very interesting and important things that are more technical and maybe are not as accessible to a big audience. But I’d say that’s, kind of, the thing to keep in mind as a big exciting thing happening in the field.

HUIZINGA: Yeah. Well, Chetan Nayak, what a ride this show has been. I’m going to be watching this space—and the timelines thereof because they keep getting adjusted!

[MUSIC]

Thank you for taking time to share your important work with us today.

NAYAK: Thank you very much, my pleasure!

[MUSIC FADES]

The post Ideas: Quantum computing redefined with Chetan Nayak appeared first on Microsoft Research.

Read More

Microsoft Research and Physics Wallah team up to enhance AI-based tutoring

Microsoft Research and Physics Wallah team up to enhance AI-based tutoring

Physics Wallah blog | education icons

In India, limited resources, geographical constraints, and economic factors present barriers to quality higher education for some students.

A shortage of teachers, particularly in remote or low-income areas, makes it harder for students to receive the guidance they need to prepare for highly competitive professional and academic programs. Microsoft Research is developing new algorithms and techniques that are enabling Physics Wallah, a growing educational company, to make its AI-based tutoring services more accurate and reliable, to better support students on their education journey.

As in other countries, many Indian students purchase coaching and tutoring services to prepare for entrance exams at top institutions. This includes offline coaching, where hundreds of students meet in a classroom staffed by teachers covering a structured curriculum. Online coaching enables students to learn remotely in a virtual classroom. Hybrid coaching delivers virtual lessons in a physical classroom.

Offline courses can cost as much as 100,000 Indian rupees a year—equivalent to hundreds of U.S. dollars. This puts them out of reach for many lower income students living in smaller and mid-sized Indian cities, as well as rural villages. Online courses are much more affordable. They allow students to work at their own pace by providing high-quality web-based content supported by teachers who work remotely.

Vineet Govil

Meeting this need is the mission of Physics Wallah. The company uses AI to offer on-demand tutoring at scale, curating volumes of standard science- and math-related content to provide the best answers. Some 2 million students use the Physics Wallah platform every day, at a fraction of the cost of offline tutoring. For example, its prep courses for the Joint Entrance Examination (JEE), which is required for admission to engineering and technology programs, and the National Eligibility cum Entrance Test (NEET), a required entrance exam for medical and dental school candidates, cost between 4,200 and 4,500 rupees per year. That’s roughly 50 U.S. dollars.

“The mantra here really is how do we provide quality education in an affordable manner and accessible to every student, regardless of who they are or where they come from.”

—Vineet Govil, Chief Technology and Product Officer, Physics Wallah

Microsoft Research India’s collaboration with Physics Wallah is part of a 20-year legacy of supporting emerging Indian companies, underscored by the January 2025 announcement that Microsoft will invest $3 billion in cloud and AI infrastructure to accelerate the adoption of AI, skilling, and innovation.

Physics Wallah has developed an AI-driven educational suite, Alakh AI, leveraging OpenAI’s GPT-4o model through Microsoft Azure OpenAI Service. Alakh AI’s flagship offerings include AI Guru and the Smart Doubt Engine, both designed to transform the learning experience in and beyond the classroom.

  • AI Guru acts as a personal academic tutor, delivering adaptive guidance based on a student’s progress, real-time question-solving, and customized content that evolves with their learning journey.
  • Smart Doubt Engine is an AI tool through which students can ask questions (also known as “doubts” in Indian English) during live classes and receive instant responses.

Additionally, the Alakh AI suite includes:

  • AI Grader for subjective answer evaluation without human intervention
  • Sahayak for crafting hyper-personalized learning paths tailored to individual students’ needs

This innovative ecosystem elevates learning efficiency and accessibility for students.

AI Guru in action – A student asks, “Explain Newton’s First Law,” and the AI tutor provides a detailed explanation along with two videos for further learning.
Smart Doubt Engine in action – A student asks a clarifying question about the directrix during a live class, and the AI provides a detailed explanation in real time.

How does AI Guru work?

Let’s say a student had a question about Newton’s laws of motion, a core concept in physics. She would type her query into the AI Guru chat window (she could also just talk to it or upload an image from a textbook) and receive a text answer plus images derived from standard textbooks and curated content, typically in just a few seconds. AI Guru also provides a short video where a teacher offers additional context.

Getting the technology right

The Alakh AI suite is powered by OpenAI’s foundational models GPT-4 and GPT-4o, integrated with a retrieval-augmented generation (RAG) architecture. It leverages Physics Wallah’s rich repository of high-quality curated content—developed and refined over several years—along with continuous updates from subject matter experts to ensure new materials, textbooks, tutorials, and question banks are seamlessly incorporated. Despite considerable progress, the existing AI sometimes falters when navigating complex academic problems.
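
To make the retrieval step concrete, here is a minimal sketch of a pipeline with this general shape: find the curated passages most relevant to a query, then assemble a grounded prompt for the model. The tiny corpus, the TF-IDF retriever, and the prompt wording are illustrative assumptions, not Physics Wallah’s production system.

```python
# A minimal RAG sketch (illustrative only): retrieve curated passages, then
# ground the model's prompt in them. The corpus, retriever, and prompt below
# are hypothetical stand-ins for Physics Wallah's curated content and its
# Azure OpenAI-based generation step.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

curated_content = [
    "Newton's first law: a body stays at rest or in uniform motion unless a net force acts on it.",
    "Newton's second law: F = m * a relates net force, mass, and acceleration.",
    "The directrix of a parabola is the fixed line that, with the focus, defines the curve.",
]

vectorizer = TfidfVectorizer().fit(curated_content)
corpus_vectors = vectorizer.transform(curated_content)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k curated passages most similar to the student's query."""
    scores = cosine_similarity(vectorizer.transform([query]), corpus_vectors)[0]
    top_indices = scores.argsort()[::-1][:k]
    return [curated_content[i] for i in top_indices]

def build_prompt(query: str) -> str:
    """Assemble a grounded prompt; the chat-completion call itself is omitted."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this curated content:\n{context}\n\nStudent question: {query}"

print(build_prompt("Explain Newton's First Law"))
```

In the production system, the retrieved passages would come from the full curated repository and the assembled prompt would be sent to GPT-4o through Azure OpenAI Service.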

“The accuracy level of today’s large language models (LLMs) is not up to the mark where we can provide reliable and satisfactory answers to the students all the time—specifically, if it’s a hard mathematical problem involving complex equations,” Govil said.

That’s one important focus of the collaboration. Researchers from Microsoft Research are developing new algorithms and techniques to enhance the accuracy and reasoning capabilities of AI models. They are now collaborating with Physics Wallah to apply these advancements to the Alakh AI suite, improving its ability to solve complex problems and provide more reliable, step-by-step guidance to students. A key challenge is the nature of student queries, which are often ambiguous and involve multimodal inputs—text, images, videos, or audio—requiring the system to interpret these modalities in a unified way. Many STEM problems require breaking down complex queries into logical sub-problems and applying high-order, step-by-step reasoning for consistency. Additionally, integrating domain-specific knowledge in advanced math, physics, chemistry, and biology requires contextualization and seamless retrieval of specialized, grade-appropriate information.

Microsoft Research is working with Physics Wallah to move beyond traditional next-token prediction and develop AI systems that approach reliable, systematic, step-by-step problem-solving.

That includes ongoing work to enhance the model’s reasoning capabilities and deliver more accurate query answers on complex JEE math problems. Instead of just providing the final answer, the underlying models now break problems into step-by-step solutions. That helps students learn how to solve the actual problems. The AI can also review student answers, detect mistakes, and give detailed feedback, acting as a personal tutor to guide students, improve their understanding, and enhance their learning experience.
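
As a rough illustration of how that behavior can be elicited, the sketch below defines two hypothetical message templates: one requests a numbered, step-by-step solution, and the other asks the model to check a student’s attempt and explain the first mistake. The wording is an assumption; the actual prompts used in Alakh AI are not public.

```python
# Hypothetical prompt templates in the common chat-message format: one for
# stepwise solving, one for reviewing a student's attempt.
def solver_messages(problem: str) -> list[dict]:
    return [
        {"role": "system",
         "content": "Solve the problem step by step, numbering each step, before stating the final answer."},
        {"role": "user", "content": problem},
    ]

def reviewer_messages(problem: str, student_answer: str) -> list[dict]:
    return [
        {"role": "system",
         "content": "Compare the student's work with a correct solution, identify the first mistake, and explain how to fix it."},
        {"role": "user",
         "content": f"Problem: {problem}\nStudent's work: {student_answer}"},
    ]
```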

Microsoft research podcast

Collaborators: Silica in space with Richard Black and Dexter Greene

College freshman Dexter Greene and Microsoft research manager Richard Black discuss how technology that stores data in glass is supporting students as they expand earlier efforts to communicate what it means to be human to extraterrestrials.


Solving complex problems requires enhancing the reasoning capabilities of both large and small language models by training them not just to generate answers, but to systematically think through and reason about a problem. This requires high-quality reasoning traces—detailed, step-by-step breakdowns of logical problem-solving processes.

To enable this, researchers collaborated with Physics Wallah to curate a dataset of 150,000 high-quality math reasoning traces. These traces serve as the foundation for training specialized small language models (SLMs) using supervised fine-tuning (SFT). Model performance is further refined through training on carefully curated on-policy preference data, ensuring alignment with high-quality reasoning standards. The team’s current Phi-based models have already outperformed leading LLMs and other baselines on complex math problems.
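
A minimal sketch of that supervised fine-tuning step is shown below, assuming a toy dataset of problem and reasoning-trace pairs and using a publicly available Phi model name as a placeholder; the team’s actual data pipeline and the subsequent preference-tuning stage are not reproduced here.

```python
# A minimal SFT sketch (assumptions: placeholder model name, toy data, no
# batching or evaluation). Each example pairs a problem with its full
# step-by-step reasoning trace, not just the final answer.
from torch.optim import AdamW
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "microsoft/phi-2"  # placeholder for a small language model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = AdamW(model.parameters(), lr=1e-5)

traces = [
    {"problem": "Solve x^2 - 5x + 6 = 0.",
     "solution": "Step 1: Factor as (x - 2)(x - 3) = 0. Step 2: So x = 2 or x = 3."},
]

model.train()
for trace in traces:
    text = f"Problem: {trace['problem']}\nSolution: {trace['solution']}{tokenizer.eos_token}"
    batch = tokenizer(text, return_tensors="pt")
    # Standard causal-LM objective: the labels are the input ids themselves.
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```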

“Building AI systems capable of human-like thinking and reasoning represents a significant challenge.”

—Akshay Nambi, Principal Researcher at Microsoft Research India

The next step is to develop a self-evolving learning pipeline using online reinforcement learning techniques, allowing the model to continuously generate high-quality synthetic data that further enhances its capabilities. Additionally, researchers are building a reward model and integrating it with Monte Carlo Tree Search (MCTS) to optimize reasoning and improve inference-time decision-making.
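
The sketch below is a deliberately simplified stand-in for that idea: sample several candidate next reasoning steps and keep the one the reward model scores highest. The generation and scoring functions are hypothetical placeholders, and the real pipeline integrates the reward model with full MCTS rather than this greedy selection.

```python
# Simplified reward-guided step selection (not the full MCTS integration).
# generate_candidates and reward are hypothetical placeholders for the tuned
# small language model and the trained reward model.
import random

def generate_candidates(partial_solution: str, n: int = 4) -> list[str]:
    # Placeholder: in practice, sample n continuations from the language model.
    return [f"{partial_solution}\nStep option {i}" for i in range(n)]

def reward(partial_solution: str) -> float:
    # Placeholder: in practice, score the partial trajectory with the reward model.
    return random.random()

solution = "Problem: integrate x * e^x dx."
for _ in range(3):  # extend the solution by three reasoning steps
    solution = max(generate_candidates(solution), key=reward)
print(solution)
```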

“The goal is to develop tools that complement education. To do this, we are enhancing the model’s capabilities to process, break down, and solve problems step-by-step. We do this by incorporating high-quality data into training to teach the model how to approach such tasks, alongside algorithmic innovations that enable the model to think and reason more effectively.”


Listen or read along as Microsoft Research Podcast guest Akshay Nambi shares how his passion for tackling real-world challenges across various domains fuels his work in building reliable and robust AI systems.



Opening new doors for students

Chandramouleswar Parida

Getting an education at a top university can be life changing for anyone. For Chandramouleswar Parida, it could change the lives of everyone in his home village in Baniatangi, Khordha, Odisha State, India. Chandra decided to become a doctor after watching his grandfather die from a heart attack. The nearest doctor who could have treated him was at a regional hospital 65 kilometers away.

“He could have been saved if certain procedures had been followed,” Chandra said. He wants to study medicine, perhaps receiving advanced training overseas, and then return home. “I want to be a doctor here in our village and serve our people, because there is a lack of treatment. Being a doctor is a very noble kind of job in this society.”

Chandra is the only student in Baniatangi Village, Khordha, Odisha, currently preparing for the NEET. Without Physics Wallah, students like Chandra would likely have no access to the support and resources they need, which can’t be found locally.

Anushka Sunil Dhanwade

Another student, Anushka Sunil Dhanwade, is optimistic that Physics Wallah will help her dramatically improve her initial score on the NEET exam. While in 11th class, or grade, she joined an online NEET prep class with 800 students. But she struggled to follow the coursework, as the teachers tailored the content to the strongest students. After she posted a low score on the NEET exam, her hopes of becoming a doctor began to fade.

But after a serious stomach illness reminded her of the value of having a doctor in her family, she tried again, this time with Physics Wallah and AI Guru. After finishing 12th class, she began preparing for NEET and plans to take the exams again in May, confident that she will increase her score.

“AI Guru has made my learning so smooth and easy because it provides me answers related to my study and study-related doubt just within a click.”

—Anushka Sunil Dhanwade, Student

Next steps in the collaboration

The collaboration between Microsoft Research and Physics Wallah aims to apply the advancements in solving math problems across additional subjects, ultimately creating a unified education LLM with enhanced reasoning capabilities and improved accuracy to support student learning.

“We’re working on an education-specific LLM that will be fine-tuned using the extensive data we’ve gathered and enriched by Microsoft’s expertise in LLM training and algorithms. Our goal is to create a unified model that significantly improves accuracy and raises student satisfaction rates to 95% and beyond,” Govil explained.

The teams are also integrating a new tool from Microsoft Research called PromptWizard, an automated framework for optimizing the instructions given to a model, into Physics Wallah’s offerings. New prompts can now be generated in minutes, eliminating months of manual work, while providing more accurate and aligned answers for students.

For Nambi and the Microsoft Research India team, the collaboration is the latest example of their deep commitment to cultivating the AI ecosystem in India and translating new technology from the lab into useful business applications.

“By leveraging advanced reasoning techniques and domain expertise, we are transforming how AI addresses challenges across multiple subjects. This represents a key step in building AI systems that act as holistic personal tutors, enhancing student understanding and creating a more engaging learning experience,” Nambi said.

Explore more

The post Microsoft Research and Physics Wallah team up to enhance AI-based tutoring appeared first on Microsoft Research.

Read More

ExACT: Improving AI agents’ decision-making via test-time compute scaling

ExACT: Improving AI agents’ decision-making via test-time compute scaling

A gradient blue to green background features a white flowchart with rectangular boxes connected by arrows, ending in a hexagonal “STOP” sign and a check mark on the right side.

Autonomous AI agents are transforming the way we approach multi-step decision-making processes, streamlining tasks like web browsing, video editing, and file management. By applying advanced machine learning, they automate workflows, optimize performance, and reduce the need for human input. 

However, these systems struggle in complex, dynamic environments. A key challenge lies in balancing exploitation, using known strategies for immediate gains, with exploration, which involves seeking new strategies that could yield long-term benefits. Additionally, they often have difficulty adapting to unpredictable changes in conditions and objectives, as well as generalizing knowledge across contexts, limiting their ability to transfer learned strategies between domains. 

In response, we developed ExACT, an approach for teaching AI agents to explore more effectively, enabling them to intelligently navigate their environments, gather valuable information, evaluate options, and identify optimal decision-making and planning strategies. ExACT combines two key techniques: Reflective-MCTS (R-MCTS) and Exploratory Learning.

R-MCTS builds on the traditional Monte Carlo Tree Search (MCTS) algorithm, introducing features like contrastive reflection and a multi-agent debate function. Through contrastive reflection, the agent refines its decision-making by comparing expected outcomes with actual results, allowing it to learn from both its successes and mistakes. The multi-agent debate function provides various evaluations of a given state, where multiple agents offer contrasting perspectives to help provide a balanced and reliable assessment.
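
The toy sketch below illustrates those two ingredients in schematic form, with simple stand-in functions playing the role of LLM judges; it is an assumption-laden simplification, not the released ExACT implementation. A debate-style value estimate averages several independent opinions of a state, and a contrastive-reflection memory records the gap between expected and actual outcomes so later searches can consult it.

```python
# Schematic sketch of two R-MCTS ingredients: multi-agent debate for state
# evaluation and contrastive reflection. The evaluators and states are
# hypothetical stand-ins for LLM judges and web-agent observations.
from dataclasses import dataclass, field

@dataclass
class ReflectionMemory:
    entries: list = field(default_factory=list)

    def add(self, state: str, expected: float, actual: float, note: str) -> None:
        # Contrastive reflection: store the gap between prediction and outcome.
        self.entries.append({"state": state, "gap": actual - expected, "note": note})

def debate_value(state: str, evaluators) -> float:
    """Multi-agent debate, simplified: average independent value opinions."""
    opinions = [evaluate(state) for evaluate in evaluators]
    return sum(opinions) / len(opinions)

# Stand-ins for judges prompted with contrasting perspectives.
evaluators = [
    lambda s: 0.8 if "search results" in s else 0.3,  # optimistic judge
    lambda s: 0.5 if "search results" in s else 0.1,  # conservative judge
]

memory = ReflectionMemory()
state = "page shows search results for 'blue bicycle'"
expected = debate_value(state, evaluators)   # value estimate used during search
actual = 0.0                                 # suppose the episode later failed
memory.add(state, expected, actual, "listing was out of stock; check availability first")
print(expected, memory.entries[0]["gap"])
```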

Exploratory Learning trains agents to navigate environments effectively. Together, these techniques show strong computational scalability during both training and testing, as demonstrated on VisualWebArena—a benchmark for evaluating multimodal autonomous language agents (Figure 1). 

Figure 1. Evaluation demonstrates the compute scaling properties of GPT-4o during both training and testing. The assessment includes two scenarios: (1) applying the GPT-4o-based R-MCTS agent to all 234 tasks from the Classifieds category in VisualWebArena (left), and (2) testing fine-tuned GPT-4o on 169 previously unseen tasks from Classifieds without using search algorithms (right).

R-MCTS extends the classic MCTS by enabling real-time improvements in decision-making. Shown in Figure 2, an iterative feedback loop allows R-MCTS to learn from past experiences, avoid prior mistakes, and focus on more effective actions in similar contexts.

Figure 2. Overview of the R-MCTS process in ExACT. 

Evaluating R-MCTS

R-MCTS demonstrates state-of-the-art performance across all VisualWebArena environments, surpassing the previous best-performing method, Search Agent, with improvements ranging from 6% to 30% (Table 1). Additionally, as of January 2025, it holds the second position on the OSWorld leaderboard and demonstrates state-of-the-art performance in the blind test setting, where there is no prior access to the test environment, reflecting its advanced capabilities (Table 2). 

Rank  Model                 Score
1     GPT-4o + ExACT        33.70
2     GPT-4o + Search       26.40
3     GPT-4o + WebDreamer   23.60
4     GPT-4o + ICAL         23.40
5     GPT-4o                19.78
6     Llama-3-70B + Search  16.70
Table 1. The VisualWebArena leaderboard highlights R-MCTS as achieving state-of-the-art performance as of December 2024.
Rank  Model                                    Blind Test  Score
1     learn-by-interact w/ Claude-3.5-sonnet   🗶           22.50
2     ExACT w/ GPT-4o                          ✔           16.60
3     GPT-4                                    ✔           12.24
4     GPT-4o                                   ✔           11.36
5     GPT-4 Vision (0409)                      ✔           10.82
6     learn-by-interact w/ Gemini-1.5-pro      ✔           10.30
Table 2. The OSWorld leaderboard for the category of A11y tree inputs shows that ExACT with GPT-4o ranks second and demonstrates state-of-the-art performance in the blind test setting, as of December 2024.

How Exploratory Learning works

Exploratory Learning enables agents to dynamically search and adjust their computational resources during testing without depending on MCTS. In contrast to Imitation Learning, which centers on training models using the optimal actions identified through search, Exploratory Learning focuses on cultivating the agent’s ability to navigate its environment by teaching it to evaluate states, explore different pathways, and efficiently backtrack from unpromising paths to identify more favorable alternatives. 

Figure 3. In contrast to Imitation Learning, Exploratory Learning uses the entire search trajectory for training.
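
To make the distinction concrete, the toy example below constructs training examples both ways from the same search trace; the state and action strings are hypothetical simplifications of what a VisualWebArena agent actually observes.

```python
# Toy contrast between Imitation Learning and Exploratory Learning data.
# The search trace includes a failed branch and the backtracking step that
# escapes it; only Exploratory Learning keeps those for training.
search_trace = [
    ("home page", "click 'forums'"),         # explored branch that fails
    ("forums page", "go_back"),              # backtracking step
    ("home page", "click 'classifieds'"),    # branch that leads to success
    ("classifieds page", "stop: task done"),
]
optimal_path = [search_trace[2], search_trace[3]]

# Imitation Learning: train only on the optimal actions found by search.
imitation_examples = [{"state": s, "target": a} for s, a in optimal_path]

# Exploratory Learning: train on the entire trajectory, including the
# evaluation of unpromising states and the backtracking that escapes them.
exploratory_examples = [{"state": s, "target": a} for s, a in search_trace]

print(len(imitation_examples), len(exploratory_examples))  # 2 vs. 4
```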

Evaluating Exploratory Learning

We conducted experiments using GPT-4o fine-tuned with Exploratory Learning in the VisualWebArena environment. Results demonstrate the following key benefits: 

  • Improved performance: Fine-tuned GPT-4o achieves performance gains comparable to scaling test-time compute with MCTS, even without search.
  • Test-time compute scaling: GPT-4o performs better when given more actions per task, with task completion rates improving from 5% to 12.4% as the action budget grows.
  • Improved generalization on unseen tasks: Exploratory Learning helps fine-tuned GPT-4o handle unseen tasks more effectively than agents trained with Imitation Learning or no additional training.

The following video provides a detailed demonstration of how R-MCTS and Exploratory Learning function.

Continued exploration

Advancing autonomous AI agents is key to enabling them to handle complex, multi-step tasks with greater precision and adaptability. ExACT represents a significant step toward creating agents that can perform complex decision-making before taking action, leading to improved performance, but challenges remain. How can AI agents improve decision-making in real-world scenarios, where they may be constrained by time or resources? How can they learn effectively and efficiently from environmental feedback? We are currently investigating these questions, and we invite you to explore them with us by building on the ExACT framework. Access the ExACT code at our GitHub repository.

The post ExACT: Improving AI agents’ decision-making via test-time compute scaling appeared first on Microsoft Research.

Read More