Amazon SageMaker HyperPod launches model deployments to accelerate the generative AI model development lifecycle

Today, we’re excited to announce that Amazon SageMaker HyperPod now supports deploying foundation models (FMs) from Amazon SageMaker JumpStart, as well as custom or fine-tuned models from Amazon S3 or Amazon FSx. With this launch, you can train, fine-tune, and deploy models on the same HyperPod compute resources, maximizing resource utilization across the entire model lifecycle.

SageMaker HyperPod offers resilient, high-performance infrastructure optimized for large-scale model training and tuning. Since its launch in 2023, SageMaker HyperPod has been adopted by foundation model builders looking to lower costs, minimize downtime, and accelerate time to market. With Amazon EKS support in SageMaker HyperPod, you can orchestrate your HyperPod clusters with EKS. Customers like Perplexity, Hippocratic, Salesforce, and Articul8 use HyperPod to train their foundation models at scale. With the new deployment capabilities, customers can now use HyperPod clusters across the full generative AI development lifecycle, from model training and tuning to deployment and scaling.

Many customers use Kubernetes as part of their generative AI strategy to take advantage of its flexibility, portability, and open source frameworks. With Amazon EKS support in SageMaker HyperPod, you can orchestrate your HyperPod clusters and continue working with familiar Kubernetes workflows while gaining access to high-performance infrastructure purpose-built for foundation models. Customers benefit from support for custom containers, compute resource sharing across teams, observability integrations, and fine-grained scaling controls. HyperPod extends the power of Kubernetes by streamlining infrastructure setup, allowing customers to focus on delivering models, not managing backend complexity.

New Features: Accelerating Foundation Model Deployment with SageMaker HyperPod

Customers prefer Kubernetes for flexibility, granular control over infrastructure, and robust support for open source frameworks. However, running foundation model inference at scale on Kubernetes introduces several challenges. Organizations must securely download models, identify the right containers and frameworks for optimal performance, configure deployments correctly, select appropriate GPU types, provision load balancers, implement observability, and add auto-scaling policies to meet demand spikes. To address these challenges, we’ve launched SageMaker HyperPod capabilities to support the deployment, management, and scaling of generative AI models:

  1. One-click foundation model deployment from SageMaker JumpStart: You can now deploy over 400 open-weights foundation models from SageMaker JumpStart on HyperPod with just a click, including the latest state-of-the-art models like DeepSeek-R1, Mistral, and Llama 4. SageMaker JumpStart models are deployed on HyperPod clusters orchestrated by EKS and made available as SageMaker endpoints or behind Application Load Balancers (ALBs).
  2. Deploy fine-tuned models from S3 or FSx for Lustre: You can seamlessly deploy your custom models from S3 or FSx. You can also deploy models from Jupyter notebooks with provided code samples.
  3. Flexible deployment options for different user personas: We’re providing multiple ways to deploy models on HyperPod to support teams that have different preferences and expertise levels. Beyond the one-click experience available in the SageMaker JumpStart UI, you can also deploy models using native kubectl commands, the HyperPod CLI, or the SageMaker Python SDK—giving you the flexibility to work within your preferred environment.
  4. Dynamic scaling based on demand: HyperPod inference now supports automatic scaling of your deployments based on metrics from Amazon CloudWatch and Prometheus with KEDA. With automatic scaling your models can handle traffic spikes efficiently while optimizing resource usage during periods of lower demand.
  5. Efficient resource management with HyperPod Task Governance: One of the key benefits of running inference on HyperPod is the ability to efficiently utilize accelerated compute resources by allocating capacity for both inference and training in the same cluster. You can use HyperPod Task Governance for efficient resource allocation, prioritization of inference tasks over lower priority training tasks to maximize GPU utilization, and dynamic scaling of inference workloads in near real-time.
  6. Integration with SageMaker endpoints: With this launch, you can deploy AI models to HyperPod and register them with SageMaker endpoints. This lets you use the same invocation patterns as other SageMaker endpoints, along with integration with other open source frameworks.
  7. Comprehensive observability: We’ve added the capability to get observability into the inference workloads hosted on HyperPod, including built-in capabilities to scrape metrics and export them to your observability platform. This capability provides visibility into both:
    1. Platform-level metrics such as GPU utilization, memory usage, and node health
    2. Inference-specific metrics like time to first token, request latency, throughput, and model invocations

“With Amazon SageMaker HyperPod, we built and deployed the foundation models behind our agentic AI platform using the same high-performance compute. This seamless transition from training to inference streamlined our workflow, reduced time to production, and ensured consistent performance in live environments. HyperPod helped us go from experimentation to real-world impact with greater speed and efficiency.”
–Laurent Sifre, Co-founder & CTO, H.AI

Deploying models on HyperPod clusters

In this launch, we are providing new operators that manage the complete lifecycle of your generative AI models in your HyperPod cluster. These operators will provide a simplified way to deploy and invoke your models in your cluster.

Prerequisites: Install the HyperPod inference operator in your cluster using the Helm chart from the sagemaker-hyperpod-cli repository:

helm install hyperpod-inference-operator ./sagemaker-hyperpod-cli/helm_chart/HyperPodHelmChart/charts/inference-operator \
     -n kube-system \
     --set region=$REGION \
     --set eksClusterName=$EKS_CLUSTER_NAME \
     --set hyperpodClusterArn=$HP_CLUSTER_ARN \
     --set executionRoleArn=$HYPERPOD_INFERENCE_ROLE_ARN \
     --set s3.serviceAccountRoleArn=$S3_CSI_ROLE_ARN \
     --set s3.node.serviceAccount.create=false \
     --set keda.podIdentity.aws.irsa.roleArn="arn:aws:iam::${ACCOUNT_ID}:role/keda-operator-role" \
     --set tlsCertificateS3Bucket=$TLS_BUCKET_NAME \
     --set alb.region=$REGION \
     --set alb.clusterName=$EKS_CLUSTER_NAME \
     --set alb.vpcId=$VPC_ID \
     --set jumpstartGatedModelDownloadRoleArn=$JUMPSTART_GATED_ROLE_ARN

Architecture:

  • When you deploy a model using the HyperPod inference operator, the operator will identify the right instance type in the cluster, download the model from the provided source, and deploy it.
  • The operator will then provision an Application Load Balancer (ALB) and add the model’s pod IP as the target. Optionally, it can register the ALB with a SageMaker endpoint.
  • The operator also generates a TLS certificate for the ALB, which is saved in S3 at the location specified by the tlsCertificateS3Bucket parameter. The operator imports the certificate into AWS Certificate Manager (ACM) to associate it with the ALB, so clients can connect over HTTPS after adding the certificate to their trust store.
  • If you register with a SageMaker endpoint, the operator will allow you to invoke the model using the SageMaker runtime client and handle authentication and security aspects.
  • Metrics can be exported to CloudWatch and Prometheus, and visualized with Grafana dashboards.

Deployment sources

Once the operators are running in your cluster, you can deploy AI models from multiple sources: SageMaker JumpStart, S3, or FSx.

SageMaker JumpStart 

Models hosted in SageMaker JumpStart can be deployed to your HyperPod cluster. Navigate to SageMaker Studio, go to SageMaker JumpStart, select the open-weights model you want to deploy, and select SageMaker HyperPod. Once you provide the necessary details, choose Deploy. The inference operator running in the cluster will initiate a deployment in the namespace provided.

Once deployed, you can monitor deployments in SageMaker Studio.

Alternatively, here is a YAML file that you can use to deploy the JumpStart model using kubectl. For example, the following YAML snippet deploys DeepSeek-R1 Distill Qwen 1.5B from SageMaker JumpStart on an ml.g5.8xlarge instance:

apiVersion: inference.sagemaker.aws.amazon.com/v1alpha1
kind: JumpStartModel
metadata:
  name: deepseek-llm-r1-distill-qwen-1-5b-july03
  namespace: default
spec:
  model:
    modelHubName: SageMakerPublicHub
    modelId: deepseek-llm-r1-distill-qwen-1-5b
    modelVersion: 2.0.7
  sageMakerEndpoint:
    name: deepseek-llm-r1-distill-qwen-1-5b
  server:
    instanceType: ml.g5.8xlarge
  tlsConfig:
    tlsCertificateOutputS3Uri: s3://<bucket_name>/certificates

Deploying model from S3 

You can deploy model artifacts directly from S3 to your HyperPod cluster using the InferenceEndpointConfig resource. The inference operator uses the S3 CSI driver to make the model files available to the pods in the cluster. With this configuration, the operator downloads the files located under the prefix deepseek15b, as set by the modelLocation parameter. Here is the complete YAML example:

apiVersion: inference.sagemaker.aws.amazon.com/v1alpha1
kind: InferenceEndpointConfig
metadata:
  name: deepseek15b
  namespace: default
spec:
  endpointName: deepseek15b
  instanceType: ml.g5.8xlarge
  invocationEndpoint: invocations
  modelName: deepseek15b
  modelSourceConfig:
    modelLocation: deepseek15b
    modelSourceType: s3
    s3Storage:
      bucketName: mybucket
      region: us-west-2

Deploying model from FSx

Models can also be deployed from FSx for Lustre volumes, high-performance storage that can be used to save model checkpoints. This lets you launch a model without downloading artifacts from S3, saving time during deployment or scale-up. Setup instructions for FSx in a HyperPod cluster are provided in the Set Up an FSx for Lustre File System workshop. Once set up, you can deploy models using InferenceEndpointConfig:

apiVersion: inference.sagemaker.aws.amazon.com/v1alpha1
kind: InferenceEndpointConfig
metadata:
  name: deepseek15b
  namespace: default
spec:
  endpointName: deepseek15b
  instanceType: ml.g5.8xlarge
  invocationEndpoint: invocations
  modelName: deepseek15b
  modelSourceConfig:
    fsxStorage:
      fileSystemId: fs-abcd1234
    modelLocation: deepseek-1-5b
    modelSourceType: fsx

Deployment experiences

We provide multiple deployment experiences: kubectl, the HyperPod CLI, and the SageMaker Python SDK. All deployment options require the HyperPod inference operator to be installed and running in the cluster.

Deploying with kubectl 

You can deploy models using native kubectl with YAML files as shown in the previous sections.

To deploy, run kubectl apply -f <manifest_name>.yaml.

Once deployed, you can monitor the status with:

  • kubectl get inferenceendpointconfig will show all InferenceEndpointConfig resources.
  • kubectl describe inferenceendpointconfig <name> will give detailed status information.
  • If using SageMaker JumpStart, kubectl get jumpstartmodels will show all deployed JumpStart models.
  • kubectl describe jumpstartmodel <name> will give detailed status information.
  • kubectl get sagemakerendpointregistrations and kubectl describe sagemakerendpointregistration <name> will provide information on the status of the generated SageMaker endpoint and the ALB.

Other generated resources include deployments, services, pods, and an ingress. Each resource is visible in your cluster.

To control the invocation path on your container, you can modify the invocationEndpoint parameter. Your ALB can route requests sent to alternate paths such as /v1/chat/completions. To change the container's health check path to another path such as /health, annotate the generated Ingress object with:

kubectl annotate ingress --overwrite <name> alb.ingress.kubernetes.io/healthcheck-path=/health

Deploying with the HyperPod CLI

You can also deploy models with the SageMaker HyperPod CLI. Once you set your cluster context, you can deploy a model, for example:

hyp create hyp-jumpstart-endpoint \
  --version 1.0 \
  --model-id deepseek-llm-r1-distill-qwen-1-5b \
  --model-version 2.0.4 \
  --instance-type ml.g5.8xlarge \
  --endpoint-name endpoint-test-jscli \
  --tls-certificate-output-s3-uri s3://<bucket_name>/

For more information, see Installing the SageMaker HyperPod CLI and SageMaker HyperPod deployment documentation.

Deploying with Python SDK

The SageMaker Python SDK also supports deploying models on HyperPod clusters. Using the Model, Server, and SageMakerEndpoint configurations, you can construct a specification to deploy on a cluster. An example notebook to deploy with the Python SDK is provided here, for example:

from sagemaker.hyperpod.inference.config.hp_jumpstart_endpoint_config import Model, Server, SageMakerEndpoint, TlsConfig, EnvironmentVariables
from sagemaker.hyperpod.inference.hp_jumpstart_endpoint import HPJumpStartEndpoint
# create configs
model=Model(
    model_id='deepseek-llm-r1-distill-qwen-1-5b',
    model_version='2.0.4',
)
server=Server(
    instance_type='ml.g5.8xlarge',
)
endpoint_name=SageMakerEndpoint(name='deepseklr1distill-qwen')
tls_config=TlsConfig(tls_certificate_output_s3_uri='s3://<bucket_name>')

# create spec
js_endpoint=HPJumpStartEndpoint(
    model=model,
    server=server,
    sage_maker_endpoint=endpoint_name,
    tls_config=tls_config,
)

# use spec to deploy
js_endpoint.create()

Run inference with deployed models

Once the model is deployed, you can access it by invoking a SageMaker endpoint or by calling the ALB directly.

Invoking the model with a SageMaker endpoint

Once a model has been deployed and the SageMaker endpoint is created successfully, you can invoke your model with the SageMaker Runtime client. You can check the status of the deployed SageMaker endpoint by going to the SageMaker AI console, choosing Inference, and then Endpoints. For example, given an input file input.json we can invoke a SageMaker endpoint using the AWS CLI. This will route the request to the model hosted on HyperPod:

aws sagemaker-runtime invoke-endpoint \
        --endpoint-name "<ENDPOINT NAME>" \
        --body fileb://input.json \
        --content-type application/json \
        --accept application/json \
        output2.json
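The same invocation can be made with the boto3 SageMaker Runtime client. The following is a sketch: the endpoint name and payload are illustrative, and the final call requires AWS credentials and a deployed endpoint, so it is left commented out.

```python
import json

# Illustrative endpoint name; use the name created by the inference operator.
ENDPOINT_NAME = "deepseek15b"

# Example chat-completions payload for a vLLM-style container.
payload = {
    "model": "/opt/ml/model",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
    ],
}
body = json.dumps(payload)

def invoke_endpoint(endpoint_name: str, body: str) -> str:
    """Send the JSON payload to the endpoint and return the raw response body."""
    import boto3  # requires AWS credentials and the endpoint to exist

    runtime = boto3.client("sagemaker-runtime")
    response = runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="application/json",
        Accept="application/json",
        Body=body,
    )
    return response["Body"].read().decode("utf-8")

# invoke_endpoint(ENDPOINT_NAME, body)  # uncomment with valid credentials
```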

Invoke the model directly using ALB

You can also invoke the load balancer directly instead of using the SageMaker endpoint. Download the generated certificate from S3 and include it in your trust store or pass it with your request. You can also bring your own certificates.

For example, you can invoke a vLLM container deployed after setting the invocationEndpoint value in the deployment YAML shown in the previous section to /v1/chat/completions.

For example, using curl:

curl --cacert /path/to/cert.pem https://<name>.<region>.elb.amazonaws.com/v1/chat/completions \
     -H "Content-Type: application/json" \
     -d '{
        "model": "/opt/ml/model",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the world series in 2020?"}
        ]
    }'

User experience

These capabilities are designed with different user personas in mind:

  • Administrators: Administrators create the required infrastructure for HyperPod clusters, such as provisioning VPCs, subnets, security groups, and the EKS cluster. Administrators also install the required operators in the cluster to support deployment of models and allocation of resources across the cluster.
  • Data scientists: Data scientists deploy foundation models using familiar interfaces—whether that’s the SageMaker console, Python SDK, or Kubectl, without needing to understand all Kubernetes concepts. Data scientists can deploy and iterate on FMs efficiently, run experiments, and fine-tune model performance without needing deep infrastructure expertise.
  • Machine Learning Operations (MLOps) engineers: MLOps engineers set up observability and autoscaling policies in the cluster to meet SLAs. They identify the right metrics to export, create the dashboards, and configure autoscaling based on metrics.

Observability

Amazon SageMaker HyperPod now provides a comprehensive, out-of-the-box observability solution that delivers deep insights into inference workloads and cluster resources. This unified solution automatically publishes key metrics from multiple sources, including inference containers, NVIDIA DCGM, instance-level Kubernetes node exporters, Elastic Fabric Adapter, integrated file systems, Kubernetes APIs, and Kueue, to Amazon Managed Service for Prometheus and visualizes them in Amazon Managed Grafana dashboards. With a one-click installation of the HyperPod EKS add-on, users gain access to critical inference metrics alongside resource and cluster utilization:

  • model_invocations_total – Total number of invocation requests to the model
  • model_errors_total – Total number of errors during model invocation
  • model_concurrent_requests – Active concurrent model requests
  • model_latency_milliseconds – Model invocation latency in milliseconds
  • model_ttfb_milliseconds – Model time to first byte latency in milliseconds

These metrics capture inference request and response data regardless of your model type or serving framework, as long as the model is deployed using the inference operator with metrics enabled. You can also expose container-specific metrics provided by the model container, such as TGI, LMI, and vLLM.
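For example, once these metrics are scraped into Amazon Managed Service for Prometheus, queries along these lines can chart invocation rate and tail latency in Grafana. Both queries are illustrative, and the second assumes the latency metric is exported as a Prometheus histogram with _bucket series:

```
# Per-second invocation rate over the last 5 minutes
sum(rate(model_invocations_total[5m]))

# p99 model latency (assumes a histogram export)
histogram_quantile(0.99, sum(rate(model_latency_milliseconds_bucket[5m])) by (le))
```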

You can enable metrics in JumpStart deployments by setting the metrics.enabled: true parameter:

apiVersion: inference.sagemaker.aws.amazon.com/v1alpha1
kind: JumpStartModel
metadata:
  name: mistral-model
  namespace: ns-team-a
spec:
  model:
    modelId: "huggingface-llm-mistral-7b-instruct"
    modelVersion: "3.19.0"
  metrics:
    enabled: true # Default: true (can be set to false to disable)

You can enable metrics for fine-tuned models deployed from S3 and FSx using the following configuration. Note that the default scrape settings are port 8080 and path /metrics:

apiVersion: inference.sagemaker.aws.amazon.com/v1alpha1
kind: InferenceEndpointConfig
metadata:
  name: inferenceendpoint-deepseeks
  namespace: ns-team-a
spec:
  modelName: deepseeks
  modelVersion: 1.0.1
  metrics:
    enabled: true # Default: true (can be set to false to disable)
    metricsScrapeIntervalSeconds: 30 # Optional: if overriding the default 15s
    modelMetricsConfig:
        port: 8000 # Optional: if overriding the default 8080
        path: "/custom-metrics" # Optional: if overriding the default "/metrics"

For more details, check out the blog post on HyperPod observability and documentation.

Autoscaling

Effective autoscaling handles unpredictable traffic patterns with sudden spikes during peak hours, promotional events, or weekends. Without dynamic autoscaling, organizations must either overprovision resources, incurring significant costs, or risk service degradation during peak loads. LLMs require more sophisticated autoscaling approaches than traditional applications because of several unique characteristics: these models can take minutes to load into GPU memory, necessitating predictive scaling with appropriate buffer time to avoid cold-start penalties. Equally important is the ability to scale in when demand decreases to save costs. Two types of autoscaling are supported: the HyperPod inference operator and KEDA.

Autoscaling provided by HyperPod inference operator

The HyperPod inference operator provides built-in autoscaling capabilities for model deployments using metrics from Amazon CloudWatch and Amazon Managed Service for Prometheus (AMP). This provides a simple and quick way to set up autoscaling for models deployed with the inference operator. Check out the complete autoscaling example in the SageMaker documentation.

Autoscaling with KEDA

If you need more flexibility for complex scaling capabilities and need to manage autoscaling policies independently from model deployment specs, you can use Kubernetes Event-driven Autoscaling (KEDA). KEDA ScaledObject configurations support a wide range of scaling triggers including Amazon CloudWatch metrics, Amazon SQS queue lengths, Prometheus queries, and resource-based metrics like GPU and memory utilization. You can apply these configurations to existing model deployments by referencing the deployment name in the scaleTargetRef section of the ScaledObject specification. For more information, see the Autoscaling documentation.

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: nd-deepseek-llm-scaler
  namespace: default
spec:
  scaleTargetRef:
    name: deepseek-llm-r1-distill-qwen-1-5b
    apiVersion: apps/v1
    kind: Deployment
  minReplicaCount: 1
  maxReplicaCount: 3
  pollingInterval: 30     # seconds between checks
  cooldownPeriod: 300     # seconds before scaling down
  triggers:
    - type: aws-cloudwatch
      metadata:
        namespace: AWS/ApplicationELB        # or your metric namespace
        metricName: RequestCount              # or your metric name
        dimensionName: LoadBalancer           # or your dimension key
        dimensionValue: app/k8s-default-albnddee-cc02b67f20/0991dc457b6e8447
        statistic: Sum
        threshold: "3"                        # change to your desired threshold
        minMetricValue: "0"                   # optional floor
        region: us-east-2                     # your AWS region
        identityOwner: operator               # use the IRSA SA bound to keda-operator
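As an alternative to the CloudWatch trigger above, KEDA's Prometheus scaler can target the inference metrics exported by the operator. This is a sketch: the Amazon Managed Service for Prometheus workspace URL, query, and threshold are illustrative, and AMP additionally requires SigV4 authentication (for example, via the keda-operator IRSA role configured during operator installation):

```
triggers:
  - type: prometheus
    metadata:
      # Illustrative AMP workspace query endpoint
      serverAddress: https://aps-workspaces.<region>.amazonaws.com/workspaces/<workspace-id>
      # Scale when the invocation rate exceeds the threshold
      query: sum(rate(model_invocations_total[2m]))
      threshold: "5"
```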

Task governance

With HyperPod task governance, you can optimize resource utilization by implementing priority-based scheduling. With this approach you can assign higher priority to inference workloads to maintain low-latency requirements during traffic spikes, while still allowing training jobs to utilize available resources during quieter periods. Task governance leverages Kueue for quota management, priority scheduling, and resource sharing policies. Through ClusterQueue configurations, administrators can establish flexible resource sharing strategies that balance dedicated capacity requirements with efficient resource utilization.

Teams can configure priority classes to define their resource allocation preferences. For example, a team should create a dedicated priority class for inference workloads, such as inference with a weight of 100, to ensure they are admitted and scheduled ahead of other task types. By giving inference pods the highest priority, they can preempt lower-priority jobs when the cluster is under load, which is essential for meeting low-latency requirements during traffic surges. Additionally, teams must size their quotas appropriately. If inference spikes are expected within a shared cluster, the team should reserve sufficient GPU resources in their ClusterQueue to handle these surges. When the team is not experiencing high traffic, unused resources within their quota can be temporarily allocated to other teams' tasks; once inference demand returns, those borrowed resources are reclaimed to prioritize pending inference pods.
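Because task governance builds on Kueue, an inference priority class with a weight of 100 could be expressed as a Kueue WorkloadPriorityClass. This is an illustrative sketch; the name, value, and description are assumptions, and your cluster's governance setup may define priority classes differently:

```
apiVersion: kueue.x-k8s.io/v1beta1
kind: WorkloadPriorityClass
metadata:
  name: inference-priority
value: 100
description: "Inference tasks are admitted and scheduled ahead of training"
```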

Here is a sample screenshot that shows both training and deployment workloads running in the same cluster. Deployments have inference-priority class which is higher than training-priority class. So a spike in inference requests has suspended the training job to enable scaling up of deployments to handle traffic.

For more information, see the SageMaker HyperPod documentation.

Cleanup

You will incur costs for the instances running in your cluster. You can scale down the instances or delete instances in your cluster to stop accruing costs.

Conclusion

With this launch, you can quickly deploy open-weights foundation models from SageMaker JumpStart, as well as custom models from S3 and FSx, to your SageMaker HyperPod cluster. SageMaker automatically provisions the infrastructure, deploys the model on your cluster, enables auto-scaling, and configures the SageMaker endpoint. You can use HyperPod task governance to scale compute resources up and down as traffic on model endpoints changes, and automatically publish metrics to the HyperPod observability dashboard for full visibility into model performance. With these capabilities you can seamlessly train, fine-tune, and deploy models on the same HyperPod compute resources, maximizing resource utilization across the entire model lifecycle.

You can start deploying models to HyperPod today in all AWS Regions where SageMaker HyperPod is available. To learn more, visit the Amazon SageMaker HyperPod documentation or try the HyperPod inference getting started guide in the AWS Management Console.

Acknowledgements:

We would like to acknowledge the key contributors for this launch: Pradeep Cruz, Amit Modi, Miron Perel, Suryansh Singh, Shantanu Tripathi, Nilesh Deshpande, Mahadeva Navali Basavaraj, Bikash Shrestha, Rahul Sahu.


About the authors

Vivek Gangasani is a Worldwide Lead GenAI Specialist Solutions Architect for SageMaker Inference. He drives Go-to-Market (GTM) and Outbound Product strategy for SageMaker Inference. He also helps enterprises and startups deploy, manage, and scale their GenAI models with SageMaker and GPUs. Currently, he is focused on developing strategies and content for optimizing inference performance and GPU efficiency for hosting Large Language Models. In his free time, Vivek enjoys hiking, watching movies, and trying different cuisines.

Kareem Syed-Mohammed is a Product Manager at AWS. He focuses on enabling generative AI model development and governance on SageMaker HyperPod. Prior to this, at Amazon QuickSight, he led embedded analytics and developer experience. In addition to QuickSight, he has been with AWS Marketplace and Amazon retail as a Product Manager. Kareem started his career as a developer for call center technologies, Local Expert and Ads for Expedia, and as a management consultant at McKinsey.

Piyush Daftary is a Senior Software Engineer at AWS, working on Amazon SageMaker. His interests include databases, search, machine learning, and AI. He currently focuses on building performant, scalable inference systems for large language models. Outside of work, he enjoys traveling, hiking, and spending time with family.

Chaitanya Hazarey leads software development for inference on SageMaker HyperPod at Amazon, bringing extensive expertise in full-stack engineering, ML/AI, and data science. As a passionate advocate for responsible AI development, he combines technical leadership with a deep commitment to advancing AI capabilities while maintaining ethical considerations. His comprehensive understanding of modern product development drives innovation in machine learning infrastructure.

Andrew Smith is a Senior Cloud Support Engineer in the SageMaker, Vision & Other team at AWS, based in Sydney, Australia. He supports customers using many AI/ML services on AWS with expertise in working with Amazon SageMaker. Outside of work, he enjoys spending time with friends and family as well as learning about different technologies.

Read More

Supercharge your AI workflows by connecting to SageMaker Studio from Visual Studio Code

AI developers and machine learning (ML) engineers can now use the capabilities of Amazon SageMaker Studio directly from their local Visual Studio Code (VS Code). With this capability, you can use your customized local VS Code setup, including AI-assisted development tools, custom extensions, and debugging tools while accessing compute resources and your data in SageMaker Studio. By accessing familiar model development features, data scientists can maintain their established workflows, preserve their productivity tools, and seamlessly develop, train, and deploy machine learning, deep learning and generative AI models.

In this post, we show you how to remotely connect your local VS Code to SageMaker Studio development environments to use your customized development environment while accessing Amazon SageMaker AI compute resources.

The local integrated development environment (IDE) connection capability delivers three key benefits for developers and data scientists:

  • Familiar development environment with scalable compute: Work in your familiar IDE environment while harnessing the purpose-built model development environment of SageMaker AI. Keep your preferred themes, shortcuts, extensions, productivity, and AI tools while accessing SageMaker AI features.
  • Simplified operations: With a few clicks, you can minimize the complex configuration and administrative overhead of setting up remote access to SageMaker Studio spaces. The integration provides direct access to Studio spaces from your IDE.
  • Enterprise-grade security: Benefit from secure connections between your IDE and SageMaker AI through automatic credential management and session maintenance. In addition, code execution remains within the controlled boundaries of SageMaker AI.

This feature bridges the gap between local development preferences and cloud-based machine learning resources, so that teams can improve their productivity while using the features of Amazon SageMaker AI.

Solution overview

The following diagram showcases the interaction between your local IDE and SageMaker Studio spaces.

The solution architecture consists of three main components:

  • Local computer: Your development machine running VS Code with AWS Toolkit extension installed.
  • SageMaker Studio: A unified, web-based ML development environment to seamlessly build, train, deploy, and manage machine learning and analytics workflows at scale using integrated AWS tools and secure, governed access to your data.
  • AWS Systems Manager: A secure, scalable remote access and management service that enables seamless connectivity between your local VS Code and SageMaker Studio spaces to streamline ML development workflows.

The connection flow supports two options:

  • Direct launch (deep link): Users can initiate the connection directly from the SageMaker Studio web interface by choosing Open in VS Code, which automatically launches their local VS Code instance.
  • AWS Toolkit connection: Users can connect through AWS Toolkit extension in VS Code by browsing available SageMaker Studio spaces and selecting their target environment.

In addition to the preceding, users can also connect to their space directly from their IDE terminal using SSH. For instructions on connecting using SSH, refer to the documentation.

After connecting, developers can:

  • Use their custom VS Code extensions and tools
  • Remotely access and use their space’s storage
  • Run their AI and ML workloads in SageMaker compute environments
  • Work with notebooks in their preferred IDE
  • Maintain the same security parameters as the SageMaker Studio web environment

Solution implementation

Prerequisites

To try the remote IDE connection, you must meet the following prerequisites:

  1. You have access to a SageMaker Studio domain with connectivity to the internet. For domains set up in VPC-only mode, your domain should have a route out to the internet through a proxy, or a NAT gateway. If your domain is completely isolated from the internet, see Connect to VPC with subnets without internet access for setting up the remote connection. If you do not have a Studio domain, you can create one using the quick setup or custom setup option.
  2. You have permissions to update the SageMaker Studio domain or user execution role in AWS Identity and Access Management (IAM).
  3. You have the latest stable VS Code with the Microsoft Remote SSH extension (version 0.74.0 or later) and the AWS Toolkit extension (version 3.68.0 or later) installed on your local machine. Optionally, if you want to connect to SageMaker spaces directly from VS Code, you should be authenticated to access AWS resources using IAM or AWS IAM Identity Center credentials. See the administrator documentation for AWS Toolkit authentication support.
  4. You use compatible SageMaker Distribution images (version 2.7 or later, or 3.1 or later) for running SageMaker Studio spaces, or a custom image.
  5. If you’re initiating the connection from the IDE, you already have a user profile in the SageMaker Studio domain you want to connect to, and the spaces are already created using the Studio UI or through APIs. The AWS Toolkit does not allow creation or deletion of spaces.

Set up necessary permissions

We’ve launched the StartSession API for remote IDE connectivity. Add the sagemaker:StartSession permission to your user’s role so that they can remotely connect to a space.

For the deep-linking experience, the user starts the remote session from the Studio UI, so the domain’s default execution role or the user’s execution role must allow the user to call the StartSession API. Modify the permissions on your domain or user execution role by adding the following policy statement:

{
    "Version": "2012-10-17", 
    "Statement": [
        {
            "Sid": "RestrictStartSessionOnSpacesToUserProfile",
            "Effect": "Allow",
            "Action": [
                "sagemaker:StartSession"
            ],
            "Resource": "arn:*:sagemaker:${aws:Region}:${aws:AccountId}:space/${sagemaker:DomainId}/*",
            "Condition": {
                "ArnLike": {
                    "sagemaker:ResourceTag/sagemaker:user-profile-arn": "arn:*:sagemaker:${aws:Region}:${aws:AccountId}:user-profile/${sagemaker:DomainId}/${sagemaker:UserProfileName}"
                }
            }
        }
    ]
}

If you’re initializing the connection to SageMaker Studio spaces directly from VS Code, your AWS credentials should allow the user to list the spaces, start or stop a space, and initiate a connection to a running space. Make sure that your AWS credentials allow the following API actions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "sagemaker:ListSpaces",
                "sagemaker:DescribeSpace",
                "sagemaker:UpdateSpace",
                "sagemaker:ListApps",
                "sagemaker:CreateApp",
                "sagemaker:DeleteApp",
                "sagemaker:DescribeApp",
                "sagemaker:StartSession",
                "sagemaker:DescribeDomain",
                "sagemaker:AddTags"
            ],
            "Resource": "*"
        }
    ]
}

This initial IAM policy provides a quick-start foundation for testing SageMaker features. Organizations can implement more granular access controls using resource Amazon Resource Name (ARN) constraints or attribute-based access control (ABAC). With the introduction of the StartSession API, you can restrict access by defining space ARNs in the resource section or implementing condition tags according to your specific security needs, as shown in the following example.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowRemoteAccessByTag",
            "Effect": "Allow",
            "Action": [
                "sagemaker:StartSession"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/User": "<user-identifier>"
                }
            }
        }
    ]
}
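If you provision many users, you can generate the tag-based policy document programmatically instead of editing JSON by hand. The following sketch mirrors the ABAC statement above for a given user identifier; the tag key User and the example identifier are illustrative, so adapt them to your own tagging convention.

```python
import json

def build_remote_access_policy(user_identifier: str) -> dict:
    """Build an ABAC policy that allows sagemaker:StartSession only on
    resources tagged User=<user_identifier> (mirrors the example above)."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowRemoteAccessByTag",
                "Effect": "Allow",
                "Action": ["sagemaker:StartSession"],
                "Resource": "*",
                "Condition": {
                    "StringEquals": {"aws:ResourceTag/User": user_identifier}
                },
            }
        ],
    }

# Example: emit a policy document for a hypothetical user identifier.
policy = build_remote_access_policy("data-scientist-1")
print(json.dumps(policy, indent=4))
```

You can feed the printed document to your infrastructure-as-code tooling or attach it to the user’s role through IAM.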

Enable remote connectivity and launch VS Code from SageMaker Studio

To connect to a SageMaker space remotely, the space must have remote access enabled.

  1. Before running a space on the Studio UI, you can toggle Remote access on to enable the feature, as shown in the following screenshot.

  2. After the feature is enabled, choose Run space to start the space. After the space is running, choose Open in VS Code to launch VS Code.

  3. The first time you choose this option, you’ll be prompted by your browser to confirm opening VS Code. Select the checkbox Always allow studio to confirm and then choose Open Visual Studio Code.

  4. This will open VS Code, and you will be prompted to update your SSH configuration. Choose Update SSH config to complete the connection. This is also a one-time setup, and you will not be prompted for future connections.

  5. On successful connection, a new window launches that is connected to the SageMaker Studio space and has access to the Studio space’s storage.

Connect to the space from VS Code

Using the AWS Toolkit, you can list spaces, start a space, or connect to a running space that has remote access enabled. If a running space doesn’t have remote connectivity enabled, you can stop the space from the AWS Toolkit and then select the Connect icon to automatically turn on remote connectivity and start the space. The following section describes the experience in detail.

  1. After you’re authenticated into AWS, from the AWS Toolkit, go to the AWS Region where your SageMaker Studio domain is located. You will see a SageMaker AI section; choose it to list the spaces in your Region. If you’re connected using IAM, the toolkit lists the spaces across domains and users in your Region. See the optional Filter spaces to a specific domain or user section below for instructions on viewing spaces for a particular user profile. For Identity Center users, the list is already filtered to display only the spaces owned by you.

  2. After you identify the space, choose the connectivity icon as shown in the following screenshot to connect to the space.

Optional: Filter spaces to a specific domain or user

When connecting to an account using IAM, you will see a list of spaces in the account and Region. This can be overwhelming if the account has tens or hundreds of domains, users, and spaces. The toolkit provides a filter utility that helps you quickly narrow the list of spaces to a specific user profile or a set of user profiles.

  1. Next to SageMaker AI, choose the filter icon as shown in the following screenshot.

  2. You will see a list of user profiles and domains. Scroll through the list or enter a user profile or domain name, and then select or unselect entries to filter the list of spaces by domain or user profile.

Use cases

The following use cases demonstrate how AI developers and machine learning (ML) engineers can use the local integrated development environment (IDE) connection capability.

Connecting to a notebook kernel

After you’re connected to the space, you can start creating and running notebooks and scripts right from your local development environment. By using this method, you can use the managed infrastructure provided by SageMaker for resource-intensive AI tasks while coding in a familiar environment. You can run notebook cells on your SageMaker Distribution or custom image kernels, and can choose the IDE that maximizes your productivity. Use the following steps to create and connect your notebook to a remote kernel:

  1. On the VS Code file explorer, choose the plus (+) icon to create a new file, and name it remote-kernel.ipynb.
  2. Open the notebook and run a cell (for example, print("Hello from remote IDE")). VS Code will show a pop-up for installing the Python and Jupyter extensions.
  3. Choose Install/Enable suggested extensions.
  4. After the extensions are installed, VS Code will automatically launch the kernel selector. You can also choose Select Kernel on the right to view the list of kernels.

For the next steps, follow the directions for the space you’re connected to.

Code Editor spaces:

  1. Select Python environments… and choose from a list of provided Python environments. After you are connected, you can start running the cells in your notebook.

JupyterLab spaces:

  1. Select the Existing Jupyter Server… option to have the same kernel experience as the JupyterLab environment.
    If this is the first time connecting to JupyterLab spaces, you will need to configure the Jupyter server to view the same kernels as the remote server using the following steps.

    1. Choose Enter the URL of the running Jupyter Server and enter http://localhost:8888/jupyterlab/default/lab as the URL and press Enter.
    2. Enter a custom server display name, for example, JupyterLab Space Default Server, and press Enter. You will now be able to view the list of kernels available on the remote Jupyter server. For subsequent connections, this display name will be available for you to choose from when you select the existing Jupyter server option.

The following graphic shows the entire workflow. In this example, we’re running a JupyterLab space with the SageMaker Distribution image, so we can view the list of kernels available in the image.

You can choose the kernel of your choice, for example, the Python 3 kernel, and you can start running the notebook cells on the remote kernel. With access to the SageMaker managed kernels, you can now focus on model development rather than infrastructure and runtime management, while using the development environment you know and trust.
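As a quick sanity check that cells are executing on the remote kernel rather than your local machine, you can print a few environment details from a notebook cell. The SAGEMAKER_SPACE_NAME variable below is an assumption for illustration, not a documented guarantee; inspect os.environ in your own space to see which variables your SageMaker Distribution image actually sets.

```python
import os
import platform

# Report where this cell is executing. SAGEMAKER_SPACE_NAME is an assumed
# variable name for illustration -- inspect os.environ on your image to
# see which variables your SageMaker Distribution version actually sets.
space_name = os.environ.get("SAGEMAKER_SPACE_NAME", "<not set>")
print(f"Host:   {platform.node()}")
print(f"Python: {platform.python_version()}")
print(f"Space:  {space_name}")
```

If the hostname and space name match the space you launched, your cells are running on SageMaker compute rather than locally.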

Best practices and guardrails

  1. Follow the principle of least privilege when allowing users to connect remotely to SageMaker Studio space applications. SageMaker Studio supports custom tag propagation; we recommend tagging each user with a unique identifier and using that tag to restrict the StartSession API to only their private applications.
  2. As an administrator, if you want to disable this feature for your users, you can enforce it using the sagemaker:RemoteAccess condition key. The following is an example policy.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCreateSpaceWithRemoteAccessDisabled",
            "Effect": "Allow",
            "Action": [
                "sagemaker:CreateSpace",
                "sagemaker:UpdateSpace"
                ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "sagemaker:RemoteAccess": [
                        "DISABLED"
                    ]
                }
            }
        },
        {
            "Sid": "AllowCreateSpaceWithNoRemoteAccess",
            "Effect": "Allow",
            "Action":  [
                "sagemaker:CreateSpace",
                "sagemaker:UpdateSpace"
                ],
            "Resource": "*",
            "Condition": {
                "Null": {
                    "sagemaker:RemoteAccess": "true"
                }
            }
        }
    ]
}
  3. When connecting remotely to the SageMaker Studio spaces from your local IDE, be aware of bandwidth constraints. For optimal performance, avoid using the remote connection to transfer or access large datasets. Instead, use data transfer methods built for cloud and in-place data processing to facilitate a smooth user experience. We recommend an instance with at least 8 GB of storage to start with, and the SageMaker Studio UI will throw an exception if you choose a smaller instance.

Cleanup

If you have created a SageMaker Studio domain for the purposes of this post, remember to delete the applications, spaces, user profiles, and the domain. For instructions, see Delete a domain.

For the SageMaker Studio spaces, use the idle shutdown functionality to avoid incurring charges for compute when it is not in use.

Conclusion

The remote IDE connection feature for Amazon SageMaker Studio bridges the gap between local development environments and powerful ML infrastructure of SageMaker AI. With direct connections from local IDEs to SageMaker Studio spaces, developers and data scientists can now:

  • Maintain their preferred development environment while using the compute resources of SageMaker AI
  • Use custom extensions, debugging tools, and familiar workflows
  • Access governed data and ML resources within existing security boundaries
  • Choose between convenient deep linking or AWS Toolkit connection methods
  • Operate within enterprise-grade security controls and permissions

This integration minimizes the productivity barriers of context switching while facilitating secure access to SageMaker AI resources. Get started today with the SageMaker Studio remote IDE connection to connect your local development environment to SageMaker Studio and experience streamlined ML development workflows, using your familiar tools backed by the powerful ML infrastructure of SageMaker AI.


About the authors


Durga Sury is a Senior Solutions Architect at Amazon SageMaker, where she helps enterprise customers build secure and scalable AI/ML systems. When she’s not architecting solutions, you can find her enjoying sunny walks with her dog, immersing herself in murder mystery books, or catching up on her favorite Netflix shows.

Edward Sun is a Senior SDE working on SageMaker Studio at Amazon Web Services. He is focused on building interactive ML solutions and simplifying the customer experience to integrate SageMaker Studio with popular technologies in the data engineering and ML landscape. In his spare time, Edward is a big fan of camping, hiking, and fishing, and enjoys spending time with his family.

Raj Bagwe is a Senior Solutions Architect at Amazon Web Services, based in San Francisco, California. With over 6 years at AWS, he helps customers navigate complex technological challenges and specializes in Cloud Architecture, Security and Migrations. In his spare time, he coaches a robotics team and plays volleyball. He can be reached at X handle @rajesh_bagwe.

Sri Aakash Mandavilli is a Software Engineer on the Amazon SageMaker Studio team, where he has been building innovative products since 2021. He specializes in developing various solutions across the Studio service to enhance the machine learning development experience. Outside of work, Sri Aakash enjoys staying active through hiking, biking, and taking long walks.


How AI will accelerate biomedical research and discovery



In November 2022, OpenAI’s ChatGPT kick-started a new era in AI. This was followed less than a half year later by the release of GPT-4. In the months leading up to GPT-4’s public release, Peter Lee, president of Microsoft Research, cowrote a book full of optimism for the potential of advanced AI models to transform the world of healthcare. What has happened since? In this special podcast series, The AI Revolution in Medicine, Revisited, Lee revisits the book, exploring how patients, providers, and other medical professionals are experiencing and using generative AI today while examining what he and his coauthors got right—and what they didn’t foresee.

In this episode, Daphne Koller, Noubar Afeyan, and Dr. Eric Topol, leaders in AI-driven medicine, join Lee to explore the rapidly evolving role of AI across the biomedical and healthcare landscape. Koller, founder and CEO of Insitro, shares how machine learning is transforming drug discovery, especially target identification for complex diseases like ALS, by uncovering biological patterns across massive datasets. Afeyan, founder and CEO of Flagship Pioneering and co-founder and chairman of Moderna, discusses how AI is being applied across biotech research and development, from protein design to autonomous science platforms. Topol, executive vice president of Scripps Research and founder and director of the Scripps Research Translational Institute, highlights how AI can already help mitigate and prevent the core diseases that erode our health, and the possibility of realizing a virtual cell. Through his conversations with the three, Lee investigates how AI is reshaping the discovery, deployment, and delivery of medicine.

Transcript 

[MUSIC] [BOOK PASSAGE] 

PETER LEE: “Can GPT-4 indeed accelerate the progression of medicine … ? It seems like a tall order, but if I had been told six months ago that it could rapidly summarize any published paper, that alone would have satisfied me as a strong contribution to research productivity. … But now that I’ve seen what GPT-4 can do with the healthcare process, I expect a lot more in the realm of research.” 

[END OF BOOK PASSAGE] [THEME MUSIC]

This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.

Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?

In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.

[THEME MUSIC FADES]

The book passage I read at the top was from “Chapter 8: Smarter Science,” which was written by Zak.

In writing the book, we were optimistic about AI’s potential to accelerate biomedical research and help get new and much-needed treatments and drugs to patients sooner. One area we explored was generative AI as a designer of clinical trials. We looked at generative AI’s adeptness at summarizing, helping speed up pre-trial triage and research. We even went so far as to predict the arrival of a large language model that can serve as a central intellectual tool.

For a look at how AI is impacting biomedical research today, I’m excited to welcome Daphne Koller, Noubar Afeyan, and Eric Topol. 


Daphne Koller is the CEO and founder of Insitro, a machine learning-driven drug discovery and development company that recently made news for its identification of a novel drug target for ALS and its collaboration with Eli Lilly to license Lilly’s biochemical delivery systems. Prior to founding Insitro, Daphne was the co-founder, co-CEO, and president of the online education platform Coursera.

Noubar Afeyan is the founder and CEO of Flagship Pioneering, which creates biotechnology companies focused on transforming human health and environmental sustainability. He is also co-founder and chairman of the messenger RNA company Moderna. An entrepreneur and biochemical engineer, Noubar has numerous patents to his name and has co-founded many startups in science and technology.

Dr. Eric Topol is the executive vice president of the biomedical research non-profit Scripps Research, where he founded and now directs the Scripps Research Translational Institute. One of the most cited researchers in medicine, Eric has focused on promoting human health and individualized medicine through the use of genomic and digital data and AI. 

These three are likely to have an outsized influence on how drugs and new medical technologies soon will be developed.

[TRANSITION MUSIC] 

Here’s my interview with Daphne Koller:

LEE: Daphne, I’m just thrilled to have you join us. 

DAPHNE KOLLER: Thank you for having me, Peter. It’s a pleasure to be here. 

LEE: Well, you know, you’re quite well-known across several fields. But maybe for some audience members of this podcast, they might not have encountered you before. So where I’d like to start is a question I’ve been asking all of our guests.

How would you describe what you do? And the way I kind of put it is, you know, how do you explain to someone like your parents what you do for a living? 

KOLLER: So that answer obviously has shifted over the years.

What I would say now is that we are working to leverage the incredible convergence of very powerful technologies, of which AI is one but not the only one, to change the way in which we discover and develop new treatments for diseases for which patients are currently suffering and even dying. 

LEE: You know, I think I’ve known you for a long time. 

KOLLER: Longer than I think either of us care to admit. 

LEE: [LAUGHS] In fact, I think I remember you even when you were still a graduate student. But of course, I knew you best when you took up your professorship at Stanford. And I always, in my mind, think of you as a computer scientist and a machine learning person. And in fact, you really made a big name for yourself in computer science research in machine learning.

But now you’re, you know, leading one of the most important biotech companies on the planet. How did that happen?

KOLLER: So people often think that this is a recent transition. That is, after I left Coursera, I looked around and said, “Hmm. What should I do next? Oh, biotech seems like a good thing,” but that’s actually not the way it transpired.

This goes all the way back to my early days at Stanford, where, in fact, I was, you know, as a young faculty member in machine learning, because I was the first machine learning hire into Stanford’s computer science department, I was looking for really exciting places in which this technology could be deployed, and applications back then, because of scarcity of data, were just not that inspiring.

And so I looked around, and this was around the late ’90s, and realized that there was interesting data emerging in biology and medicine. My first application actually was in, interestingly, in epidemiology—patient tracking and tuberculosis. You know, you can think of it as a tiny microcosm of the very sophisticated models that COVID then enabled in a much later stage.

LEE: Right. 

KOLLER: And so initially, this was based almost entirely on just technical interest. It’s kind of like, oh, this is more interesting as a question to tackle than spam filtering. But then I became interested in biology in its own right, biology and medicine, and ended up having a bifurcated existence as a Stanford professor where half my lab continued to do core computer science research published in, you know, NeurIPS and ICML. And the other half actually did biomedical research that was published in, you know, Nature Cell [and] Science. So that was back in, you know, the early, early 2000s, and for most of my Stanford career, I continued to have both interests.

And then the Coursera experience kind of took me out of Stanford and put me in an industry setting for the first time in my life actually. But then when my time at Coursera came to an end, you know, I’d been there for five years. And if you look at the timeline, I left Stanford in early 2012, right as the machine learning revolution was starting. So I missed the beginning.

And it was only in like 2016 or so that, as I picked my head up over the trenches, like, “Oh my goodness, this technology is going to change the world.” And I wanted to deploy that big thing towards places where it would have beneficial impact on the world, like to make the world a better place.

LEE: Yeah. 

KOLLER: And so I decided that one of the areas where I could make a unique, differentiated impact was in really bringing AI and machine learning to the life sciences, having spent, you know, the majority of my career at the boundary of those two disciplines. And notice I say “boundary” with deliberation because there wasn’t very much of an intersection.

LEE: Right. 

KOLLER: I felt like I could do something that was unique. 

LEE: So just to stick on you for a little bit longer, you know, we have been sort of getting into your origin story about what we call AI today—but machine learning, so deep learning. 

And, you know, there has always been a kind of an emotional response for people like you and me and now the general public about their first encounters with what we now call generative AI. I’d love to hear what your first encounter was with generative AI and how you reacted to this. 

KOLLER: I think my first encounter was actually an indirect one. Because, you know, the earlier generations of generative AI didn’t directly touch our work at Insitro.

And yet at the same time, I had always had an interest in computer vision. That was a large part of my non-bio work when I was at Stanford. 

And so some of my earlier even presentations, when I was trying to convey to people back in 2016 how this technology was going to transform the world, I was talking about the incredible progress in image recognition that had happened up until that point. 

So my first interaction was actually in the generative AI for images, where you are able to go the other way … 

LEE: Yes. 

KOLLER: … where you can take a verbal description of an image and create—and this was back in the days when the images weren’t particularly photorealistic, but still a natural language description to an image was magic given that only two or three years before that, we were barely able to look at an image and write a short phrase saying, “This is a dog on the beach.” And so that arc, that hockey curve, was just mind blowing to me. 

LEE: Did you have moments of skepticism? 

KOLLER: Yeah, I mean the early, you know, early versions of ChatGPT, where it was more like parlor tricks and poking it a little bit revealed all of the easy ways that one could break it and make it do really stupid things. I was like, yeah, OK, this is kind of cute, but is it going to actually make a difference? Is it going to solve a problem that matters? 

And I mean, obviously, I think now everyone agrees that the answer is yes, although there are still people who are like, yeah, but maybe it’s around the edges. I’m not among them, by the way, but … yeah, so initially there were like, “Yeah, this is cute and very impressive, but is it going to make a difference to a problem that matters?” 

LEE: Yeah. So now, maybe this is a good time to get into what you’ve been doing with ALS [amyotrophic lateral sclerosis]. You know, there’s a knee-jerk reaction from the technology side to focus on designing small molecules, on predicting, you know, their properties, you know, maybe binding affinity or aspects of ADME [absorption, distribution, metabolism, and excretion], you know, like absorption or dispersion or whatever. 

And all of that is very useful, but if I understand the work on ALS, you went to a much harder place, which is to actually identify and select targets. 

KOLLER: That’s right. 

LEE: So first off, just for the benefit of the standard listeners of this podcast, explain what that problem is in general. 

KOLLER: No, for sure. And I think maybe I’ll start by just very quickly talking about the drug discovery and development arc, …

LEE: Yeah.

KOLLER: … which, by and large, consists of three main phases. That’s the standard taxonomy. The first is what’s called sometimes target discovery or identifying a therapeutic hypothesis, which looks like: if I modulate this target in this disease, something beneficial will happen. 

Then, you have to take that target and turn it into a molecule that you can actually put into a person. It could be a small molecule. It could be a large molecule like an antibody, whatever. And then you have that construct, that molecule. And the last piece is you put it into a person in the context of a clinical trial, and you measure what has happened. And there’s been AI deployed towards each of those three stages in different ways. 

The last one is mostly like an efficiency gain. You know, the trial is kind of already defined, and you want to deploy technology to make it more efficient and effective, which is great because those are expensive operations. 

LEE: Yep. 

KOLLER: The middle one is where I would say the vast majority of efforts so far has been deployed in AI because it is a nice, well-defined problem. It doesn’t mean it’s easy, but it’s one where you can define the problem. It is, I need to inhibit this protein by this amount, and the molecule needs to be soluble and whatever and go past the blood-brain barrier. And you know probably within a year and a half or so, or two, if you succeeded or not. 

The first stage is the one where I would say the least amount of energy has gone because when you’re uncovering a novel target in the context of an indication, you don’t know that you’ve been successful until you go all the way to the end, which is the clinical trial, which is what makes this a long and risky journey. And not a lot of people have the appetite or the capital to actually do that. 

However, in my opinion, and that of, I think, quite a number of others, it is where the biggest impact can be made. And the reason is that while pharma has its deficiencies, making good molecules is actually something they’re pretty good at. 

It might take them longer than it should, maybe it’s not as efficient as it could be, but at the end of the day, if you tell them to drug a target, pharma is actually pretty good at generating those molecules. However, when you put those molecules into the clinic, 90% of them fail. And the reason they fail is not by and large because the molecule wasn’t good. In the majority of cases, it’s because the target you went after didn’t do anything useful in the context of the patient population in which you put it.

And so in order to fix the inefficiency of this industry, which is incredible inefficiency, you need to address the problem at the root, and the root is picking the right targets to go after. And so that is what we elected to do. 

It doesn’t mean we don’t make molecules. I mean, of course, you can’t just end up with a target because a target is not actionable. You need to turn it into a molecule. And we absolutely do that. And by the way, the partnership with Lilly is actually one where they help us make a molecule.

LEE: Yes. 

KOLLER: I mean, it’s our target. It’s our program. But Lilly is deploying its very state-of-the-art molecule-making capabilities to help us turn that target into a drug. 

LEE: So let’s get now into the machine learning of this. Again, this just strikes me as such a difficult problem to solve. 

KOLLER: Yeah. 

LEE: So how does machine learning … how does AI help you? 

KOLLER: So I think when you look at how people currently select targets, it’s a combination of oftentimes at this point, with an increasing respect for the power of human genetics, some search for a genetic association, oftentimes with a human-defined, highly subjective, highly noisy clinical outcome, like some ICD [International Classification of Diseases] code. 

And those are often underpowered and very difficult to deconvolute the underlying biology. You combine that with some mechanistic interrogation in a highly reductionist model system looking at a small number of readouts, biochemical readouts, that a biologist thinks are relevant to the disease. Like does this make this, whatever, cholesterol go up or amyloid beta go down? Or whatever. And then you take that as the second stage, and you pick, based on typically human intuition about, Oh, this one looks good to me, and then you take that forward. 

What we’re doing is an attempt to be as unbiased and holistic as possible. So, first of all, rather than rely on human-defined clinical endpoints, like this person has been diagnosed with diabetes or fatty liver, we try and measure as much as we can a holistic physiological state and then use machine learning to find structure, patterns in that human physiological readouts, imaging readouts, and omics readouts from blood, from tissue, different kinds of imaging, and say, these are different vectors that this disease takes, this group of individuals, and here’s a different group of individuals that maybe from a diagnostical perspective are all called the same thing, but they are actually exhibiting a very different biology underlying it. 

And so that is something that doesn’t emerge when a human being takes a reductionist view to looking at this high-content data, and oftentimes, they don’t even look at it and produce an ICD code. 

LEE: Right. Yep. 

KOLLER: The same approach, actually even the same code base, is taken in the cellular data. So we don’t just say, “Well, the thing that matters is, you know, the total amount of lipid in the cell or whatever.” Rather, we say, “Let’s look at multiple readouts, multiple ways of looking at the cells, combine them using the power of machine learning.” And again, looking at imaging readouts where a human’s eyes just glaze over looking at even a few dozen cells, far less a few hundreds of millions of cells, and understand what are the different biological processes that are going on. What are the vectors that the disease might take you in this direction, in this group of cells, or in that direction? 

And then importantly, we take all of that information from the human side, from the cellular side, across these different readouts, and we combine them using an integrative approach that looks at the combined weight of evidence and says, these are the targets that I have the greatest amount of conviction about by looking across all of that information. Whereas we know, and we know this, I’m sure you’ve seen this analysis done for clinicians, a human being typically is able to keep three or four things in their head at the same time. 

LEE: Right. 

KOLLER: A really good human being who’s really expert at what they do can maybe get to six to eight. 

LEE: Yeah. 

KOLLER: The machine learning has no problem doing a few hundred. 

LEE: Right. 

KOLLER: And so you put that together, and that allows you, to your earlier question, really select the targets around which you have the highest conviction. And then those are the ones that we then prioritize for interrogation in more expensive systems like mice and monkeys and then at the end of the day pick the small handful that one can afford to actually take into clinical trials. 

LEE: So now, Insitro recently received $25 million in milestone payments from Bristol Myers Squibb after discovering and selecting a novel drug target for ALS. Can you tell us a little bit more about that? 

KOLLER: We are incredibly excited about the first novel target, and there are a couple of others just behind it in line that seem, you know, quite efficacious, as well, that truly seem to reverse, albeit in a cellular system, what we now understand to be ALS pathology across multiple different dimensions. There’s been obviously many attempts made to try and address ALS, which by the way, horrible, horrible disease, worse than most cancers. It kills you almost inevitably in three to five years in a particularly horrific way. 

And what we have in our hands is a target that seems to revert a lot of the pathologies that are associated with the disease, which we now understand has to do with the mis-splicing of multiple proteins within the cell and creating defective versions of those proteins that are just not operational. And we are seeing reversion of many of those. 

So can I tell you for sure it’ll work in a human? No, there’s many steps between now and then. But we couldn’t be more excited about the opportunity to provide what we hope will be a disease-modifying intervention for these patients who really desperately need something. 

LEE: Well, it’s certainly been making waves in the biotech and biomedical world. 

KOLLER: Thank you. 

LEE: So we’ll be really watching very closely. 

So, you know, I think just reflecting on, you know, what we missed and what we got right in our book, I think in our book, we did have the insight that there would be an ability to connect, say, genotypic and phenotypic data and, you know, just broadly the kinds of clinical measurements that get made on real patients and that these things could be brought together. And I think the work that you’re doing really illustrates that in a very, very sophisticated, very ambitious way. 

But the fact that this could be connected all the way down to the biology, to the biochemistry, I think we didn’t have any clue what would happen, at least not this quickly. 

KOLLER: Well, I think the … 

LEE: And I realize, you’ve been at this for quite a few years, but still, it’s quite amazing. 

KOLLER: The thread that connects them is human genetics. And I think that has, to us, been, sort of, the, kind of, the connective tissue that allows you to translate across different systems and say, “What does this gene do? What does this gene do in this organ and in that organ? What does it do in this type of cell and in that type of cell?” 

And then use that as sort of the thread, if you will, that follows the impact of modulating this gene all the way from the simple systems where you can do the experiment to the complex systems where you can’t do the experiment until the very end, but you have the human genetics as a way of looking at the statistics and understanding what the impact might be. 

LEE: So I’d like to now switch gears and take … I want to take two steps in the remainder of this conversation towards the future. So one step into that future, of course, we’re living through now, which is just all of the crazy pace of work and advancement in generative AI generally, you know, just the scale of transformers, of post-training, and now inference scale and reasoning models and so on. And where do you see all of that going with respect to the goals that you have and that Insitro has? 

KOLLER: So I think first and foremost is the parallel, if you will, to the predictions that you focused on in your book, which is this will transform a lot of the core data processing tasks, the information tasks. And sure, the doctors and nurses is one thing. But if you just think of clinical trial operations or the submission of regulatory documents, these are all kind of simple data … they’re not simple, obviously, but they’re data processing tasks. They involve natural language. That’s not going to be our focus, but I hope that others will use that to make clinical trials faster, more efficient, less expensive. 

There’s already a lot of progress that’s happening on the molecular design side of things and taking hypotheses and turning them quickly and effectively into molecules. As I said, this is part of our work that we absolutely do and we don’t talk about it very much, simply because it’s a very crowded landscape and a lot of companies are engaged on that. But I think it’s really important to be able to take biological insights and turn them into new molecules. 

And then, of course, the transformer models and their likes play a very significant role in that sort of turning insights into molecules because you can have foundation models for proteins. There are increasing efforts to create foundation models for other categories of molecules. And so that will undoubtedly accelerate the process by which you can quickly generate different molecular hypotheses and test them and learn from what you did so that you can do fewer iterations … 

LEE: Right. 

KOLLER: … before you converge on a successful molecule. 

I do think that arguably the biggest impact as yet to be had is in that understanding of core human biology and what are the right ways to intervene in it. And that plays a role in a couple different ways. First of all, it certainly plays a role in which … if we are able to understand the human physiological state and, you know, the state of different systems all the way down to the cell level, that will inform our ability to pick hypotheses that are more likely to actually impact the right biologies underneath. 

LEE: Yep. Yeah. 

KOLLER: And the more data we’re able to collect about humans and about cells, the more successful our models will be at representing that human physiological state or the cell biological state and making predictions reliably on the impact of these interventions. 

The other side of it, though, and this comes back, I think, to themes that were very much in your book, is this will impact not only the early stages of which hypotheses we interrogate, which molecules we move forward, but also hopefully at the end of the day, which molecule we prescribe to which patient. 

LEE: Right. 

KOLLER: And I think there’s been obviously so much narrative over the years about precision medicine, personalized medicine, and very little of that has come to fruition, with the exception of, you know, certain islands in oncology, primarily on genetically driven cancers. 

But I think the opportunity is still there. We just haven’t been able to bring it to life because of the lack of the right kind of data. And I think with the increasing amount of human, kind of, foundational data that we’re able to acquire, things that are not sort of distilled through the eye of a clinician, for example, … 

LEE: Yes. 

KOLLER: … but really measurements of human pathology, we can start to get to some of that precision, carving out of the human population and then get to a world where we can prescribe the right medicine to the right patient and not only in cancer but also in other diseases that are also not a single disease. 

LEE: All right, so now to wrap up this time together, I always try to ask one more provocative last question. One of the dreams that comes naturally to someone like me or any of my colleagues, probably even to you, is this idea of, you know, wouldn’t it be possible someday to have a foundation model for biology or for human biology or foundation model for the human cell or something along these lines? 

And in fact, there are, of course, you and I are both aware of people who are taking that idea seriously and chasing after it. I have people in our labs that think hard about this kind of thing. Is it a reasonable thought at all? 

KOLLER: I have learned over the years to avoid saying the word never because technology proceeds in ways that you often don’t expect. And so will we at some point be able to measure the cell in enough different ways across enough different channels at the same time that you can piece together what a cell does? I think that is eminently feasible, not today, but over time. 

I don’t think it’s feasible using today’s technology, although the efforts to get there may expose where the biggest opportunities lie to, you know, build that next layer. So I think it’s good that people are working on really hard problems. I would also point out that even if one were to solve that really challenging problem of creating a model of a cell, there are thousands of different types of cells within the human body. 

They’re very different. They also talk to each other … 

LEE: Yep. 

KOLLER: … both within the cell type and across different cell types. So the combinatorial complexity of that system is, I think, unfathomable to many people. I mean, I would say to all of us. 

LEE: Yeah. 

KOLLER: And so even from that very lofty goal, there are multiple big steps that would need to be taken to a mechanistic model of the full organism. So will we ever get there? Again, you know, I don’t see a reason why this is impossible to do. So I think over time, technology will get better and will allow us to build more and more elaborate models of more and more complex systems. 

Patients can’t wait …

LEE: Right. Yeah. 

KOLLER: … for that to happen in order for us to get them better medicines. So I think there is a great basic science initiative on that side of things. And, in parallel, we need to make do with the data that we have or can collect or can print. We print a lot of data in our internal wet labs and get to drugs that are effective even though they don’t benefit from having a full-blown mechanistic model. 

LEE: Last question: where do you think we’ll be in five years? 

KOLLER: Phew. If I had answered that question five years ago, I would have been very badly embarrassed at the inaccuracy of my answer. [LAUGHTER] So I will not answer it today either. 

I will say that the thing about exponential curves is that they are very, very tricky, and they move in unexpected ways. I would hope that in five years, we will have made a sufficient investment in the generation of scientific data that we will be able to move beyond data that was generated entirely by humans and therefore insights that are derivative of what people already know to things that are truly novel discoveries. 

And I think in order to do that in, you know, math, maybe because math is entirely conceptual, maybe you can do that today. Math is effectively a construct of the human mind. I don’t think biology is a construct of the human mind, and therefore one needs to collect enough data to really build those models that will give rise to those novel insights. 

And that’s where I hope we will have made considerable progress in five years. 

LEE: Well, I’m with you. I hope so, too. Well, you know, thank you, Daphne, so much for this conversation. I learn a lot talking to you, and it was great to, you know, connect again on this. And congratulations on all of this success. It’s really groundbreaking. 

KOLLER: Thank you very much, Peter. It was a pleasure chatting with you, as well. 

[TRANSITION MUSIC] 

LEE: I still think of Daphne first and foremost as an AI researcher. And for sure, her research work in machine learning continues to be incredibly influential to this day. But it’s her work on AI-enhanced drug development that now is on the verge of making a really big difference on some of the most difficult diseases afflicting people today. 

In our book, Carey, Zak, and I predicted that AI might be a meaningful accelerant in biomedical research, but I don’t know that we foresaw the incredible potential specifically in drug development. 

Today, we’re seeing a flurry of activity at companies, universities, and startups on generative AI systems that aid and maybe even completely automate the design of new molecules as drug candidates. But now, in our conversation with Daphne, seeing AI go even further than that to do what one might reasonably have assumed to be impossible, to identify and select novel drug targets, especially for a neurodegenerative disease like ALS, it’s just, well, mind blowing. 

Let’s continue our deep dive on AI and biomedical research with this conversation with Noubar Afeyan: 

LEE: Noubar, thanks so much for joining. I’m really looking forward to this conversation. 

NOUBAR AFEYAN: Peter, thanks. Thrilled to be here. 

LEE: While I think most of the listeners to this podcast have heard of Flagship Pioneering, it’s still worth hearing from you, you know, what is Flagship? And maybe a little bit about your background. And finally, you found a way to balance science and business creation. And so, you know, your approach and philosophy to all of that. 

AFEYAN: Well, great. So maybe I’ll just start out by way of quick background. You know, my … and since we’re going to talk about AI, I’ll also highlight my first contact with the topic of AI. So as an undergraduate in 1980 up at McGill University, I was an engineering student, but I was really captivated by, at that time, the talk on the campus around the expert system, heuristic-based, rule-based kind of programs. 

LEE: Right. 

AFEYAN: And so actually I had the dubious distinction of writing my one and only college newspaper article. [LAUGHTER] That was a short career. And it was all about how artificial intelligence would be impacting medicine, would be impacting, you know, speech capture, translation, and some of the ideas that were there that it’s interesting to see now 45 years later re-emerge with some of the new learning-based models. 

My journey after college ended up taking me into biotechnology. In the early ’80s, I came to MIT to do a PhD. At the time, the field was brand new. I ended up being the first PhD graduate from MIT in this combination biology and engineering degree. And since then, I’ve basically been—so since 1987—a founder, a technologist in the space of biotechnology for human health and as well for planetary health. 

And then in 1999/2000 formed what is now Flagship Pioneering, which essentially was an attempt to bring together the three elements of what we know are important in startups. That is scientific capital, human capital, and financial capital. Right now, startups get that from different places. The science in our fields mostly come from academia, research hospitals. The human capital comes from other startups … 

LEE: Yeah. 

AFEYAN: … or large companies or some academics leave. And then the financial capital is usually venture capital, but there’s also now more and more other deeper pockets of money. 

What we thought was, what if all that existed in one entity and instead of having to convince each other how much they should believe the other if we just said, “Let’s use that power to go work on much further out things”? But in a way where nobody would believe it in the beginning, but we could give ourselves a little bit of time to do impactful big things. 

Twenty-five years later, that’s the road we’ve stayed on. 

LEE: OK. So let’s get into AI. Now, you know, what I’ve been asking guests is kind of an origin story. And there’s the origin story of contact with AI, you know, before the emergence of generative AI and afterwards. I don’t think there’s much of a point to asking you about the pre-ChatGPT era. But … so let’s focus on your first encounter with ChatGPT or generative AI. When did that happen, and what went through your head? 

AFEYAN: Yeah. So, if you permit me, Peter, just for very briefly, let me actually say I had the interesting opportunity over the last 25 years to actually stay pretty close to the machine learning world … 

LEE: Yeah. Yeah. 

AFEYAN: … because one, as you well know, among the most prolific users of machine learning has been the bioinformatics computational biology world because it’s been so data rich that anything that can be done, people have thrown at these problems because unlike most other things, we’re not working on man-made data. We’re looking at data that comes from nature, the complexity of which far exceeds our ability to comprehend. 

So you could imagine that any approach to statistically reduce complexity, get signal out of scant data—that’s a problem that’s been around. 

The other place where I’ve been exposed to this, which I’m going to come back to because that’s where it first felt totally different to me, is that some 25 years ago, actually the very first company we started was a company that attempted to use evolutionary algorithms to essentially iteratively evolve consumer-packaged goods online. Literally, we tried to, you know, consider features of products as genes and create little genomes of them. And by recombination and mutation, we could create variety. And then we could get people through panels online—this was 2002/2003 timeframe—we could essentially get people through iterative cycles of voting to create a survival of the fittest. And that’s a company that was called Affinnova. 

The reason I say that is that I knew that there’s a much better way to do this if only: one, you can generate variety … 

LEE: Yeah. 

AFEYAN: … without having to prespecify genes. We couldn’t do that before. And, two, which we’ve come back to nowadays, you can actually mimic how humans think about voting on things and just get rid of that element of it. 

So then to your question of when does this kind of begin to feel different? So you could imagine that in biotechnology, you know, as an engineer by background, I always wanted to do CAD, and I picked the one field in which CAD doesn’t exist, which is biology. Computer-aided design is kind of a notional thing in that space. But boy, have we tried. For a long time, …

LEE: Yep. 

AFEYAN: … people would try to do, you know, hidden Markov models of genomes to try to figure out what should be the next, you know, base that you may want to or where genes might be, etc. But the notion of generating in biology has been something we’ve tried for a while. And in the late teens, so kind of 2018, ’17, ’18, because we saw deep learning come along, and you could basically generate novelty with some of the deep learning models … and so we started asking, “Could you generate a protein basically by training a correspondence table, if you will, between protein structures and their underlying DNA sequence?” Not their protein sequence, but their DNA sequence. 

LEE: Yeah. 

AFEYAN: So that’s a big leap. So ’17/’18, we started this thing. It was called 56. It was FL56, Flagship Labs 56, our 56th project. 

By the way, we started this parallel one called “57” that did it in a very different way. So one of them did pure black box model-building. The other one said, you know what, we don’t want to do the kind of … at that time, AlphaFold was in its very early embodiments. And we said, “Is there a way we could actually take little, you know, multi amino acid kind of almost grammars, if you will, a little piece, and then see if we could compose a protein that way?” So we were experimenting. 

And what we found was that actually, if you show enough instances and you could train a transformer model—back in the day, that’s what we were using—you could actually, say, predict another sequence that should have the same activity as the first one. 

LEE: Yeah. 

AFEYAN: So we trained on green fluorescent proteins. Now, we’re talking about seven years ago. We trained on enzymes, and then we got to antibodies. 

With antibodies, we started seeing that, boy, this could be a pretty big deal because it has big market impact. And we started bringing in some of the diffusion models that were beginning to come along at that time. And so we started getting much more excited. This was all done in a company that subsequently got renamed from FL56 to Generate:Biomedicines, … 

LEE: Yep, yep. 

AFEYAN: … which is one of the leaders in protein design using the generative techniques. It was interesting because Generate:Biomedicines is a company that was called that before generative AI was a thing, [LAUGHTER] which was kind of very ironic. 

And, of course, that team, which operates today very, very kind of at the cutting edge, has published their models. They came up with this first Chroma model, which is a diffusion-based model, and then started incorporating a lot of the LLM capabilities and fusing them. 

Now we’re doing atomistic models and many other things. The point being, that gave us a glimpse of how quickly the capability was gaining, … 

LEE: Yeah. Yeah. 

AFEYAN: … just like evolution shows you. Sometimes evolution is super silent, and then all of a sudden, all hell breaks loose. And that’s what we saw. 

LEE: Right. One of the things that I reflect on just in my own journey through this is there are other emotions that come up. One that was prominent for me early on was skepticism. Were there points when even in your own work, transformer-based work on this early on, that you had doubts or skepticism that these transformer architectures or diffusion-based approaches would be worth anything? 

AFEYAN: You know, it’s interesting, I think that, I’m going to say this to you in a kind of a friendly way, but you’ll understand what I mean. In the world I live in, it’s kind of like the slums of innovation, [LAUGHTER] kind of like just doing things that are not supposed to work. The notion of skepticism is a luxury, right. I assume everything we do won’t work. And then once in a while I’m wrong. 

And so I don’t actually try to evaluate whether before I bring something in, like just think about it. We, some hundred or so times a year, ask “what if” questions that lead us to totally weird places of thought. We then try to iterate, iterate, iterate to come up with something that’s testable. Then we go into a lab, and we test it. 

So in that world, right, sitting there going, like, “How do I know this transformer is going to work?” The answer is, “For what?” Like, it’s going to work. To make something up … well, guess what? We knew early on with LLMs that hallucination was a feature, not a bug for what we wanted to do. 

So it’s just such a different use that, of course, I have trained scientific skepticism, but it’s a little bit like looking at a competitive situation in an ecology and saying, “I bet that thing’s going to die.” Well, you’d be right—most of the time, you’d be right. [LAUGHTER] 

So I just don’t … like, it … and that’s why—I guess, call me an early adopter—for us, things that could move the needle even a little, but then upon repetition a lot, let alone this, … 

LEE: Yeah. 

AFEYAN: … you have to embrace. You can’t wait there and say, I’ll embrace it once it’s ready. And so that’s what we did. 

LEE: Hmm. All right. So let’s get into some specifics and what you are seeing either in your portfolio companies or in the research projects or out in the industry. What is going on today with respect to AI really being used for something meaningful in the design and development of drugs? 

AFEYAN: In companies that are doing as diverse things as—let me give you a few examples—a project that’s now become a named company called ProFound Therapeutics that literally discovered three, four years ago, and would not have been able to without some of the big data-model-building capabilities, that our cells make literally thousands, if not tens of thousands, of more proteins than we were aware of, full stop. 

We had done the human genome sequence, there were 20,000 genes, we thought that there was … 

LEE: Wow. 

AFEYAN: … maybe 70-80,000, 100,000 proteins, and that’s that. And it turns out that our cells have a penchant to express themselves in the form of proteins, and they have many other ways than we knew to do that. 

Now, so what does that mean? That means that we have generated a massive amount of data, the interpretation of which, the use of which to guide what you do and what these things might be involved with is purely being done using the most cutting-edge data-trained models that allow you to navigate such complexity. 

LEE: Wow. Hmm. 

AFEYAN: That’s just one example. Another example: a company called Quotient Therapeutics, again three, four years old. I can talk about the ones that are three, four years old because we’ve kind of gotten to a place where we’ve decided that it’s not going to fail yet, [LAUGHTER] so we can talk about it. 

You know, we discovered—our team discovered—that in our cells, right, so we know that when we get cancer, our cells have genetic mutations in them or DNA mutations that are correlated and often causal to the hyperproliferative stages of cancer. But what we assume is that all the other cells in our body, pretty much, have one copy of their genes from our mom, one copy from our dad, and that’s that. 

And when very precise deep sequencing came along, we always asked the question, “How much variation is there cell to cell?” 

LEE: Right. 

AFEYAN: And the answer was it’s kind of noise, random variation. Well, our team said, “Well, what if it’s not really that random?” because upon cell division cycles, there’s selection happening on these cells. And so not just in cancer but in liver cells, in muscle cells, in skin cells … 

LEE: Oh, interesting. 

AFEYAN: … can you imagine that there’s an evolutionary experiment that is favoring either compensatory mutations that are helping you avoid disease or disease-caused mutations that are gaining advantage as a way to understand the mechanism? Sure enough—I wouldn’t be telling you otherwise—with massive amount of single cell sequencing from individual patient samples, we’ve now discovered that the human genome is mutated on average in our bodies 10,000 times, like over every base, like, it’s huge numbers. 

And we’re finding very interesting big signals come out of this massive amount of data. By the way, data of the sort that the human mind, if it tries to assign causal explanations to what’s happening … 

LEE: Right. 

AFEYAN: … is completely inadequate. 

LEE: When you think about a language model, we’re learning from human language, and the totality of human language—at least relative to what we’re able to compute today in terms of constructing a model—the totality of human language is actually pretty limited. And in fact, you know, as is always written about in click-baity titles, you know, the big model builders are actually starting to run short. 

AFEYAN: Running out, running out, yes. [LAUGHTER] 

LEE: But one of the things that perplexes me and maybe even worries me—like these two examples—are generally in the realm of cellular biology and the complexity. Let’s just take the example of your company, ProFound. You know, the complexity of what’s going on and the potential genetic diversity is such that, can we ever have enough data? You know, because there just aren’t that many human beings. There just aren’t that many samples. 

AFEYAN: Well, it depends on what you want to train, right. So if you want to train a de novo evolutionary model that could take you from bacteria to human mammalian cells and the like, there may not be—and I’m not an expert in that—but that’s a question that we often kind of think about. 

But if you’re trying to train a … like you know what the proteins we know about, how they interact with pathways and disease mechanisms and the like. Now all of a sudden you find out that there’s a whole continent of them missing in your explanations. But there are things you can reason, in quotations, through analogy, functional analogy, sequence analogy, homology. So there’s a lot of things that we could do to essentially make use of this, even though you may not have the totality of data needed to, kind of, predict, based on a de novo sequence, exactly what it’s going to do. 

So I agree with the comparison. But … but you’re right. The complexity is … just keep in mind, on average, a protein may be interacting with 50 to 100 other proteins. 

LEE: Right. 

AFEYAN: So if you find thousands of proteins, you’ve found a massive interaction space through which information is being processed in a living cell. 

LEE: But do you find in your AI companies that access to data ends up being a key challenge? Or, you know, how central is that? 

AFEYAN: Access to data is a key challenge for the companies we have that are trying to build just models. But that’s the minority of things we do. The majority of things we do is to actually co-develop the data and the models. And as you know well, because you guys, you know, have given us some ideas around this space, that, you know, you could generate data and then think about what you’re going to do with it, which is the way biotech has operated with bioinformatics. 

LEE: Right, right. 

AFEYAN: Or you could generate bespoke data that is used to train the model that’s quite separate from what you would have done in the natural course of biology. So we’re doing much more of the latter of late, and I think that’ll continue. So, but these things are proliferating. 

I mean, it’s hard to find a place where we’re not using this. And the “this” is any and all data-driven model building, generative, LLM-based, but also every other technique to make progress. 

LEE: Sure. So now moving away from the straight biochemistry applications, what about AI in the process of building a business, of making investment decisions, of actually running an operation? What are you seeing there? 

AFEYAN: So, well, you know, Moderna, which is a company that I’m quite proud of being a founder and chairman of, has adopted a significant, significant amount of AI embedded into their operations in all aspects: from the manufacturing, quality control, the clinical monitoring, the design—every aspect. And in fact, they’ve had a partnership that they’ve had for a little while here with OpenAI, and they’ve tried many different ways to stay at the cutting edge of that. 

So we see that play out at some scale. That’s a 5,000-, 6,000-person organization, and what they’re doing is a good example of what early adopters would do, at least in our kind of biotechnology company. 

But then, you know, in our space, I would say the efficiency impact is kind of no different, than, you know, anywhere else in academia you might adopt it or in other kinds of companies. But where I find it an interesting kind of maybe segue is the degree to which it may fundamentally change the way we think about how to do science, which is a whole other use, right? 

LEE: Right. 

AFEYAN: So it’s not an efficiency gain per se, although it’s maybe an effectiveness gain when it comes to science, but can you just fundamentally train models to generate hypotheses? 

LEE: Yep. 

AFEYAN: And we have done that, and we’ve been doing this for the last three years. And now it’s getting better and better, the better these reasoning engines are getting and kind of being able to extrapolate and train for novelty. Can you convert that to the world’s best experimental protocol to very precisely falsify your hypothesis, on and on? 

That closing of that loop, kind of what we call autonomous science, which we’ve been trying to do for the last two, three years and are making some progress in, that to me is another kind of bespoke use of these things, not to generate molecules in its chemistry, but to change the behavior of how science is done. 

LEE: Yeah. So I always end with a couple of provocative questions, but I need—before we do that, while we’re on this subject—to get your take on Lila Sciences. 

And there is a vision there that I think is very interesting. It’d be great to hear it described by you. 

AFEYAN: Sure. So Lila, after operating for two to three years in kind of a preparatory kind of stealth mode, we’ve now had a little bit more visibility around, and essentially what we’re trying to do there is to create what we call automated science factories, and such a factory would essentially be able to take problems, either computationally specified or human-specified, and essentially do the experimental work in order to either make an optimization happen or enable something that just didn’t exist. And it’s really, at this point, we’ve shown proof of concept in narrow areas. 

LEE: Yep. 

AFEYAN: But it’s hard to say that if you can do this, you can’t do some other things, so we’re just expanding it that way. We don’t think we need a complete proof or complete demonstration of it for every aspect. 

LEE: Right. 

AFEYAN: So we’re just kind of being opportunistic. The idea for Lila is to partner with a number of companies. The good news is, within Flagship, there’s 48 of them. And so there’s a whole lot of them they can partner with to get their learning cycles. But eventually they want to be a real alternative to every time somebody has an idea, having to kind of go into a lab and manually do this. 

I do want to say one thing we touched on, Peter, though, just on that front, which is … 

LEE: Yep. 

AFEYAN: … if you say, like, “What problem is this going to solve?” It’s several, but an important one is just the flat-out human capacity to reason on this much data and this much complexity that is real. Because nature doesn’t try to abstract itself in a human-understandable form. 

LEE: Right. Yeah. 

AFEYAN: In biology, since it’s kind of like progress happens through evolutionary kind of selections, the evidence of which [has] long been lost, and so therefore, you just see what you have, and then it has a behavior. I really do think that there’s something to be said, and I want to—just for your audience—lay out a provocative, at least, thought on all this, which Lila is a beginning embodiment of, which is that I really think that what’s going to happen over the next five, 10 years, even while we’re all fascinated with the impending arrival of AGI [artificial general intelligence] is really what I call poly-intelligence, which is the combination of human intelligence, machine intelligence, AI, and nature’s intelligence. 

We’re all fascinated at the human-machine interface. We know the human-nature interface, but imagine the machine-nature interface—that is, actually letting loose a digital kind of information processing life form through the algorithms that are being developed and the commensurately complex, maybe much more complex. We’ll see. And so now the question becomes, what does the human do? 

And we’re living in a world which is human dominated, which means the humans say, “If I don’t understand it, it’s not real, basically. And if I don’t understand it, I can’t regulate it.” And we’re going to have to make peace with the fact that we’re not going to be able to predictably affect things without necessarily understanding them the way we could if we just forced ourselves to only work on problems we can understand. And that world we’re not ready for at all. 

LEE: Yeah. All right. So this one I predict is going to be a little harder for you because I think while you think about the future, you live very much in the present. But I’d like you to make some predictions about what the biotech and biopharmaceutical industries are going to be able to do two years from now, five years from now, 10 years from now. 

AFEYAN: Yeah, well, it’s hard for me because you know my nature, which is that I think this is all emergent. 

LEE: Right. 

AFEYAN: And so I would avoid the conceit of predicting. So I would say, with a positive predictive value of less than 10%, I’m happy to answer your question. So I’m not trying to score high [LAUGHTER] because I really think that my job is to envision it, not to predict it. And that’s a little bit different, right? 

LEE: Yeah, I actually was trying to pick what would be the hardest possible question I could ask you, [LAUGHTER] and this is what I came up with. 

AFEYAN: Yeah, no, no, I’m kidding here. So now look, I think that we will cross this threshold of understandability. And of course you’re seeing that in a lot of LLM things today. And of course, people are trying to train for things that are explainers and all that whole, there’s a whole world of that. But I think at some point we’re going to have to kind of let go and get comfortable working on things that, you know … 

I sometimes tell people, you know, and I’m not the first, but scientists and engineers are different, it’s said, in that engineers work on things that they don’t wait until they get a full understanding of before they work with them. Well, now scientists are going to have to get used to that, too, right? 

LEE: Yeah. Yeah. 

AFEYAN: Because insisting that it’s only valid if it’s understandable. So, I would say, look, I hope that the time … for example, I think major improvements will be made in patient selection. If we can test drugs on patients that are more synchronized as to the stage of their disease … 

LEE: Yep. 

AFEYAN: … I think the answer will be much better. We’re working on that. It’s a company called Etiome, very, very early stage. It’s really beautiful data, very early data that shows that when we talk about MASH [metabolic dysfunction-associated steatohepatitis], liver disease, when we talk about Parkinson’s, there’s such a heterogeneity, not only of the subset type of the disease, but the stage of the disease, that this notion that you have stage one cancer, stage two cancer, again, nobody told nature there’s stages of that kind. It’s a continuum. 

But if you can synchronize based on training, kind of, the ability to detect who are the patients that are in enough of a close proximity that should be treated so that the trial—much smaller a trial size—could give you a drug, then afterwards, you can prescribe it using these approaches. 

Kind of we’re going to find that what we thought is one disease is more like 15 diseases. That’s bad news because we’re not going to be able to claim that we can treat everything, which we can’t. It’s good news in that there are going to be people who are going to start making much more specific solutions to things. 

LEE: Right. 

AFEYAN: So I can imagine that. I can imagine a generation of, kind of, students who are going to be able to play in this space without having 25 years of graduate education on the subject. So what is deemed knowledge sufficient to do creative things will change. I can go on and on, but I think all this is very close by and it’s very exciting. 

LEE: Noubar, I just always have so much fun, and I learn really a lot. It’s high-density learning when I talk to you. And so I hope our listeners feel the same way. It’s something I really appreciate. 

AFEYAN: Well, Peter, thanks for this. And I think your listeners know that if I was asking you questions, you would be answering them with equal if not more fascinating stuff. So, thanks for giving me the chance to do that today. 

[TRANSITION MUSIC] 

LEE: I’m always fascinated by Noubar’s perspectives on fundamental research and how it connects to human health and the building of successful companies. I see him as a classic “systems thinker,” and by that, I mean he builds impressive things like Flagship Pioneering itself, which he created as a kind of biomedical innovation system. 

In our conversation, I was really struck by the fact that he’s been thinking about the potential impact of transformers—transformers being the fundamental building block of large language models—as far back as 2017, when the first paper on the attention mechanism in transformers was published by Google. 

But, you know, it isn’t only about using AI to do things like understand and design molecules and antibodies faster. It’s interesting that he is also pushing really hard towards a future where AI might “close the loop” from hypothesis generation, to experiment design, to analysis, and so on. 

Now, here’s my conversation with Dr. Eric Topol: 

LEE: Eric, it’s really great to have you here. 

ERIC TOPOL: Oh, Peter, I’m thrilled to be here with you here at Microsoft. 

LEE: You’re a super famous person. Extremely well known to researchers even in computer science, as we have here at Microsoft Research. 

But the question I’d like to ask is, how would you explain to your parents what you do every day? 

TOPOL: [LAUGHS] That’s a good question. If I was just telling them I’m trying to come up with better ways to keep people healthy, that probably would be the easiest way to do it because if I ever got in deeper, I would lose them real quickly. They’re not around, but just thinking about what they could understand. 

LEE: Right. 

TOPOL: I think as long as they knew it was work centered on innovative paths to promoting and preserving human health, that would get to them, I think. 

LEE: OK, so now, kind of the second topic, and then we let the conversation flow, is about origin stories with respect to AI. And with most of our guests, you know, I factor that into two pieces: the encounters with AI before ChatGPT and what we call generative AI and then the first contacts after. 

And, of course, you have extensive contact with both now. But let’s start with how you got interested in machine learning and AI prior to ChatGPT. How did that happen? 

TOPOL: Yeah, it was out of necessity. So back, you know, when I started at Scripps at the end of ’06, we started accumulating, you know, massive datasets. First, it was whole genomes. We did one of the early big cohorts of 1,400 people of healthy aging. We called it the Wellderly whole genome sequence. 

And then we started big in the sensor world, and then we started saying, what are we going to do with all this data, with electronic health records and all those sensors? And now we got whole genomes. 

And basically, what we were doing, we were in hoarding mode. We didn’t have a way to meaningfully analyze it. 

LEE: Right. 

TOPOL: You would read about how, you know, data is the new oil and, you know, gold and whatnot. But we just didn’t have a way to extract the juice. And even when we wanted to analyze genomes, it was incredibly laborious. 

LEE: Yeah. 

TOPOL: And we weren’t extracting a lot of the important information. So that’s why … not having any training in computer science, when I was doing the … about three years of work to do the book Deep Medicine, I started really, first auto-didactic about, you know, machine learning. And then I started contacting a lot of the real top people in the field and hanging out with them, and learning from them, getting their views as to, you know, where we are today, what models are coming in the future. 

And then I said, “You know what? We are going to be able to fix this mess.” [LAUGHS] We’re going to get out of the hoarding phase, and we’re going to get into, you know, really making a difference. 

So that’s when I embraced the future of AI. And I knew, you know, back—that was six years ago when it was published and probably eight or nine years ago when I was doing the research, and I knew that we weren’t there yet. 

You know, at the time, we were seeing the image interpretation. That was kind of the early promise. But really, the models that were transformative, the transformer models, they were incubating back in 2017. So people knew something was brewing. 

LEE: Right. Yes. 

TOPOL: And everyone said we’re going to get there. 

LEE: So then, ChatGPT comes out November of 2022; there’s GPT-4 in 2023, and now a lot has happened. Do you remember what your first encounter with that technology was? 

TOPOL: Oh, sure. First, ChatGPT. You know, in the last days of November ’22, I was just blown away. I mean, I’m having a conversation. I’m having fun. And this is a humanoid responding to me. I said, “What?” You know? So that was to me, a moment I’ll never forget. And so I knew that the world was, you know, at a very kind of momentous changing point. 

Of course, knowing, too, that this is going to be built on, and built on quickly. Of course, I didn’t know how soon GPT-4 and all the others were going to come forward, but that was a wake-up call that the capabilities of AI had just made a humongous jump, which seemingly was all of a sudden, although I did know this had been percolating … 

LEE: Right. 

TOPOL: … you know, for what, at least five years, that, you know, it really was getting into its position to do this. 

LEE: I know one of the things that was challenging psychologically and emotionally for me is, it made me rethink a lot of things that were going on in Microsoft Research in areas like causal reasoning, natural language processing, speech processing, and so on. 

I’m imagining you must have had some emotional struggles too because you have this amazing book, Deep Medicine. Did you have to … did it go through your mind to rethink what you wrote in Deep Medicine in light of this or, or, you know, how did that feel? 

TOPOL: It’s funny you ask that because in this one chapter I have on the virtual health coach, I wrote a whole bunch of scenarios … 

LEE: Yeah. 

TOPOL: … that were very kind of futuristic. You know, about how the AI interacts with the person’s health and schedules their appointment for this and their scan and tells them what lab tests they should tell their doctor to have, and, you know, all these things. And I sent a whole bunch of these, thinking that they were a little too far-fetched. 

LEE: Yes. 

TOPOL: And I sent them to my editor when I wrote the book, and he says, “Oh, these are great. You should put them all in.” [LAUGHTER] What I didn’t realize is they weren’t that, you know, they were all going to happen. 

 LEE: Yeah. They weren’t that far-fetched at all. 

TOPOL: Not at all. If there’s one thing I’ve learned from all this, is our imagination isn’t big enough. 

 LEE: Yeah. 

TOPOL: We think too small. 

LEE: Now in our book that Carey, Zak, and I wrote, you know, we made, you know, we sort of guessed that GPT-4 might help biomedical researchers, but I don’t think that any of us had the thought in mind that the architecture around generative AI would be so directly applicable to, you know, say, protein structures or, you know, to clinical health records and so on. 

And so a lot of that seems much more obvious today. But two years ago, it wasn’t. But we did guess that biomedical researchers would find this interesting and be helped along. 

So as you reflect over the past two years, you know, do you have things that you think are very important, kind of, meaningful applications of generative AI in the kinds of research that Scripps does? 

TOPOL: Yeah. I mean, I think for one, you pointed out how the term generative AI is a misnomer. 

LEE: Yeah. 

TOPOL: And so it really was prescient about how, you know, it had a pluripotent capability in every respect, you know, of editing and creating. So that was something that I think was telling us, an indicator that this is, you know, a lot bigger than how it’s being labeled. And our expectations can actually be more than what we had seen previously with the earlier version. 

So I think what’s happened is that now, we keep jumping. It’s so quick that we can’t … you know, first we think, oh, well, we’ve gone into the agentic era, and then we could pass that with reasoning. [LAUGHTER] And, you know, we just can’t … 

LEE: Right. 

TOPOL: It’s just wild. 

LEE: Yeah. 

TOPOL: So I think so many of us now will put in prompts that will necessitate or ideally result in a not-immediate gratification, but rather one that requires, you know, quite a bit of combing through the corpus of knowledge … 

LEE: Yeah. 

TOPOL: … and getting, with all the citations, a report or a response. And I think now this has been a reset because to do that on our own, it takes, you know, many, many hours. And it’s usually incomplete. 

But one of the things that was so different in the beginning was you would get the references from up to a year and a half previously. 

LEE: Yep. 

TOPOL: And that’s not good enough. [LAUGHS] 

LEE: Right. 

TOPOL: And now you get references, like, from the day before. 

LEE: Yes. Yeah. 

TOPOL: And so, you say, “Why would you do a regular search for anything when you could do something like this?” 

LEE: Yeah. 

TOPOL: And then, you know, the reasoning power. And a lot of people who are not using this enough still are talking about, “Well, there’s no reasoning.” 

LEE: Yeah.

TOPOL: Which you dealt with really well in the book. But what, of course, you couldn’t have predicted is the new dimensions. 

LEE: Right. 

TOPOL: I think you nailed it with GPT-4. But it’s all these just, kind of, stepwise progressions that have been occurring because of the velocity that’s unprecedented. I just can’t believe it. 

LEE: We were aware of the idea of multi-modality, but we didn’t appreciate, you know, what that would mean. Like AlphaFold [protein structure database], you know, the ability for AI to understand—or crystal structures—to really start understanding something more fundamental about biochemistry or medicinal chemistry. 

I have to admit, when we wrote the book, we really had no idea. 

TOPOL: Well, I feel the same way. I still today can’t get over it because the reason AlphaFold and Demis [Hassabis] and John Jumper [AlphaFold’s co-creators] were so successful is there was this protein databank. 

LEE: Yes. 

TOPOL: And it had been kept for decades. And so, they had the substrate to work with. 

LEE: Right. 

TOPOL: So, you say, “OK, we can do proteins.” But then how do you do everything else? 

LEE: Right. 

TOPOL: And so this whole, what I call, “large language of life model” work, which has gone into high gear like I’ve never seen. 

LEE: Yeah. 

TOPOL: You know, now to this holy grail of a virtual cell, and … 

LEE: Yeah. 

TOPOL: You know, it’s basically … it’s … it was inspired by proteins. But now it’s hitting on, you know, ligands and small molecules, cells. I mean, nothing is being held back here. 

LEE: Yeah. 

TOPOL: So how could anybody have predicted that? 

LEE: Right. 

TOPOL: I sure wouldn’t have thought it would be possible at this point. 

LEE: Yeah. So just to challenge you, where do you think that is going to be two years from now? Five years from now? Ten years from now? Like, so you talk about a virtual cell. Is that achievable within 10 years, or is that still too far out? 

TOPOL: No, I think within 10 years for sure. You know the group that got assembled that Steve Quake pulled together? 

LEE: Right. 

TOPOL: I think it has 42 authors in a paper in Cell. The fact that he could get these 42 experts in life science and some in computer science to come together and all agree … 

LEE: Yeah. 

TOPOL: … that not only is this a worthy goal, but it’s actually going to be realized, that was impressive. 

I challenged him about that. How did you get these people all to agree? So many of them were naysayers. And by the time the workshop finished, they were fully convinced. I think that what we’re seeing is so much progress happening so quickly. And then all the different models, you know, across DNA, RNA, and everything are just zooming forward. 

LEE: Yeah. 

TOPOL: And it’s just a matter of pulling this together. Now when we have that, and I think it could easily be well before a decade and possibly, you know, between the five- and 10-year mark—that’s just a guess—but then we’re moving into another era of life science because right now, you know, this whole buzz about drug discovery. 

LEE: Yep. 

TOPOL: It’s not… with the ability to do all these perturbations at a cellular level. 

LEE: Right. 

TOPOL: Or the cell of interest. 

LEE: Yeah. 

TOPOL: Or the cell-to-cell interactions or the intra-cell interaction. So once you nail that, yeah, it takes it to a kind of another predictive level that we haven’t really fathomed. So, yes, there’s going to be drug discovery that’s accelerated. But this would make that and also the underpinnings of diseases. 

LEE: Yeah. 

TOPOL: So the idea that there’s so many diseases we don’t understand now. And if you had virtual cell, … 

LEE: Yeah. 

TOPOL: … you would probably get to that answer … 

LEE: Yeah. 

TOPOL: … much more quickly. So whether it’s underpinnings of diseases or what it’s going to take to really come up with far better treatments—preventions—I think that’s where virtual cell will get us. 

LEE: There’s a technical question … I wonder if you have an opinion. You may or may not. There is sort of what I would refer to as ab initio approaches to this. You know, you start from the fundamental physics and chemistry, and we know the laws, we have the math and, you know, we can try to derive from there … in fact, we can even run simulations of that math to generate training data to build generative models and work up to a cell, or forget all of that and just take as many observations and measurements of, say, living cells as possible, and just have faith that hidden amongst all of the observational data, there is structure and language that can be derived. 

So that’s sort of bottom-up versus top-down approaches. Do you have an opinion about which way? 

TOPOL: Oh, I think you go after both. And clearly whenever you’re positing that you’ve got a virtual cell model that’s working, you’ve got to do the traditional methods as well to validate it, and … so all that. You know, I think if you’re going to go out after this seriously, you have to pull out all the stops. Both approaches, I think, are going to be essential. 

LEE: You know, if what you’re saying is true, and it is amazing to hear the confidence, the one thing I tried to explain to someone nontechnical is that for a lot of problems in medicine, we just don’t have enough data in a really profound way. And the most profound way to say that is, since Adam and Eve, there have only been an estimated 106 billion people who have ever lived. 

So even if we had the DNA of every human being, every individual of Homo sapiens, there are certain problems for which we would not have enough data. 

TOPOL: Sure. 

LEE: And so I think another thing that seems profound to me, if we can actually have a virtual cell, is we can actually make trillions of virtual … 

TOPOL: Yeah 

LEE: … human beings. The true genetic diversity could be realized for our species. 

TOPOL: I think you nailed it. The ability to have that type of data, no less synthetic data, I mean, it’s just extraordinary. 

LEE: Yeah. 

TOPOL: We will get there someday. I’m confident of that. We may be wrong in projections. And I do think [science writer] Philip Ball won’t be right that it will never happen, though. [LAUGHTER] No, I think that if there’s a holy grail of biology, this is it. 

LEE: Yeah. 

TOPOL: And I think you’re absolutely right about where that will get us. 

LEE: Yeah. 

TOPOL: Transcending the beginning of the species. 

LEE: Yeah. 

TOPOL: Of our species. 

LEE: Yeah. All right. So now, we’re starting to run short on time here. And so I wanted to ask you about, I’m in my 60s, so I actually think about this a lot more. [LAUGHTER] And I know you’ve been thinking a lot about longevity. And, of course, your new book, Super Agers. 

And one of the reasons I’m so eager to read it is it’s a topic very top of mind for me and actually for a lot of people. Where is this going? Because this is another area where you hear so much hype. At the same time, you see Nobel laureate scientists … 

TOPOL: Yeah. 

LEE: … working on this. 

TOPOL: Yeah. 

LEE: So, so what’s, what’s real there? 

TOPOL: Yeah. Well, it’s really … the real deal is the science of aging is zooming forward. 

And that’s exciting. But I see it bifurcating. On the one hand, all these new ideas, strategies to reverse aging are very ambitious. Like cell reprogramming and senolytics and, you know, the rejuvenation of our thymus gland, and it’s a long list. 

LEE: Yeah. 

TOPOL: And they’re really cool science, and it used to be the mouse lived longer. Now it’s the old mouse looks really young. 

LEE: Yeah. Yeah. 

TOPOL: All the different features. A blind mouse with cataracts is all of a sudden there’s no cataracts. I mean, so these things are exciting, but none of them are proven in people, and they all have significant risk, no less, you know, the expense that might be attached. 

LEE: Right. 

TOPOL: And some people are jumping the gun. They’re taking rapamycin, which can really knock out their immune system. So they all carry a lot of risk. And people are just getting a little carried away. We’re not there yet. 

But the other side, which is what I emphasize in the book, which is exciting, is that we have all these new metrics that came out of the science of aging. 

LEE: Yes. 

TOPOL: So we have clocks of the body. Our biological clock versus our chronological clock, and we have organ clocks. So I can say, you know, Peter, we’ve assessed all your organs and your immune system. And guess what? Every one of them is either at or less than your actual age. 

LEE: Right. 

TOPOL: And that’s very reassuring. And by the way, your methylation clock is also … I don’t need to worry about you so much. And then I have these other tests that I can do now, like, for example, the brain. We have an amazing protein p-Tau217 that we can say over 20 years in advance of you developing Alzheimer’s, … 

LEE: Yeah. 

TOPOL: … we can look at that, and it’s modifiable by lifestyle, bringing it down. It should be that you can change the natural history. So what we’ve seen is an explosion of knowledge of metrics, proteins, no less, you know, our understanding at the gene level, the gut microbiome, the immune system. So that’s what’s so exciting. How our immune system ages. Immunosenescence. How we have more inflammation—inflammaging—with aging. So basically, we have three diseases that kill us, that take away our health: heart, cancer, and neurodegenerative. 

LEE: Yep. 

TOPOL: And they all take more than 20 years. They all have a defective immune system inflammation problem, and they’re all going to be preventable. 

LEE: Yeah. 

TOPOL: That’s what’s so exciting. So we don’t have to have reverse aging. We can actually work on … 

LEE: Just prevent aging in the first place. 

TOPOL: the age-related diseases. So basically, what it means is: I got to find out if you have a risk, if you’re in this high-risk group for this particular condition, because if you are—and we have many levels, layers, orthogonal ways to check—we don’t just bank it all on one polygenic test. We’re going to have several ways, say this is the one we are going … 

And then we go into high surveillance, where, let’s say if it’s your brain, we do more p-Tau, if we need to do brain imaging—whatever it takes. And also, we do preventive treatments on top of the lifestyle [changes]. One of the problems we have today is that a lot of people know, generally, what good lifestyle factors are. Although I go through a lot more than people generally acknowledge. 

But they don’t incorporate them because they don’t know that they’re at risk and they could change their … extend their health span and prevent that disease. So what I at least put out there, a blueprint, is how we can use AI, because it’s multimodal AI, with all these layers of data, and then temporally, it’s like today you could say if you have two protein tests, not only are you going to have Alzheimer’s, but within a two-year time frame when … 

LEE: Yep. 

TOPOL: … and if you don’t change things, if we don’t gear up … you know, we can … we can completely prevent this, so … or at least defer it for a decade or more. So that’s why I’m excited, is that we made these strides in the science of aging. But we haven’t acknowledged the part that doesn’t require reversing aging. There’s this much less flashy, attainable, less risky approach … 

LEE: Yeah. 

TOPOL: than the one that … when you reverse aging, you’re playing with the hallmarks of cancer. They are like, if you look at the hallmarks of cancer … 

LEE: That has been one of the primary challenges. 

TOPOL: They’re lined up. 

LEE: Yeah. 

TOPOL: They’re all the same, you know, whether it’s telomeres, or whether it’s … you know … so this is the problem. I actually say in the book, I do think one of these—we have so many shots on goal—one of these reverse aging things will likely happen someday. But we’re nowhere close. 

On the other hand, let’s gear up. Let’s do what we can do. Because we have these new metrics that’s … people don’t … like, when I read the organ clock paper from Tony Wyss-Coray from Stanford. It was published end of ’23; it was the cover of Nature. It blew me away. 

LEE: Yeah. 

TOPOL: And I wrote a Substack [article] on it. And Tony said, “Well, that’s so nice of you.” I said, “So nice? This is revolutionary, you know.” [LAUGHTER] So … 

LEE: By the way, what’s so interesting is, how these things, this kind of understanding and AI, are coming together.

TOPOL: Yes. 

LEE: It’s almost eerie the timing of these things. 

TOPOL: Absolutely. Because you couldn’t take all these layers of data, just like we were talking about data hoarding.

LEE: Yep.

TOPOL: Now we have data hoarding on individuals with no way to be able to make these assessments of what level of risk, when, what are we going to do in this individual to prevent that? We can do that now. 

We can do it today. And we could keep building on that. So I’m really excited about it. I think that, you know, when I wrote the last book, Deep Medicine, it was that our overarching goal should be to bring back the patient-doctor relationship. I’m an old dog, and I know what it used to be when I got out of medical school. 

It’s totally … you couldn’t imagine how much erosion from the ’70s, ’80s to now. But now I have a new overarching goal. I’m thinking that that still is really important—humanity in medicine—but let’s prevent these three … big three diseases because it’s an opportunity that we’re not … you know, in medicine, all my life we’ve been hearing and talking about we need to prevent diseases. 

Curing is much harder than prevention. And the economics. Oh my gosh. But we haven’t done it. 

LEE: Yeah. 

TOPOL: Now we can do it. Primary prevention. We’d do really well. Somebody’s had heart attack. 

LEE: Yeah. 

TOPOL: Oh, we’re going to get all over it. Why did they have a heart attack in the first place? 

LEE: Well, the thing that makes so much sense in what you’re saying is that we understand we have an understanding both economically and medically that prevention is a good thing. And extending the concept of prevention to these age-related conditions, I think, makes all the sense in the world. 

You know, Eric, maybe on that optimistic note, it’s time to wrap up this conversation. Really appreciate you coming. Let me just brag in closing that I’m now the proud owner of an autographed copy of your latest book, and, really, thank you for that. 

TOPOL: Oh, thank you. I could spend the rest of the day talking to you. I’ve really enjoyed it. Thanks. 

[TRANSITION MUSIC] 

LEE: For me, the biggest takeaway from our conversation was Eric’s supremely optimistic predictions about what AI will allow us to do in much less than 10 years. 

You know, for me personally, I started off several years ago with the typical techie naivete that if we could solve protein folding using machine learning, we would solve human biology. But as I’ve gotten smarter, I’ve realized that things are way, way more complicated than that, and so hearing Eric’s techno-optimism on this is really both heartening and so interesting. 

Another thing that really caught my attention are Eric’s views on AI in medical diagnosis. That really stood out to me because within our labs here at Microsoft Research, we have been doing a lot of work on this, for example in creating foundation models for whole-slide digital pathology. 

The bottom line, though, is that biomedical research and development is really changing and changing quickly. It’s something that we thought about and wrote briefly about in our book, but just hearing it from these three people gives me reason to believe that this is going to create tremendous benefits in the diagnosis and treatment of disease. 

And in fact, I wonder now how regulators, such as the Food and Drug Administration here in the United States, will be able to keep up with what might become a really big increase in the number of animal and human studies that need to be approved. On this point, it’s clear that the FDA and other regulators will need to use AI to help process the likely rise in the pace of discovery and experimentation. And so stay tuned for more information about that. 

[THEME MUSIC] 

I’d like to thank Daphne, Noubar, and Eric again for their time and insights. And to our listeners, thank you for joining us. There are several episodes left in the series, including discussions on medical students’ experiences with AI and AI’s influence on the operation of health systems and public health departments. We hope you’ll continue to tune in. 

Until next time. 

[MUSIC FADES] 

The post How AI will accelerate biomedical research and discovery appeared first on Microsoft Research.

Use K8sGPT and Amazon Bedrock for simplified Kubernetes cluster maintenance

As Kubernetes clusters grow in complexity, managing them efficiently becomes increasingly challenging. Troubleshooting modern Kubernetes environments requires deep expertise across multiple domains—networking, storage, security, and the expanding ecosystem of CNCF plugins. With Kubernetes now hosting mission-critical workloads, rapid issue resolution has become paramount to maintaining business continuity.

Integrating advanced generative AI tools like K8sGPT and Amazon Bedrock can revolutionize Kubernetes cluster operations and maintenance. These solutions go far beyond simple AI-powered troubleshooting, offering enterprise-grade operational intelligence that transforms how teams manage their infrastructure. Through pre-trained knowledge and both built-in and custom analyzers, these tools enable rapid debugging, continuous monitoring, and proactive issue identification—allowing teams to resolve problems before they impact critical workloads.

K8sGPT, a CNCF sandbox project, revolutionizes Kubernetes management by scanning clusters and providing actionable insights in plain English through cutting-edge AI models, including Anthropic’s Claude, OpenAI, and Amazon SageMaker custom and open source models. Beyond basic troubleshooting, K8sGPT features sophisticated auto-remediation capabilities that function like an experienced Site Reliability Engineer (SRE), tracking change deltas against current cluster state, enforcing configurable risk thresholds, and providing rollback mechanisms through Mutation custom resources. Its Model Context Protocol (MCP) server support enables structured, real-time interaction with AI assistants for persistent cluster analysis and natural language operations. Amazon Bedrock complements this ecosystem by providing fully managed access to foundation models with seamless AWS integration. This approach represents a paradigm shift from reactive troubleshooting to proactive operational intelligence, where AI assists in resolving problems with enterprise-grade controls and complete audit trails.

This post demonstrates the best practices to run K8sGPT in AWS with Amazon Bedrock in two modes: K8sGPT CLI and K8sGPT Operator. It showcases how the solution can help SREs simplify Kubernetes cluster management through continuous monitoring and operational intelligence.

Solution overview

K8sGPT operates in two modes: the K8sGPT CLI for local, on-demand analysis, and the K8sGPT Operator for continuous in-cluster monitoring. The CLI offers flexibility through command-line interaction, and the Operator integrates with Kubernetes workflows, storing results as custom resources and enabling automated remediation. Both operational models can invoke Amazon Bedrock models to provide detailed analysis and recommendations.

K8sGPT CLI architecture

The following architecture diagram shows that after a user’s role is authenticated through AWS IAM Identity Center, the user runs the K8sGPT CLI to scan Amazon Elastic Kubernetes Service (Amazon EKS) resources and invoke an Amazon Bedrock model for analysis. The K8sGPT CLI provides an interactive interface for retrieving scan results, and model invocation logs are sent to Amazon CloudWatch for further monitoring. This setup facilitates troubleshooting and analysis of Kubernetes resources in the CLI, with Amazon Bedrock models offering insights and recommendations on the Amazon EKS environment.

The K8sGPT CLI comes with rich features, including a custom analyzer, filters, anonymization, remote caching, and integration options. See the Getting Started Guide for more details.

K8sGPT Operator architecture

The following architecture diagram shows a solution where the K8sGPT Operator installed in the EKS cluster uses Amazon Bedrock models to analyze and explain findings from the EKS cluster in real time, helping users understand issues and optimize workloads. The user collects these insights from the K8sGPT Operator by simply querying through a standard Kubernetes method such as kubectl. Model invocation logs, including detailed findings from the K8sGPT Operator, are logged in CloudWatch for further analysis.

In this mode, no additional CLI tools other than kubectl are required. In addition, the single sign-on (SSO) role that the user assumes doesn’t need Amazon Bedrock access, because the K8sGPT Operator assumes an AWS Identity and Access Management (IAM) machine role to invoke the Amazon Bedrock large language model (LLM).

When to use which mode

The following comparison summarizes the two modes and their common use cases.

K8sGPT CLI

  • Access management: Human role (IAM Identity Center)
  • Features: Rich feature set, including:
    • Analyzers
    • Filters
    • Anonymization
    • Integrations
  • Common use cases:
    • Integration with supported tooling (such as Prometheus and Grafana)
    • Custom analyzers and filtering for detailed, custom analysis
    • Anonymization requirements
    • User-based troubleshooting

K8sGPT Operator

  • Access management: Machine role (IAM)
  • Features:
    • Continuous scan and error reconciliation
    • Straightforward integration with AWS services
    • Flexibility in IAM permission changes
  • Common use cases:
    • Continuous monitoring and operation
    • Kubernetes operational dashboards and business-as-usual (BAU) operation
    • Integration with observability tools, or additional custom analyzers

In the following sections, we walk you through the two installation modes of K8sGPT.

Install the K8sGPT CLI

Complete the following steps to install the K8sGPT CLI:

  1. Enable Amazon Bedrock in the US West (Oregon) AWS Region. Make sure the role you use includes the following IAM actions so you can request or modify access to Amazon Bedrock FMs:
    1. aws-marketplace:Subscribe
    2. aws-marketplace:Unsubscribe
    3. aws-marketplace:ViewSubscriptions
  2. Request access to Amazon Bedrock FMs in US West (Oregon) Region:
    1. On the Amazon Bedrock console, in the navigation pane, under Bedrock configurations, choose Model access.
    2. On the Model access page, choose Enable specific models.
    3. Select the models, then choose Next and Submit to request access.
  3. Install K8sGPT following the official instructions.
  4. Add Amazon Bedrock and the FM as an AI backend provider to the K8sGPT configuration:
k8sgpt auth add --backend amazonbedrock --model anthropic.claude-3-5-sonnet-20240620-v1:0 --providerRegion <region-name>

Note: At the time of writing, K8sGPT includes support for Anthropic’s state-of-the-art Claude 4 Sonnet and 3.7 Sonnet models.

  5. Make the Amazon Bedrock backend the default:
k8sgpt auth default -p amazonbedrock
  6. Update your kubeconfig to connect to your EKS cluster:
aws eks update-kubeconfig --region <region-name> --name my-cluster
  7. Analyze issues within the cluster using Amazon Bedrock:
k8sgpt analyze --explain --backend amazonbedrock
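For repeatable scans (for example, from a scheduled job), it can help to assemble the CLI invocation programmatically. The following Python sketch is illustrative: the helper name is hypothetical, and the flags shown (--explain, --backend, --filter, --anonymize, --output) should be verified against `k8sgpt analyze --help` for your K8sGPT version:

```python
import shlex

def build_analyze_cmd(backend="amazonbedrock", filters=None, anonymize=False, output="json"):
    """Assemble a k8sgpt analyze invocation as an argv list (hypothetical helper)."""
    cmd = ["k8sgpt", "analyze", "--explain", "--backend", backend, "--output", output]
    for f in filters or []:
        cmd += ["--filter", f]     # e.g. Pod, Service, Deployment
    if anonymize:
        cmd.append("--anonymize")  # mask names/namespaces before sending to the backend
    return cmd

cmd = build_analyze_cmd(filters=["Pod"], anonymize=True)
print(shlex.join(cmd))
# k8sgpt analyze --explain --backend amazonbedrock --output json --filter Pod --anonymize
```

The resulting list can be passed to `subprocess.run` in a cron job or CI pipeline.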

Install the K8sGPT Operator

To install the K8sGPT Operator, first complete the following prerequisites:

  1. Install the latest version of Helm. To check your version, run helm version.
  2. Install the latest version of eksctl. To check your version, run eksctl version.

Create the EKS cluster

Create an EKS cluster with eksctl with the pre-defined eksctl config file:

cat >cluster-config.yaml <<EOF
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: eks
  region: us-west-2
  version: "1.32"
  tags:
    environment: eks
iam:
  withOIDC: true
  podIdentityAssociations: 
  - namespace: kube-system
    serviceAccountName: cluster-autoscaler
    roleName: pod-identity-role-cluster-autoscaler
    wellKnownPolicies:
      autoScaler: true  
managedNodeGroups:
  - name: managed-ng
    instanceType: m5.large
    minSize: 2
    desiredCapacity: 3
    maxSize: 5
    privateNetworking: true
    volumeSize: 30
    volumeType: gp3 
    tags:
      k8s.io/cluster-autoscaler/enabled: "true"
      k8s.io/cluster-autoscaler/eks: "owned"      
addonsConfig:
  autoApplyPodIdentityAssociations: true
addons:
  - name: eks-pod-identity-agent 
    tags:
      team: eks
  - name: vpc-cni
    version: latest  
  - name: aws-ebs-csi-driver
    version: latest  
  - name: coredns
    version: latest 
  - name: kube-proxy
    version: latest
cloudWatch:
 clusterLogging:
   enableTypes: ["*"]
   logRetentionInDays: 30
EOF

eksctl create cluster -f cluster-config.yaml

You should get the following expected output:
EKS cluster "eks" in "us-west-2" region is ready

Create an Amazon Bedrock and CloudWatch VPC private endpoint (optional)

To facilitate private communication between Amazon EKS and Amazon Bedrock, as well as CloudWatch, it is recommended to use virtual private cloud (VPC) private endpoints. This keeps the traffic within the VPC, providing a secure and private channel.

Refer to Create a VPC endpoint to set up the Amazon Bedrock and CloudWatch VPC endpoints.

Create an IAM policy, trust policy, and role

Complete the following steps to create an IAM policy, trust policy, and role that allow only the K8sGPT Operator to interact with Amazon Bedrock, following the principle of least privilege:

  1. Create a role policy with Amazon Bedrock permissions:
cat >k8sgpt-bedrock-permission.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "arn:aws:bedrock:us-west-2::foundation-model/anthropic.claude-3-5-sonnet-20240620-v1:0" 
    }
  ]
}
EOF
  2. Create the permission policy:
aws iam create-policy \
    --policy-name bedrock-k8sgpt-policy \
    --policy-document file://k8sgpt-bedrock-permission.json
  3. Create a trust policy:
cat >k8sgpt-bedrock-Trust-Policy.json <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowEksAuthToAssumeRoleForPodIdentity",
            "Effect": "Allow",
            "Principal": {
                "Service": "pods.eks.amazonaws.com"
            },
            "Action": [
                "sts:AssumeRole",
                "sts:TagSession"
            ]
        }
    ]
}
EOF
  4. Create a role, attach the trust policy, and attach the permission policy:
aws iam create-role \
    --role-name k8sgpt-bedrock \
    --assume-role-policy-document file://k8sgpt-bedrock-Trust-Policy.json
aws iam attach-role-policy --role-name k8sgpt-bedrock --policy-arn=arn:aws:iam::123456789:policy/bedrock-k8sgpt-policy

Install Prometheus

Prometheus will be used for monitoring. Use the following command to install Prometheus using Helm in the k8sgpt-operator-system namespace:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update 
helm install prometheus prometheus-community/kube-prometheus-stack -n k8sgpt-operator-system --create-namespace

Install the K8sGPT Operator through Helm

Install the K8sGPT Operator through Helm with Prometheus and Grafana enabled:

helm upgrade --install release k8sgpt/k8sgpt-operator -n k8sgpt-operator-system --set serviceAccount.annotations."eks.amazonaws.com/role-arn"=arn:aws:iam::123456789:role/k8sgpt-bedrock --set serviceMonitor.enabled=true --set grafanaDashboard.enabled=true

Patch the K8sGPT controller manager to be recognized by the Prometheus operator:

kubectl -n k8sgpt-operator-system patch serviceMonitor release-k8sgpt-operator-controller-manager-metrics-monitor -p '{"metadata":{"labels":{"release":"prometheus"}}}' --type=merge

Associate EKS Pod Identity

EKS Pod Identity is an AWS feature that simplifies how Kubernetes applications obtain IAM permissions by letting cluster administrators associate least-privilege IAM roles with Kubernetes service accounts directly through Amazon EKS. It provides a simple way to allow EKS pods to call AWS services such as Amazon Simple Storage Service (Amazon S3). Refer to Learn how EKS Pod Identity grants pods access to AWS services for more details.

Use the following command to perform the association:

aws eks create-pod-identity-association \
    --cluster-name eks \
    --namespace k8sgpt-operator-system \
    --service-account k8sgpt-k8sgpt-operator-system \
    --role-arn arn:aws:iam::123456789:role/k8sgpt-bedrock

Scan the cluster with Amazon Bedrock as the backend

Complete the following steps:

  1. Deploy a K8sGPT resource using the following YAML, using Anthropic’s Claude 3.5 model on Amazon Bedrock as the backend:
cat > k8sgpt-bedrock.yaml<<EOF
apiVersion: core.k8sgpt.ai/v1alpha1
kind: K8sGPT
metadata:
  name: bedrock
  namespace: k8sgpt-operator-system
spec:
  ai:
    enabled: true
    model: anthropic.claude-3-5-sonnet-20240620-v1:0
    region: us-west-2
    backend: amazonbedrock
    language: english
  noCache: false
  repository: ghcr.io/k8sgpt-ai/k8sgpt
  version: v0.4.12
EOF

kubectl apply -f k8sgpt-bedrock.yaml
  2. When the k8sgpt-bedrock pod is running, use the following command to check the list of scan results:
kubectl get results -n k8sgpt-operator-system
  3. Use the following command to check the details of each scan result:
kubectl get results <scanresult> -n k8sgpt-operator-system -o json
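To post-process scan results outside the cluster, you can parse the JSON that kubectl returns. The following Python sketch uses a hypothetical sample payload; the spec field names follow the K8sGPT Result custom resource at the time of writing and may differ across versions:

```python
import json

# Hypothetical sample of `kubectl get results -n k8sgpt-operator-system -o json`;
# only the fields used below are shown.
sample = json.loads("""
{
  "items": [
    {"metadata": {"name": "defaultbrokenpod"},
     "spec": {"kind": "Pod", "name": "default/broken-pod",
              "error": [{"text": "Back-off pulling image nginx:doesnotexist"}],
              "details": "The image tag does not exist; fix the image reference."}}
  ]
}
""")

def summarize(results):
    """Return one line per scan result: kind, resource, and the first error."""
    lines = []
    for item in results["items"]:
        spec = item["spec"]
        first_error = spec["error"][0]["text"] if spec.get("error") else ""
        lines.append(f"{spec['kind']} {spec['name']}: {first_error}")
    return lines

for line in summarize(sample):
    print(line)
```

Piping `kubectl get results -o json` into a script like this makes it easy to feed scan summaries into ticketing or chat tools.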

Set up Amazon Bedrock invocation logging

Complete the following steps to enable Amazon Bedrock invocation logging, forwarding to CloudWatch or Amazon S3 as log destinations:

  1. Create a CloudWatch log group:
    1. On the CloudWatch console, choose Log groups under Logs in the navigation pane.
    2. Choose Create log group.
    3. Provide details for the log group, then choose Create.

  2. Enable model invocation logging:
    1. On the Amazon Bedrock console, under Bedrock configurations in the navigation pane, choose Settings.
    2. Enable Model invocation logging.
    3. Select which data requests and responses you want to publish to the logs.
    4. Select CloudWatch Logs only under Select the logging destinations and enter the invocation logs group name.
    5. For Choose a method to authorize Bedrock, select Create and use a new role.
    6. Choose Save settings.

Use case: Continuously scan the EKS cluster with the K8sGPT Operator

This section demonstrates how to leverage the K8sGPT Operator for continuous monitoring of your Amazon EKS cluster. By integrating with popular observability tools, the solution provides comprehensive cluster health visibility through two key interfaces: a Grafana dashboard that visualizes scan results and cluster health metrics, and CloudWatch logs that capture detailed AI-powered analysis and recommendations from Amazon Bedrock. This automated approach eliminates the need for manual kubectl commands while ensuring proactive identification and resolution of potential issues. The integration with existing monitoring tools streamlines operations and helps maintain optimal cluster health through continuous assessment and intelligent insights.

Observe the health status of your EKS cluster through Grafana

Run the following command to port-forward the Grafana service, then log in to the Grafana dashboard at localhost:3000 with the default credentials:

kubectl port-forward service/prometheus-grafana -n k8sgpt-operator-system 3000:80

admin-user: admin
admin-password: prom-operator

The following screenshot showcases the K8sGPT Overview dashboard.

The dashboard features the following:

  • The Result Kind types section represents the breakdown of the different Kubernetes resource types, such as services, pods, or deployments, that experienced issues based on the K8sGPT scan results
  • The Analysis Results section represents the number of scan results based on the K8sGPT scan
  • The Results over time section represents the count of scan results change over time
  • The rest of the metrics showcase the performance of the K8sGPT controller over time, which help in monitoring the operational efficiency of the K8sGPT Operator

Use a CloudWatch dashboard to check identified issues and get recommendations

Amazon Bedrock model invocation logs are logged into CloudWatch, which we set up previously. You can use a CloudWatch Logs Insights query to filter model invocation input and output for cluster scan recommendations and output as a dashboard for quick access. Complete the following steps:

  1. On the CloudWatch console, create a dashboard.
  2. Choose the CloudWatch log group and run the following query to filter the scan results produced by the K8sGPT Operator:
fields @timestamp, input.inputBodyJson.prompt, output.outputBodyJson.completion
| sort @timestamp desc
| filter identity.arn like "k8sgpt-bedrock"
  3. Choose Create widget to save the dashboard.

It will automatically show the model invocation log with input and output from the K8sGPT Operator. You can expand the log to check the model input for errors and output for recommendations given by the Amazon Bedrock backend.
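If you export the invocation logs (for example, with the AWS CLI or SDK), you can reproduce the same filtering offline. The following Python sketch runs against hypothetical sample records shaped like Amazon Bedrock model invocation logs; only the fields used below are shown:

```python
# Hypothetical invocation-log records; real Bedrock invocation logs carry many
# more fields than the ones shown here.
records = [
    {"identity": {"arn": "arn:aws:iam::123456789:role/k8sgpt-bedrock"},
     "input": {"inputBodyJson": {"prompt": "Explain: Back-off pulling image ..."}},
     "output": {"outputBodyJson": {"completion": "The image tag is invalid ..."}}},
    {"identity": {"arn": "arn:aws:iam::123456789:role/some-other-role"},
     "input": {"inputBodyJson": {"prompt": "unrelated"}},
     "output": {"outputBodyJson": {"completion": "unrelated"}}},
]

def k8sgpt_invocations(records):
    """Keep only invocations made by the k8sgpt-bedrock role, mirroring the
    `filter identity.arn like "k8sgpt-bedrock"` clause in the Insights query."""
    return [r for r in records if "k8sgpt-bedrock" in r["identity"]["arn"]]

for r in k8sgpt_invocations(records):
    print(r["output"]["outputBodyJson"]["completion"])
```

The same filter logic can drive an offline report of the recommendations the K8sGPT Operator received from Amazon Bedrock.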

Extend K8sGPT with custom analyzers

K8sGPT’s custom analyzers feature enables teams to create specialized checks for their Kubernetes environments, extending beyond the built-in analysis capabilities. This powerful extension mechanism allows organizations to codify their specific operational requirements and best practices into K8sGPT’s scanning process, making it possible to monitor aspects of cluster health that aren’t covered by default analyzers.

You can create custom analyzers to monitor various aspects of your cluster health. For example, you might want to monitor Linux disk usage on nodes – a common operational concern that could impact cluster stability. The following steps demonstrate how to implement and deploy such an analyzer:

First, create the analyzer code:

package analyzer

import (
    "context"
    "fmt"

    rpc "buf.build/gen/go/k8sgpt-ai/k8sgpt/grpc/go/schema/v1/schemav1grpc"
    v1 "buf.build/gen/go/k8sgpt-ai/k8sgpt/protocolbuffers/go/schema/v1"
    "github.com/ricochet2200/go-disk-usage/du"
)

func (a *Handler) Run(context.Context, *v1.RunRequest) (*v1.RunResponse, error) {
    usage := du.NewDiskUsage("/")
    diskUsage := int((usage.Size() - usage.Free()) * 100 / usage.Size())
    return &v1.RunResponse{
        Result: &v1.Result{
            Name:    "diskuse",
            Details: fmt.Sprintf("Disk usage is %d%%", diskUsage),
            Error: []*v1.ErrorDetail{{
                Text: fmt.Sprintf("High disk usage detected: %d%%", diskUsage),
            }},
        },
    }, nil
}

Build your analyzer into a Docker image and deploy it to your cluster:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: diskuse-analyzer
  namespace: k8sgpt-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: diskuse-analyzer
  template:
    metadata:
      labels:
        app: diskuse-analyzer
    spec:
      containers:
      - name: diskuse-analyzer
        image: <your-registry>/diskuse-analyzer:latest
        ports:
        - containerPort: 8085
---
apiVersion: v1
kind: Service
metadata:
  name: diskuse-analyzer
  namespace: k8sgpt-system
spec:
  selector:
    app: diskuse-analyzer
  ports:
    - protocol: TCP
      port: 8085
      targetPort: 8085

Finally, configure K8sGPT to use your custom analyzer:

apiVersion: core.k8sgpt.ai/v1alpha1
kind: K8sGPT
metadata:
  name: k8sgpt-instance
  namespace: k8sgpt-system
spec:
  customAnalyzers:
    - name: diskuse
      connection:
        url: diskuse-analyzer
        port: 8085

This approach allows you to extend K8sGPT’s capabilities while maintaining its integration within the Kubernetes ecosystem. Custom analyzers can be used to implement specialized health checks, security scans, or any other cluster analysis logic specific to your organization’s needs. When combined with K8sGPT’s AI-powered analysis through Amazon Bedrock, these custom checks provide detailed, actionable insights in plain English, helping teams quickly understand and resolve potential issues.

K8sGPT privacy considerations

K8sGPT collects data through its analyzers, including container status messages and pod details, which can be displayed to users or sent to an AI backend when the --explain flag is used. Data sharing with the AI backend occurs only if the user opts in by using this flag and authenticates with the backend. To enhance privacy, you can anonymize sensitive data such as deployment names and namespaces with the --anonymize flag before sharing. K8sGPT doesn’t collect logs or API server data beyond what is necessary for its analysis functions. These practices make sure users have control over their data and that it is handled securely and transparently. For more information, refer to Privacy in the K8sGPT documentation.
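To illustrate the idea behind the --anonymize flag, the following Python sketch masks sensitive identifiers with stable placeholder tokens before text leaves your environment. This is a toy illustration of the concept, not K8sGPT’s actual implementation:

```python
import hashlib

def anonymize(text, secrets):
    """Replace sensitive identifiers (deployment names, namespaces, ...) with
    stable placeholder tokens before sending text to an AI backend.
    Toy illustration only; K8sGPT's --anonymize flag works differently."""
    for s in secrets:
        token = "obj-" + hashlib.sha256(s.encode()).hexdigest()[:8]
        text = text.replace(s, token)
    return text

msg = "Deployment payments-api in namespace prod-billing has 0 ready replicas"
print(anonymize(msg, ["payments-api", "prod-billing"]))
```

Because the tokens are derived deterministically, the same resource maps to the same placeholder across messages, so the AI backend’s advice can still be mapped back to the original objects.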

Clean up

Complete the following steps to clean up your resources:

  1. Run the following command to delete the EKS cluster:
eksctl delete cluster -f cluster-config.yaml
  2. Delete the IAM role (k8sgpt-bedrock).
  3. Delete the CloudWatch logs and dashboard.

Conclusion

The K8sGPT and Amazon Bedrock integration can revolutionize Kubernetes maintenance using AI for cluster scanning, issue diagnosis, and actionable insights. The post discussed best practices for K8sGPT on Amazon Bedrock in CLI and Operator modes and highlighted use cases for simplified cluster management. This solution combines K8sGPT’s SRE expertise with Amazon Bedrock FMs to automate tasks, predict issues, and optimize resources, reducing operational overhead and enhancing performance.

You can use these best practices to identify and implement the most suitable use cases for your specific operational and management needs. By doing so, you can effectively improve Kubernetes management efficiency and achieve higher productivity in your DevOps and SRE workflows.

To learn more about K8sGPT and Amazon Bedrock, refer to the following resources:


About the authors

Angela Wang is a Technical Account Manager based in Australia with over 10 years of IT experience, specializing in cloud-native technologies and Kubernetes. She works closely with customers to troubleshoot complex issues, optimize platform performance, and implement best practices for cost optimized, reliable and scalable cloud-native environments. Her hands-on expertise and strategic guidance make her a trusted partner in navigating modern infrastructure challenges.

Haofei Feng is a Senior Cloud Architect at AWS with over 18 years of expertise in DevOps, IT Infrastructure, Data Analytics, and AI. He specializes in guiding organizations through cloud transformation and generative AI initiatives, designing scalable and secure GenAI solutions on AWS. Based in Sydney, Australia, when not architecting solutions for clients, he cherishes time with his family and Border Collies.

Eva Li is a Technical Account Manager at AWS located in Australia with over 10 years of experience in the IT industry. Specializing in IT infrastructure, cloud architecture and Kubernetes, she guides enterprise customers to navigate their cloud transformation journeys and optimize their AWS environments. Her expertise in cloud architecture, containerization, and infrastructure automation helps organizations bridge the gap between business objectives and technical implementation. Outside of work, she enjoys yoga and exploring Australia’s bush walking trails with friends.

Alex Jones is a Principal Engineer at AWS. His career has focused largely on highly constrained environments for physical and digital infrastructure. Working at companies such as Microsoft, Canonical, and American Express, he has been both an engineering leader and individual contributor. Outside of work he has founded several popular projects such as OpenFeature and more recently the GenAI accelerator for Kubernetes, K8sGPT. Based in London, Alex has a partner and two children.

How Rocket streamlines the home buying experience with Amazon Bedrock Agents

Rocket Companies is a Detroit-based FinTech company with a mission to “Help Everyone Home.” Although known to many as a mortgage lender, Rocket’s mission extends to the entire home ownership journey, from finding the perfect home to purchasing, financing, and using your home equity. Rocket has grown by making the complex simple, empowering clients to navigate the home ownership journey through intuitive, technology-driven solutions. Rocket’s web and mobile app brings together home search, financing, and servicing in one seamless experience. By combining data analytics and their 11 PB of data with advanced automation, Rocket speeds up everything from loan approval to servicing, while maintaining a personalized touch at scale.

Rocket’s client-first approach is central to everything they do. With customizable digital tools and expert guidance from skilled mortgage bankers, Rocket aims to match every client with the right product and the right support quickly, accurately, and securely.

With the advent of generative AI, Rocket recognized an opportunity to go further. Buying a home can still feel overwhelming. This led Rocket to ask: How can we offer the same trusted guidance our clients expect at any hour, on any channel? The result is Rocket AI Agent, a conversational AI assistant designed to transform how clients engage with Rocket’s digital properties. Built on Amazon Bedrock Agents, the Rocket AI Agent combines deep domain knowledge, personalized guidance, and the ability to perform meaningful actions on behalf of clients. Since its launch, it has become a central part of Rocket’s client experience. Clients who interact with Rocket AI Agent are three times more likely to close a loan compared to those who don’t.

Because it’s embedded directly into Rocket’s web and mobile services, it delivers support exactly when and where clients need it. This post explores how Rocket brought that vision to life using Amazon Bedrock Agents, powering a new era of AI-driven support that is consistently available, deeply personalized, and built to take action.

Introducing Rocket AI Agent: A personalized AI homeownership guide

Rocket AI Agent is now available across the majority of Rocket’s web pages and mobile apps. It’s helping clients during loan origination, in servicing, and even within Rocket’s third-party broker system (Rocket Pro), essentially meeting clients wherever they interact with Rocket digitally. The Rocket AI Agent is a purpose-built AI agent designed to do more than answer questions. It delivers real-time, personalized guidance and takes action when needed. It offers:

  • 24/7, multilingual assistance through Rocket’s website and mobile services
  • Contextual awareness. Rocket AI Agent knows what page the client is viewing and tailors its responses based on this context
  • Real-time answers about mortgage options, rates, documents, and processes
  • Guided self-service actions, such as filling out preapproval forms or scheduling payments
  • Personalized experiences using Rocket’s proprietary data and user context
  • Seamless transitions to Rocket Mortgage bankers when human support is needed

Whether someone wants to know why their escrow changed or how to qualify for a refinance, Rocket AI Agent is designed to respond with clarity, confidence, and action.

Amazon Bedrock Agents

Amazon Bedrock Agents is a fully managed, cloud-based capability that customers use to quickly build, test, and scale agentic AI applications on Amazon Web Services (AWS). With built-in integrations and security, customers like Rocket use Amazon Bedrock Agents to accelerate from proof-of-concept to production securely and reliably. These agents extend foundation models (FMs) using the Reasoning and acting (ReAct) framework, allowing them to interpret user intent, plan and execute tasks, and integrate seamlessly with enterprise data and APIs much like a skilled digital assistant.

Agents use the FM to analyze a user’s request, break it into actionable steps, retrieve relevant data, and trigger downstream APIs to complete tasks. This allows Rocket AI Agent to move beyond passive support into proactive assistance, helping clients navigate complex financial processes in real time. Key capabilities of Amazon Bedrock Agents used in Rocket AI Agent include:

  • Agent instructions – Set the agent’s objective and role (for example, a mortgage servicing expert), enabling goal-oriented behavior
  • Amazon Bedrock Knowledge Bases – Provide fast, accurate retrieval of information from Rocket’s Learning Center and other proprietary documents
  • Action group – Define secure operations—such as submitting leads or scheduling payments—that the agent can execute by interacting with Rocket’s backend services
  • Agent memory – Memory retention allows Rocket AI Agent to maintain contextual awareness across multiple turns, enhancing user experience with more natural, personalized interactions
  • Amazon Bedrock Guardrails – Supports Rocket’s responsible AI goals by making sure that the agent stays within appropriate topic boundaries

By combining structured reasoning with the ability to act across systems, Amazon Bedrock Agents empower Rocket AI Agent to deliver outcomes, not just answers.

How the Rocket AI Agent works: Architecture overview

The Rocket AI Agent is a centralized capability deployed across Rocket’s suite of digital properties, designed for scale, flexibility, and job-specific precision. At the core of its architecture is a growing network of domain-specific agents (currently eight), each focused on distinct functions such as loan origination, servicing, or broker support. These agents work together behind a unified interface to provide seamless, context-aware assistance. The following diagram shows the solution architecture.

Here’s how a request flows through Rocket AI Agent’s architecture, along with the supporting components that shape each interaction:

  1. Client initiation: The client uses the chat function within Rocket’s mobile app or web page
  2. Rocket AI Agent API: Rocket’s AI Agent API provides a unified API interface to the agents supporting the chat functionality
  3. Agent routing: The AI Agent API routes the request to the correct Amazon Bedrock agent based on static criteria, such as the web or mobile property through which the client entered the chat, or through LLM-based intent identification
  4. Agent processing: The agent breaks the task into subtasks, determines the right sequence, and executes actions and knowledge as it works
  5. Task execution: The agent uses Rocket data in knowledge bases to find information, send results to the client, and perform actions to get work done
  6. Guardrails: Enforce Rocket’s responsible AI policies by blocking topics and language that deviate from the goals of the experience
  7. Prompt management: Helps Rocket manage a library of prompts for its AI agents and optimize prompts for particular FMs
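The routing in step 3 can be sketched as a simple lookup: static routes keyed by the digital property the client entered through, with a keyword-based fallback standing in for LLM-based intent identification. The property names, agent names, and keywords below are hypothetical, not Rocket’s actual configuration.

```python
# Hypothetical routing table for step 3: static routes first, intent fallback second.
AGENT_ROUTES = {
    "servicing-portal": "servicing-agent",  # web/mobile property -> agent (static criteria)
    "broker-portal": "broker-agent",
}
INTENT_KEYWORDS = {
    "refinance": "refinance-agent",  # stand-in for LLM-based intent identification
    "escrow": "servicing-agent",
    "payment": "payment-agent",
}

def route_request(entry_property: str, message: str) -> str:
    """Pick the Amazon Bedrock agent that should handle this chat message."""
    # Static criteria: the web or mobile property the client entered through
    if entry_property in AGENT_ROUTES:
        return AGENT_ROUTES[entry_property]
    # Fallback: crude intent identification over the message text
    text = message.lower()
    for keyword, agent in INTENT_KEYWORDS.items():
        if keyword in text:
            return agent
    return "general-agent"
```

In production the fallback would be an LLM classifier rather than keyword matching, but the routing contract stays the same: property or intent in, agent identifier out.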

This modular, scalable design has allowed Rocket to serve diverse client needs efficiently and consistently across services and across the homeownership lifecycle.

Impact and outcomes

Since launching Rocket AI Agent, we’ve seen transformative improvements across the client journey and internal operations:

  • Threefold increase in conversion rates from web traffic to closed loans, as Rocket AI Agent captures leads around the clock even outside traditional business hours.
  • Operational efficiency gains, particularly through chat containment. With the implementation of the AI assistant to support prospective clients exploring Rocket’s offerings, Rocket saw an 85% decrease in transfer to customer care and a 45% decrease in transfer to servicing specialists. This reduction in handoffs to human agents has freed up team capacity to focus on more complex, high-impact client needs.
  • Higher customer satisfaction (CSAT) scores, with 68% of clients providing high satisfaction ratings across servicing and origination chat interactions. Top drivers include quick response times, clear communication, and accurate information, all contributing to greater client trust and reduced friction.
  • Stronger client engagement, with users completing more tasks independently, driven by intuitive, personalized self-service capabilities.
  • Greater personalization and flexibility. Rocket AI Agents adapt to each client’s stage in the homeownership journey and their preferences, offering the ability to escalate to a banker on their terms. This personalized support reflects Rocket’s core mission to “Help Everyone Home,” by meeting clients where they are and giving them the confidence to move forward.
  • Expanded language support, including Spanish-language assistance, to better serve a diverse and growing demographic.

Rocket has deployed Rocket AI Agent across its digital services, including the servicing portal and third-party broker systems, providing continuity of experience wherever clients engage. By delivering consistent, on-brand support across these touchpoints, Rocket is transforming the way clients experience homeownership. Through the personalization capabilities of Amazon Bedrock Agents, Rocket can tailor every interaction to a client’s context and preferences, bringing its mission to “Help Everyone Home” to life through scalable, intelligent engagement.

Lessons learned

Throughout the development and deployment of the Rocket AI Agent, the Rocket team uncovered several key lessons that shaped both its technical strategy and the overall client experience. These insights can serve as valuable guidance for other organizations building generative AI applications at scale:

  • Curate your data carefully: The quality of responses generated by generative AI is closely tied to the quality and structure of its source data. Rocket built its enterprise knowledge base using Amazon Bedrock Knowledge Bases, which internally uses Amazon Kendra for retrieval across Rocket’s content libraries, including FAQs, compliance documents, and servicing workflows.
  • Limit the agent’s scope per task: Rocket found that assigning each agent a tight scope of 3–5 actions led to more maintainable, testable, and high-performing agents. For example, the payment agent focuses only on tasks like scheduling payments and providing due dates, while the refinance agent handles rate simulations and lead capture. Each agent’s capabilities are implemented as Amazon Bedrock action groups with well-documented interfaces, and each agent’s task resolution rate is monitored separately.
  • Prioritize graceful escalation: Escalation isn’t failure; it’s a critical part of building user trust. Rocket implemented uncertainty thresholds using confidence scores and specific keyword triggers to detect when an interaction might require human assistance. In those cases, Rocket AI Agent proactively transitions the session to a live support agent or gives the user the option to escalate. This avoids frustrating conversational loops and makes sure that complex or sensitive interactions receive the appropriate level of human care.
  • Expect user behavior to evolve: Real-world usage is dynamic. Clients will interact with the system in unexpected ways, and patterns change over time. Investing in observability and user feedback loops is essential for adapting quickly.
  • Use cross-Region inference from the start: To provide scalable, resilient model performance, Rocket enabled cross-Region inference early in development. This allows inference requests to be routed to the optimal AWS Region within the supported geography, improving latency and model availability by automatically distributing load based on capacity. During peak traffic windows, such as product launches or interest rate shifts, this architecture has allowed Rocket to avoid Regional service quota bottlenecks, maintain responsiveness, and increase throughput by taking advantage of compute capacity across multiple AWS Regions. The result is a smoother, more consistent user experience even under bursty, unpredictable load conditions.
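Mechanically, cross-Region inference is enabled by addressing the model through an inference profile ID rather than a bare model ID: a geography prefix ("us." or "eu.") tells Amazon Bedrock it may route the request to any supported Region in that geography. A trivial sketch (the model ID shown is illustrative):

```python
def to_inference_profile_id(model_id: str, geo: str = "us") -> str:
    """Prefix a Bedrock model ID with a geography ("us" or "eu") so
    requests can be routed cross-Region via an inference profile."""
    return f"{geo}.{model_id}"

# The prefixed ID is passed as modelId to the bedrock-runtime Converse API
profile_id = to_inference_profile_id("mistral.pixtral-large-2502-v1:0")
# -> "us.mistral.pixtral-large-2502-v1:0"
```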

These lessons are a reminder that although generative AI can unlock powerful capabilities, thoughtful implementation is key to delivering sustainable value and trusted experiences.

What’s next: Moving toward multi-agent collaboration

Rocket is just beginning to realize the potential of agentic AI. Building on the success of domain-specific agents, the next phase focuses on scaling these capabilities through multi-agent collaboration powered by Amazon Bedrock Agents. This evolution will allow Rocket to orchestrate agents across domains and deliver intelligent, end-to-end experiences that mirror the complexity of real client journeys.

By enabling agents to work together seamlessly, Rocket is laying the groundwork for a future where AI not only responds to questions but proactively navigates entire workflows from discovery and qualification to servicing and beyond.

Benefits for Rocket

Multi-agent collaboration marks a transformative step forward in Rocket’s journey to build agentic AI–powered experiences that reimagine homeownership from the very first question to the final signature. By enabling multiple specialized agents to coordinate within a single conversation, Rocket can unlock a new level of intelligence, automation, and personalization across its digital services.

  • End-to-end personalization: By allowing multiple domain-specific agents (such as refinance, servicing, and loan options) to share context and coordinate, Rocket can deliver more tailored, intelligent responses that evolve with the client’s homeownership journey in real time.
  • Back-office integration: With agents capable of invoking secure backend APIs and workflows, Rocket can begin to automate parts of its back-office operations, such as document verification, follow-ups, and lead routing, improving speed, accuracy, and operational efficiency.
  • Context switching: Move fluidly between servicing, origination, and refinancing within one chat.
  • Orchestration: Handle multistep tasks that span multiple Rocket business units.

With multi-agent orchestration, Rocket is laying the foundation for a consistently available, deeply personalized assistant that not only answers questions but drives meaningful outcomes from home search to loan closing and beyond. It represents the next chapter in Rocket’s mission to “Help Everyone Home.”

Conclusion

Rocket AI Agent is more than a digital assistant. It’s a reimagined approach to client engagement, powered by agentic AI. By combining Amazon Bedrock Agents with Rocket’s proprietary data and backend systems, Rocket has created a smarter, more scalable, and more human experience available 24/7, without the wait.

To dive deeper into building intelligent, multi-agent applications with Amazon Bedrock Agents, explore the AWS workshop, Unified User Experiences with Hierarchical Multi-Agent Collaboration. This hands-on workshop includes open source code and best practices drawn from real-world financial services implementations, demonstrating how multi-agent systems can automate complex workflows to deliver next-generation customer experience.

Rocket puts it simply: “Together with AWS, we’re getting started. Our goal is to empower every client to move forward with confidence and, ultimately, to Help Everyone Home.”


About the authors

Manali Sapre is a Senior Director at Rocket Mortgage, bringing over 20 years of experience leading transformative technology initiatives across the company. She has been at the forefront of innovation—spearheading Rocket’s first-generation AI chat platform, building the company’s original digital mortgage application, and launching scalable lead generation systems. Manali has also led multiple AI-driven initiatives focused on banker efficiency and internal productivity, helping to embed smart, human-centric technology into the daily workflows of team members. Her passion lies in solving complex challenges through collaboration, mentoring the next generation of tech leaders, and creating intuitive, high-impact experiences. Outside of work, Manali enjoys hiking, traveling, and spending quality time with her family.

Seshidhar Raghupathi is a software architect at Rocket with over 12 years of experience driving innovation, scalability, and system resilience across AI and client communication platforms. He was instrumental in developing Rocket’s first cloud-based digital mortgage application and has since led several impactful initiatives to enhance intelligent, personalized client experiences. His expertise spans backend architecture, AI integration, platform modernization, and cross-team enablement. He is known for his ability to execute tactically while aligning with long-term strategic goals, particularly in enhancing security, scalability, and user experience. Outside of work, Seshi enjoys spending time with family, playing sports, and connecting with friends.

Venkata Santosh Sajjan Alla is a Senior Solutions Architect at AWS Financial Services, driving AI-led transformation across North America’s FinTech sector. He partners with organizations to design and execute cloud and AI strategies that speed up innovation and deliver measurable business impact. His work has consistently translated into millions in value through enhanced efficiency and additional revenue streams. With deep expertise in AI/ML, Generative AI, and cloud-native architectures, Sajjan enables financial institutions to achieve scalable, data-driven outcomes. When not architecting the future of finance, he enjoys traveling and spending time with family. Connect with him on LinkedIn.

Axel Larsson is a Principal Solutions Architect at AWS based in the greater New York City area. He supports FinTech customers and is passionate about helping them transform their business through cloud and AI technology. Outside of work, he is an avid tinkerer and enjoys experimenting with home automation.

Build an MCP application with Mistral models on AWS

This post is cowritten with Siddhant Waghjale and Samuel Barry from Mistral AI.

Model Context Protocol (MCP) is a standard that has been gaining significant traction in recent months. At a high level, it consists of a standardized interface designed to streamline and enhance how AI models interact with external data sources and systems. Instead of hardcoding retrieval and action logic or relying on one-time tools, MCP offers a structured way to pass contextual data (for example, user profiles, environment metadata, or third-party content) into a large language model (LLM) context and to route model outputs to external systems. For developers, MCP abstracts away integration complexity and creates a unified layer for injecting external knowledge and executing model actions, making it more straightforward to build robust and efficient agentic AI systems that remain decoupled from data-fetching logic.

Mistral AI is a frontier research lab that emerged in 2023 as a leading open source contender in the field of generative AI. Mistral has released many state-of-the-art models, from Mistral 7B and Mixtral in the early days up to the recently announced Mistral Medium 3 and Small 3—effectively popularizing the mixture of expert architecture along the way. Mistral models are generally described as extremely efficient and versatile, frequently reaching state-of-the-art levels of performance at a fraction of the cost. These models are now seamlessly integrated into Amazon Web Services (AWS) services, unlocking powerful deployment options for developers and enterprises. Through Amazon Bedrock, users can access Mistral models using a fully managed API, enabling rapid prototyping without managing infrastructure. Amazon Bedrock Marketplace further extends this by allowing quick model discovery, licensing, and integration into existing workflows. For power users seeking fine-tuning or custom training, Amazon SageMaker JumpStart offers a streamlined environment to customize Mistral models with their own data, using the scalable infrastructure of AWS. This integration makes it faster than ever to experiment, scale, and productionize Mistral models across a wide range of applications.

This post demonstrates building an intelligent AI assistant using Mistral AI models on AWS and MCP, integrating real-time location services, time data, and contextual memory to handle complex multimodal queries. This use case, restaurant recommendations, serves as an example, but this extensible framework can be adapted for enterprise use cases by modifying MCP server configurations to connect with your specific data sources and business systems.

Solution overview

This solution uses Mistral models on Amazon Bedrock to understand user queries and route the query to relevant MCP servers to provide accurate and up-to-date answers. The system follows this general flow:

  1. User input – The user sends a query (text, image, or both) through either a terminal-based or web-based Gradio interface
  2. Image processing – If an image is detected, the system processes and optimizes it for the AI model
  3. Model request – The query is sent to the Amazon Bedrock Converse API with appropriate system instructions
  4. Tool detection – If the model determines it needs external data, it requests a tool invocation
  5. Tool execution – The system routes the tool request to the appropriate MCP server and executes it
  6. Response generation – The model incorporates the tool’s results to generate a comprehensive response
  7. Response delivery – The final answer is displayed to the user
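Steps 3 and 4 center on a single Converse API request that carries the system instructions, conversation history, and the tool specifications gathered from the MCP servers. The following is a minimal sketch of assembling that request; the helper name and model ID are illustrative, and the resulting dict is what you would pass to the bedrock-runtime converse call in boto3.

```python
def build_converse_request(model_id: str, user_text: str, tools: dict) -> dict:
    """Assemble the Amazon Bedrock Converse API request for steps 3-4."""
    return {
        "modelId": model_id,
        "system": [{"text": "You are a helpful assistant."}],
        "messages": [{"role": "user", "content": [{"text": user_text}]}],
        "toolConfig": tools,  # {"tools": [...]} built from the MCP servers
    }

request = build_converse_request(
    "us.mistral.pixtral-large-2502-v1:0",
    "Find a highly rated ramen spot near Shibuya.",
    {"tools": []},
)
# response = boto3.client("bedrock-runtime").converse(**request)
# If response["stopReason"] == "tool_use", steps 5-6 execute the requested
# tool on the matching MCP server and feed the result back to the model.
```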

In this example, we demonstrate the MCP framework using a general use case of restaurant or location recommendation and route planning. Users can provide multimodal input (such as text plus image), and the application integrates Google Maps, Time, and Memory MCP servers. Additionally, this post showcases how to use the Strands Agent framework as an alternative approach to build the same MCP application with significantly reduced complexity and code. Strands Agent is an open source, multi-agent coordination framework that simplifies the development of intelligent, context-aware agent systems across various domains. You can build your own MCP application by modifying the MCP server configurations to suit your specific needs. You can find the complete source code for this example in our Git repository. The following diagram shows the solution architecture.

MCP Module architecture with Host, Clients, Servers components bridging UI and Bedrock foundation models

Prerequisites

Before implementing the example, you need to set up the account and environment. Use the following steps.

To set up the AWS account:

  1. Create an AWS account. If you don’t already have one, sign up at https://aws.amazon.com
  2. To enable Amazon Bedrock access, go to the Amazon Bedrock console and request access to the models you plan to use (for this walkthrough, request access to Mistral Pixtral Large). Alternatively, deploy the Mistral Small 3 model from Amazon Bedrock Marketplace (for more details, refer to the Mistral model deployments on AWS section later in this post). When your request is approved, you’ll be able to use these models through the Amazon Bedrock Converse API

To set up the local environment:

  1. Install the required tools:
    1. Python 3.10 or later
    2. Node.js (required for MCP tool servers)
    3. AWS Command Line Interface (AWS CLI), which is needed for configuration
  2. Clone the repository:
git clone https://github.com/aws-samples/mistral-on-aws.git
cd mistral-on-aws/MCP/MCP_Mistral_app_demo/
  3. Install Python dependencies:
pip install -r requirements.txt
  4. Configure AWS credentials:
aws configure

Then enter your AWS access key ID, secret access key, and preferred AWS Region.

  5. Set up MCP tool servers. The server configurations are provided in the file server_configs.py. The system uses Node.js-based MCP servers, which are installed automatically through npm the first time you run the application. You can add other MCP server configurations in this file, so the solution can be quickly modified and extended to meet your business requirements.

Mistral model deployments on AWS

Mistral models can be accessed or deployed using the following methods. To use foundation models (FMs) in MCP applications, the models must support tool use functionality.

Amazon Bedrock serverless (Pixtral Large)

To enable this model, follow these steps:

  1. Go to the Amazon Bedrock console.
  2. From the left navigation pane, select Model access.
  3. Choose Manage model access.
  4. Search for the model using the keyword Pixtral, select it, and choose Next, as shown in the following screenshot. The model will then be ready to use.

This model has cross-Region inference enabled. When using the model ID, always add the Region prefix eu or us before the model ID, such as eu.mistral.pixtral-large-2502-v1:0. Provide this model ID in config.py. You can now test the example with the Gradio web-based app.
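For reference, config.py might look like the following. The key names are assumptions based on how the file is used elsewhere in this post; check the sample repository for the exact structure.

```python
# config.py (sketch): model and Region settings read by the Gradio app.
# Key names are assumptions; adjust to match the repository.
AWS_CONFIG = {
    # Cross-Region inference profile ID: note the "us." (or "eu.") prefix
    "model_id": "us.mistral.pixtral-large-2502-v1:0",
    "region": "us-west-2",
}
```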

Amazon Bedrock interface for managing base model access with Pixtral Large model highlighted

Amazon Bedrock Marketplace (Mistral-Small-24B-Instruct-2501)

Amazon Bedrock Marketplace and SageMaker JumpStart deployments are dedicated instances (serverful) and incur charges as long as the instance remains deployed. For more information, refer to Amazon Bedrock pricing and Amazon SageMaker pricing.

To enable this model, follow these steps:

  1. Go to the Amazon Bedrock console
  2. In the left navigation pane, select Model catalog
  3. In the search bar, search for “Mistral-Small-24B-Instruct-2501,” as shown in the following screenshot

Amazon Bedrock UI with model catalog, filters, and Mistral-Small-24B-Instruct-2501 model spotlight

  4. Select the model and choose Deploy.
  5. On the configuration page, you can keep all fields as default. This endpoint requires an ml.g6.12xlarge instance type. Check your service quotas under the Amazon SageMaker service to make sure you have more than two instances available for endpoint usage (you’ll use another instance for the Amazon SageMaker JumpStart deployment). If you don’t have more than two instances, request a quota increase for this instance type. Then choose Deploy. The model deployment might take a few minutes.
  6. When the model is in service, copy the endpoint Amazon Resource Name (ARN), as shown in the following screenshot, and add it to the config.py file in the model_id field. Then you can test the solution with the Gradio web-based app.

The Mistral-Small-24B-Instruct-2501 model doesn’t support image input, so only text-based Q&A is supported.

AWS Bedrock marketplace deployments interface with workflow steps and active Mistral endpoint

Amazon SageMaker JumpStart (Mistral-Small-24B-Instruct-2501)

To enable this model, follow these steps:

  1. Go to the Amazon SageMaker console
  2. Create a domain and user profile
  3. Under the created user profile, launch Studio
  4. In the left navigation pane, select JumpStart, then search for “Mistral”
  5. Select Mistral-Small-24B-Instruct-2501, then choose Deploy

This deployment might take a few minutes. The following screenshot shows that this model is marked as Bedrock ready. This means you can register this model as an Amazon Bedrock Marketplace deployment and use Amazon Bedrock APIs to invoke this Amazon SageMaker endpoint.

Dark-themed SageMaker dashboard displaying Mistral AI models with Bedrock ready status

  6. After the model is in service, copy its endpoint ARN from the Amazon Bedrock Marketplace deployment, as shown in the following screenshot, and provide it to the config.py file in the model_id field. Then you can test the solution with the Gradio web-based app.

The Mistral-Small-24B-Instruct-2501 model doesn’t support image input, so only text-based Q&A is supported.

SageMaker real-time inference endpoint for Mistral small model with AllTraffic variant on ml.g6 instance

Build an MCP application with Mistral models on AWS

The following sections provide detailed insights into building MCP applications from the ground up using a component-level approach. We explore how to implement the three core MCP components: the MCP host, MCP clients, and MCP servers. This gives you complete control and understanding of the underlying architecture.

MCP host component

The MCP is designed to facilitate seamless interaction between AI models and external tools, systems, and data sources. In this architecture, the MCP host plays a pivotal role in managing the lifecycle and orchestration of MCP clients and servers, enabling AI applications to access and utilize external resources effectively. The MCP host is responsible for integration with FMs, providing context, capabilities discovery, initialization, and MCP client management. In this solution, we have three files to provide this capability.

The first file is agent.py. The BedrockConverseAgent class in agent.py is the core component that manages communication with Amazon Bedrock and provides the FM integration. The constructor initializes the agent with model settings and sets up the Amazon Bedrock client.

def __init__(self, model_id, region, system_prompt='You are a helpful assistant.'):
    """
    Initialize the Bedrock agent with model configuration.
    
    Args:
        model_id (str): The Bedrock model ID to use
        region (str): AWS region for Bedrock service
        system_prompt (str): System instructions for the model
    """
    self.model_id = model_id
    self.region = region
    self.client = boto3.client('bedrock-runtime', region_name=self.region)
    self.system_prompt = system_prompt
    self.messages = []
    self.tools = None

Then, the agent intelligently handles multimodal inputs with its image processing capabilities. This method validates image URLs provided by the user, downloads images, detects and normalizes image formats, resizes large images to meet API constraints, and converts incompatible formats to JPEG.

async def _fetch_image_from_url(self, image_url):
    # Download image from URL
    # Process and optimize for model compatibility
    # Return binary image data with MIME type
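One possible shape for this method, using only the standard library: the download runs in a worker thread so the event loop isn’t blocked, and the MIME type is sniffed from the file’s magic bytes. The real implementation also resizes and converts images, which would need an imaging library such as Pillow; that part is omitted here.

```python
import asyncio
import urllib.request

def detect_mime(data: bytes) -> str:
    """Sniff a common image MIME type from the file's magic bytes."""
    if data.startswith(b"\xff\xd8\xff"):
        return "image/jpeg"
    if data.startswith(b"\x89PNG\r\n\x1a\n"):
        return "image/png"
    if data[:6] in (b"GIF87a", b"GIF89a"):
        return "image/gif"
    if data[:4] == b"RIFF" and data[8:12] == b"WEBP":
        return "image/webp"
    return "application/octet-stream"

async def fetch_image_from_url(image_url: str) -> tuple[bytes, str]:
    """Download an image and return (binary data, MIME type)."""
    def _download() -> bytes:
        with urllib.request.urlopen(image_url) as resp:
            return resp.read()
    # Run the blocking download off the event loop
    data = await asyncio.to_thread(_download)
    return data, detect_mime(data)
```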

When users enter a prompt, the agent detects whether it contains an uploaded image or an image URL and processes it accordingly in the invoke_with_prompt function. This way, users can paste an image URL in their query or upload an image from their local device and have it analyzed by the AI model.

async def invoke_with_prompt(self, prompt, image_input=None):
    # Check whether the prompt contains an image URL
    has_image_url, image_url = self._is_image_url(prompt)
    if image_input:
        # First check for a direct image upload
        ...
    elif has_image_url:
        # Then check for an image URL in the prompt
        ...
    else:
        # Standard text-only prompt
        content = [{'text': prompt}]
    return await self.invoke(content)

The most powerful feature is the agent’s ability to use external tools provided by MCP servers. When the model wants to use a tool, the agent detects the tool_use stop reason from Amazon Bedrock and extracts the tool request details, including names and inputs. It then executes the tool through the UtilityHelper, and the tool results are returned to the model. The MCP host then continues the conversation with the tool results incorporated.

async def _handle_response(self, response):
    # Add the response to the conversation history
    self.messages.append(response['output']['message'])
    # Check the stop reason
    stop_reason = response['stopReason']
    if stop_reason == 'tool_use':
        # Extract tool use details and execute
        tool_response = []
        for content_item in response['output']['message']['content']:
            if 'toolUse' in content_item:
                tool_request = {
                    "toolUseId": content_item['toolUse']['toolUseId'],
                    "name": content_item['toolUse']['name'],
                    "input": content_item['toolUse']['input']
                }
                tool_result = await self.tools.execute_tool(tool_request)
                tool_response.append({'toolResult': tool_result})
        # Continue conversation with tool results
        return await self.invoke(tool_response)

The second file is utility.py. The UtilityHelper class in utility.py serves as a bridge between Amazon Bedrock and external tools. It manages tool registration, formats tool specifications for Amazon Bedrock compatibility, and handles tool execution.

def register_tool(self, name, func, description, input_schema):
    corrected_name = UtilityHelper._correct_name(name)
    self._name_mapping[corrected_name] = name
    self._tools[corrected_name] = {
        "function": func,
        "description": description,
        "input_schema": input_schema,
        "original_name": name,
    }
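The _correct_name helper exists because MCP tool names (for example, maps.search-places) can contain characters that the Amazon Bedrock Converse API doesn’t accept in a toolSpec name. Here is a plausible implementation; the exact character rule is an assumption, so verify it against the repository.

```python
import re

def correct_tool_name(name: str) -> str:
    """Map an MCP tool name to a Bedrock-friendly identifier.

    Assumption: tool names should contain only letters, digits, and
    underscores, and should not start with a digit.
    """
    corrected = re.sub(r"[^a-zA-Z0-9_]", "_", name)
    # Identifiers should not begin with a digit
    if corrected and corrected[0].isdigit():
        corrected = "_" + corrected
    return corrected

print(correct_tool_name("maps.search-places"))  # maps_search_places
```

The original MCP name is kept in the _name_mapping dictionary shown above, so the corrected name can be translated back when the tool is actually invoked on its server.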

For Amazon Bedrock to understand available tools from MCP servers, the utility module generates tool specifications by providing name, description, and inputSchema in the following function:

def get_tools(self):
    tool_specs = []
    for corrected_name, tool in self._tools.items():
        # Ensure the inputSchema.json.type is explicitly set to 'object'
        input_schema = tool["input_schema"].copy()
        if 'json' in input_schema and 'type' not in input_schema['json']:
            input_schema['json']['type'] = 'object'
        tool_specs.append(
            {
                "toolSpec": {
                    "name": corrected_name,
                    "description": tool["description"],
                    "inputSchema": input_schema,
                }
            }
        )
    return {"tools": tool_specs}

When the model requests a tool, the utility module executes it and formats the result:

async def execute_tool(self, payload):
    tool_use_id = payload["toolUseId"]
    corrected_name = payload["name"]
    tool_input = payload["input"]
    # Find and execute the tool
    tool_func = self._tools[corrected_name]["function"]
    original_name = self._tools[corrected_name]["original_name"]
    # Execute the tool
    result_data = await tool_func(original_name, tool_input)
    # Format and return the result
    return {
        "toolUseId": tool_use_id,
        "content": [{"text": str(result_data)}],
    }

The final component in the MCP host is the gradio_app.py file, which implements a web-based interface for our AI assistant using Gradio. First, it initializes the model configuration and the agent, then connects to the MCP servers and retrieves the available tools.

async def initialize_agent():
  """Initialize Bedrock agent and connect to MCP tools"""
  # Initialize model configuration from config.py
  model_id = AWS_CONFIG["model_id"]
  region = AWS_CONFIG["region"]
  # Set up the agent and tool manager
  agent = BedrockConverseAgent(model_id, region)
  agent.tools = UtilityHelper()
  # Define the agent's behavior through system prompt
  agent.system_prompt = """
  You are a helpful assistant that can use tools to help you answer questions and perform tasks.
  Please remember and save user's preferences into memory based on user questions and conversations.
  """
  # Connect to MCP servers and register tools
  # ...
  return agent, mcp_clients, available_tools

When a user sends a message, the app processes it through the agent’s invoke_with_prompt() function, and the model’s response is displayed in the Gradio UI:

async def process_message(message, history):
  """Process a message from the user and get a response from the agent"""
  global agent
  if agent is None:
      # First-time initialization
      agent, mcp_clients, available_tools = await initialize_agent()
  try:
      # Process message and get response
      response = await agent.invoke_with_prompt(message)
      # Return the response
      return response
  except Exception as e:
      logger.error(f"Error processing message: {e}")
      return f"I encountered an error: {str(e)}"

MCP client implementation

MCP clients serve as intermediaries between the AI model and the MCP server. Each client maintains a one-to-one session with a server, managing the lifecycle of interactions, including handling interruptions, timeouts, and reconnections. MCP clients route protocol messages bidirectionally between the host application and the server. They parse responses, handle errors, and make sure that the data is relevant and appropriately formatted for the AI model. They also facilitate the invocation of tools exposed by the MCP server and manage the context so that the AI model has access to the necessary resources and tools for its tasks.

The following function in the mcpclient.py file is designed to establish connections to MCP servers and manage connection sessions.

async def connect(self):
  """
  Establishes connection to MCP server.
  Sets up stdio client, initializes read/write streams,
  and creates client session.
  """
  # Initialize stdio client with server parameters
  self._client = stdio_client(self.server_params)
  # Get read/write streams
  self.read, self.write = await self._client.__aenter__()
  # Create and initialize session
  session = ClientSession(self.read, self.write)
  self.session = await session.__aenter__()
  await self.session.initialize()
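Because connect() enters the stdio transport and the session context managers by hand, a matching teardown is needed when the client shuts down, exiting them in reverse order. The following is a minimal sketch, not part of the MCP SDK: the class name and constructor are illustrative stand-ins that mirror the _client and session attributes set in connect().

```python
class MCPClientTeardown:
    """Illustrative counterpart to connect(): exits the session and the
    stdio transport in reverse order of acquisition."""

    def __init__(self, client, session):
        self._client = client   # stdio transport context manager
        self.session = session  # MCP client session context manager

    async def disconnect(self):
        # Close the session first, then the underlying transport
        if self.session:
            await self.session.__aexit__(None, None, None)
            self.session = None
        if self._client:
            await self._client.__aexit__(None, None, None)
            self._client = None
```

In the full mcpclient.py, this logic would live on the same class as connect() so that every entered context manager is reliably exited.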

After it’s connected to the MCP servers, the client lists the available tools from each server along with their specifications:

async def get_available_tools(self):
    """List available tools from the MCP server."""
    if not self.session:
        raise RuntimeError("Not connected to MCP server")
    response = await self.session.list_tools()
    # Extract and format tools
    tools = response.tools if hasattr(response, 'tools') else []
    formatted_tools = [
        {
            'name': tool.name,
            'description': str(tool.description),
            'inputSchema': {
                'json': {
                    'type': 'object',
                    'properties': tool.inputSchema.get('properties', {}),
                    'required': tool.inputSchema.get('required', [])
                }
            }
        }
        for tool in tools
    ]
    return formatted_tools

When a tool is called, the client first validates that the session is active, then executes the tool through the MCP session established between the client and server. Finally, it returns a structured response.

async def call_tool(self, tool_name, arguments):
    # Validate that the session is active before executing
    if not self.session:
        raise RuntimeError("Not connected to MCP server")
    # Execute tool
    start_time = time.time()
    result = await self.session.call_tool(tool_name, arguments=arguments)
    execution_time = time.time() - start_time
    # Augment result with server info (attributes set when the client was created)
    return {
        "result": result,
        "tool_info": {
            "tool_name": tool_name,
            "server_name": self.server_name,
            "server_info": self.server_info,
            "execution_time": f"{execution_time:.2f}s"
        }
    }

MCP server configuration

The server_configs.py file defines the MCP tool servers that our application will connect to. This configuration sets up the Google Maps MCP server with an API key, adds a time server for date and time operations, and includes a memory server for storing conversation context. Each server is defined as a StdioServerParameters object, which specifies how to launch the server process using Node.js (via npx). You can add or remove MCP server configurations based on your application objectives and requirements.

from mcp import StdioServerParameters
SERVER_CONFIGS = [
    StdioServerParameters(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-google-maps"],
        env={"GOOGLE_MAPS_API_KEY": "<ADD_GOOGLE_API_KEY>"}
    ),
    StdioServerParameters(
        command="npx",
        args=["-y", "time-mcp"],
    ),
    StdioServerParameters(
        command="npx",
        args=["@modelcontextprotocol/server-memory"]
    ),
]
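As an example of extending this configuration, an entry for the reference filesystem MCP server could be appended to the SERVER_CONFIGS list from server_configs.py. The server package name is real, but the directory path below is a placeholder you would replace with your own:

```python
from mcp import StdioServerParameters

# Hypothetical additional entry: the reference filesystem MCP server,
# scoped to a single directory (the path is a placeholder)
SERVER_CONFIGS.append(
    StdioServerParameters(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp/agent-workspace"],
    )
)
```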

Alternative implementation: Strands Agents framework

For developers seeking a more streamlined approach to building MCP-powered applications, the Strands Agents framework provides an alternative that significantly reduces implementation complexity while maintaining full MCP compatibility. This section demonstrates how the same functionality can be achieved with substantially less code using Strands Agents. The code sample is available in this Git repository.

First, initialize the model and provide the Mistral model ID on Amazon Bedrock.

from strands import Agent
from strands.tools.mcp import MCPClient
from strands.models import BedrockModel
# Initialize the Bedrock model
bedrock_model = BedrockModel(
    model_id="us.mistral.pixtral-large-2502-v1:0",
    streaming=False
)

The following code creates multiple MCP clients from server configurations, automatically manages their lifecycle using context managers, collects available tools from each client, and initializes an AI agent with the unified set of tools.

from contextlib import ExitStack
from mcp import stdio_client
# Create MCP clients with automatic lifecycle management
mcp_clients = [
    MCPClient(lambda cfg=server_config: stdio_client(cfg))
    for server_config in SERVER_CONFIGS
]
with ExitStack() as stack:
    # Enter all MCP clients automatically
    for mcp_client in mcp_clients:
        stack.enter_context(mcp_client)
    
    # Aggregate tools from all clients
    tools = []
    for i, mcp_client in enumerate(mcp_clients):
        client_tools = mcp_client.list_tools_sync()
        tools.extend(client_tools)
        logger.info(f"Loaded {len(client_tools)} tools from client {i+1}")
    
    # Create agent with unified tool registry
    agent = Agent(model=bedrock_model, tools=tools, system_prompt=system_prompt)

The following function processes user messages with optional image inputs by formatting them for multimodal AI interaction, sending them to an agent that handles tool routing and response generation, and returning the agent’s text response:

def process_message(message, image=None):
    """Process user message with optional image input"""
    try:
        if image is not None:
            # Convert PIL image to Bedrock format
            image_data = convert_image_to_bytes(image)
            if image_data:
                # Create multimodal message structure
                multimodal_message = {
                    "role": "user",
                    "content": [
                        {
                            "image": {
                                "format": image_data['format'],
                                "source": {"bytes": image_data['bytes']}
                            }
                        },
                        {
                            "text": message if message.strip() else "Please analyze the content of the image."
                        }
                    ]
                }
                agent.messages.append(multimodal_message)
        
        # Single call handles tool routing and response generation
        response = agent(message)
        
        # Extract response content
        return response.text if hasattr(response, 'text') else str(response)
        
    except Exception as e:
        return f"Error: {str(e)}"

The Strands Agents approach streamlines MCP integration by reducing code complexity, automating resource management, and unifying tools from multiple servers into a single interface. It also offers built-in error handling and native multimodal support, minimizing manual effort and enabling more robust, efficient development.

Demo

This demo showcases an intelligent food recognition application with integrated location services. Users can submit an image of a dish, and the AI assistant:

  1. Accurately identifies the cuisine from the image
  2. Provides restaurant recommendations based on the identified food
  3. Offers route planning powered by the Google Maps MCP server

The application demonstrates sophisticated multi-server collaboration to answer complex queries such as “Is the restaurant open when I arrive?” To answer this, the system:

  1. Determines the current time in the user’s location using the time MCP server
  2. Retrieves restaurant operating hours and calculates travel time using the Google Maps MCP server
  3. Synthesizes this information to provide a clear, accurate response

We encourage you to modify the solution by adding additional MCP server configurations tailored to your specific personal or business requirements.

MCP application demo

Clean up

When you finish experimenting with this example, delete the SageMaker endpoints that you created in the process:

  1. Go to the Amazon SageMaker console.
  2. In the left navigation pane, choose Inference, and then choose Endpoints.
  3. From the endpoints list, delete the ones that you created from Amazon Bedrock Marketplace and SageMaker JumpStart.
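The same cleanup can be scripted with the AWS CLI. The commands below are a sketch: the endpoint and endpoint configuration names are placeholders you would fill in from the list-endpoints output, and note that deleting an endpoint does not delete its endpoint configuration.

```shell
# List your endpoints to find the ones created in this walkthrough
aws sagemaker list-endpoints --query "Endpoints[].EndpointName"

# Delete an endpoint and its configuration (names are placeholders)
aws sagemaker delete-endpoint --endpoint-name <your-endpoint-name>
aws sagemaker delete-endpoint-config --endpoint-config-name <your-endpoint-config-name>
```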

Conclusion

This post covers how integrating MCP with Mistral AI models on AWS enables the rapid development of intelligent applications that interact seamlessly with external systems. By standardizing tool use, developers can focus on core logic while keeping AI reasoning and tool execution cleanly separated, improving maintainability and scalability. The Strands Agent framework enhances this by streamlining implementation without sacrificing MCP compatibility. With AWS offering flexible deployment options, from Amazon Bedrock to Amazon Bedrock Marketplace and SageMaker, this approach balances performance and cost. The solution demonstrates how even lightweight setups can connect AI to real-time services.

We encourage developers to build upon this foundation by incorporating additional MCP servers tailored to their specific requirements. As the landscape of MCP-compatible tools continues to expand, organizations can create increasingly sophisticated AI assistants that effectively reason over external knowledge and take meaningful actions, accelerating the adoption of practical, agentic AI systems across industries while reducing implementation barriers.

Ready to implement MCP in your own projects? Explore the official AWS MCP server repository for examples and reference implementations. For more information about the Strands Agents framework, which simplifies agent building with its intuitive, code-first approach to data source integration, visit Strands Agents. Finally, dive deeper into open protocols for agent interoperability in the recent AWS blog post: Open Protocols for Agent Interoperability, which explores how these technologies are shaping the future of AI agent development.


About the authors

Ying Hou, PhD, is a Sr. Specialist Solution Architect for Gen AI at AWS, where she collaborates with model providers to onboard the latest and most intelligent AI models onto AWS platforms. With deep expertise in Gen AI, ASR, computer vision, NLP, and time-series forecasting models, she works closely with customers to design and build cutting-edge ML and GenAI applications.

Siddhant Waghjale is an Applied AI Engineer at Mistral AI, where he works on challenging customer use cases and applied science, helping customers achieve their goals with Mistral models. He’s passionate about building solutions that bridge AI capabilities with actual business applications, specifically in agentic workflows and code generation.

Samuel Barry is an Applied AI Engineer at Mistral AI, where he helps organizations design, deploy, and scale cutting-edge AI systems. He partners with customers to deliver high-impact solutions across a range of use cases, including RAG, agentic workflows, fine-tuning, and model distillation. Alongside engineering efforts, he also contributes to applied research initiatives that inform and strengthen production use cases.

Preston Tuggle is a Sr. Specialist Solutions Architect with the Third-Party Model Provider team at AWS. He focuses on working with model providers across Amazon Bedrock and Amazon SageMaker, helping them accelerate their go-to-market strategies through technical scaling initiatives and customer engagement.

Read More

Build real-time conversational AI experiences using Amazon Nova Sonic and LiveKit

Build real-time conversational AI experiences using Amazon Nova Sonic and LiveKit

The rapid growth of generative AI technology has been a catalyst for business productivity growth, creating new opportunities for greater efficiency, enhanced customer service experiences, and more successful customer outcomes. Today’s generative AI advances are helping existing technologies achieve their long-promised potential. For example, voice-first applications have been gaining traction across industries for years—from customer service to education to personal voice assistants and agents. But early versions of this technology struggled to interpret human speech or mimic real conversation. Building real-time, natural-sounding, low-latency voice AI has until recently remained complex, especially when working with streaming infrastructure and speech foundation models (FMs).

The rapid progress of conversational AI technology has led to the development of powerful models that address the historical challenges of traditional voice-first applications. Amazon Nova Sonic is a state-of-the-art speech-to-speech FM designed to build real-time conversational AI applications in Amazon Bedrock. This model offers industry-leading price-performance and low latency. The Amazon Nova Sonic architecture unifies speech understanding and generation into a single model, to enable real, human-like voice conversations in AI applications.

Amazon Nova Sonic accommodates the breadth and richness of human language. It can understand speech in different speaking styles and generate speech in expressive voices, including both masculine-sounding and feminine-sounding voices. Amazon Nova Sonic can also adapt the patterns of stress, intonation, and style of the generated speech response to align with the context and content of the speech input. Additionally, Amazon Nova Sonic supports function calling and knowledge grounding with enterprise data using Retrieval-Augmented Generation (RAG). To further simplify the process of getting the most from this technology, Amazon Nova Sonic is now integrated with LiveKit’s WebRTC framework, a widely used platform that enables developers to build real-time audio, video, and data communication applications. This integration makes it possible for developers to build conversational voice interfaces without needing to manage complex audio pipelines or signaling protocols. In this post, we explain how this integration works, how it addresses the historical challenges of voice-first applications, and some initial steps to start using this solution.

Solution overview

LiveKit is a popular open source WebRTC platform that provides scalable, multi‑user real‑time video, audio, and data communication. Designed as a full-stack solution, it offers a Selective Forwarding Unit (SFU) architecture; modern client SDKs across web, mobile, and server environments; and built‑in features such as speaker detection, bandwidth optimization, simulcast support, and seamless room management. You can deploy it as a self-hosted system or on AWS, so developers can focus on application logic without managing the underlying media infrastructure.

Building real-time, voice-first AI applications requires developers to manage multiple layers of infrastructure—from handling audio capture and streaming protocols to coordinating signaling, routing, and event-driven state management. Working with bidirectional streaming models such as Amazon Nova Sonic often meant setting up custom pipelines, managing audio buffers, and working to maintain low-latency performance across diverse client environments. These tasks added development overhead and required specialized knowledge in networking and real-time systems, making it difficult to quickly prototype or scale production-ready voice AI solutions. To address this complexity, we implemented a real-time plugin for Amazon Nova Sonic in the LiveKit Agent SDK. This solution removes the need for developers to manage audio signaling, streaming protocols, or custom transport layers. LiveKit handles real-time audio routing and session management, and Amazon Nova Sonic powers speech understanding and generation. Together, LiveKit and Amazon Nova Sonic provide a streamlined, production-ready setup for building voice-first AI applications. Features such as full-duplex audio, voice activity detection, and noise suppression are available out of the box, so developers can focus on application logic rather than infrastructure orchestration.

The following video shows Amazon Nova Sonic and LiveKit in action. You can find the code for this example in the LiveKit Examples GitHub repo.

The following diagram illustrates the solution architecture of Amazon Nova Sonic deployed as a voice agent in the LiveKit framework on AWS.

Diagram illustrates the solution architecture of Amazon Nova Sonic

Prerequisites

To implement the solution, you must have the following prerequisites:

  • Python version 3.12 or higher
  • An AWS account with appropriate Identity and Access Management (IAM) permissions for Amazon Bedrock
  • Access to Amazon Nova Sonic on Amazon Bedrock
  • A web browser (such as Google Chrome or Mozilla Firefox) with WebRTC support

Deploy the solution

Complete the following steps to get started talking to Amazon Nova Sonic through LiveKit:

  1. Install the necessary dependencies:
brew install livekit livekit-cli
curl -LsSf https://astral.sh/uv/install.sh | sh

uv is a fast, drop-in replacement for pip, used in the LiveKit Agents SDK (you can also choose to use pip).

  2. Set up a new local virtual environment:
uv init sonic_demo
cd sonic_demo
uv venv --python 3.12
uv add livekit-agents python-dotenv 'livekit-plugins-aws[realtime]'

  3. To run the LiveKit server locally, open a new terminal and run the following command:
livekit-server --dev

You must keep the LiveKit server running for the entire duration that the Amazon Nova Sonic agent is running, because it’s responsible for proxying data between parties.

  4. Generate an access token using the following command. The default values for api-key and api-secret are devkey and secret, respectively. When creating an access token for permission to join a LiveKit room, you must specify the room name and user identity.
lk token create \
 --api-key devkey --api-secret secret \
 --join --room my-first-room --identity user1 \
 --valid-for 24h

  5. Create environment variables. You must specify the AWS credentials:
vim .env

# contents of the .env file
AWS_ACCESS_KEY_ID=<aws access key id>
AWS_SECRET_ACCESS_KEY=<aws secret access key>

# if using a permanent identity (e.g. IAM user)
# then session token is optional
AWS_SESSION_TOKEN=<aws session token>
LIVEKIT_API_KEY=devkey
LIVEKIT_API_SECRET=secret

  6. Create the main.py file:
from dotenv import load_dotenv
from livekit import agents
from livekit.agents import AgentSession, Agent, AutoSubscribe
from livekit.plugins.aws.experimental.realtime import RealtimeModel

load_dotenv()

async def entrypoint(ctx: agents.JobContext):
    # Connect to the LiveKit server
    await ctx.connect(auto_subscribe=AutoSubscribe.AUDIO_ONLY)
    
    # Initialize the Amazon Nova Sonic agent
    agent = Agent(instructions="You are a helpful voice AI assistant.")
    session = AgentSession(llm=RealtimeModel())
    
    # Start the session in the specified room
    await session.start(
        room=ctx.room,
        agent=agent,
    )

if __name__ == "__main__":
    agents.cli.run_app(agents.WorkerOptions(entrypoint_fnc=entrypoint))

  7. Run the main.py file:
uv run python main.py connect --room my-first-room

Now you’re ready to connect to the agent frontend.

  1. Go to https://agents-playground.livekit.io/.
  2. Choose Manual.
  3. In the first text field, enter ws://localhost:7880.
  4. In the second text field, enter the access token you generated.
  5. Choose Connect.

You should now be able to talk to Amazon Nova Sonic in real time.

If you’re disconnected from the LiveKit room, you will have to restart the agent process (main.py) to talk to Amazon Nova Sonic again.

Clean up

This example runs locally, so there are no special teardown steps required for cleanup. You can simply exit the agent and LiveKit server processes. The only costs incurred are from the calls to Amazon Bedrock to talk to Amazon Nova Sonic. After you have disconnected from the LiveKit room, you will no longer incur charges and no AWS resources will remain in use.

Conclusion

Thanks to generative AI, the qualitative benefits long promised by voice-first applications can now be realized. By combining Amazon Nova Sonic with LiveKit’s WebRTC infrastructure, developers can build real-time, voice-first AI applications with less complexity and faster deployment. The integration reduces the need for custom audio pipelines, so teams can focus on building engaging conversational experiences.

“Our goal with this integration is to simplify the development of real-time voice applications,” said Josh Wulf, CEO of LiveKit. “By combining LiveKit’s robust media routing and session management with Nova Sonic’s speech capabilities, we’re helping developers move faster—no need to manage low-level infrastructure, so they can focus on building the conversation.”

To learn more about Amazon Nova Sonic, read the AWS News Blog, Amazon Nova Sonic product page, and Amazon Nova Sonic User Guide. To get started with Amazon Nova Sonic in Amazon Bedrock, visit the Amazon Bedrock console.


About the authors

Glen Ko is an AI developer at AWS Bedrock, where his focus is on enabling the proliferation of open source AI tooling and supporting open source innovation.

Anuj Jauhari is a Senior Product Marketing Manager at Amazon Web Services, where he helps customers realize value from innovations in generative AI.

Osman Ipek is a Solutions Architect on Amazon’s AGI team focusing on Nova foundation models. He guides teams to accelerate development through practical AI implementation strategies, with expertise spanning voice AI, NLP, and MLOps.

Read More

Reach the ‘PEAK’ on GeForce NOW

Reach the ‘PEAK’ on GeForce NOW

Grab a friend and climb toward the clouds — PEAK is now available on GeForce NOW, enabling members to try the hugely popular indie hit on virtually any device.

It’s one of four new games joining the cloud this week. Plus, members can look forward to Tony Hawk’s Pro Skater 3 + 4 coming soon.

Time to Climb

PEAK on GeForce NOW
There’s always gonna be another mountain.

PEAK is a co-op climbing game that puts players in the shoes of lost nature scouts, ascending a mountain at the center of a mysterious island. Scavenge for food (even if it’s of questionable quality), manage injuries on the climb and help the squad summit safely. Members can play solo or survive together with up to four players.

There’s a new island to survive on every day. And, with more than 100,000 concurrent players daily on Steam, there’s always a climbing buddy to join.

GeForce NOW members are equipped for the challenge with an Ultimate membership, which powers the climb at up to 4K resolution and 120 frames per second. Ultimate members can play PEAK and more than 2,000 other games they already own with extended session lengths and ultralow latency on a GeForce RTX 4080 rig. Upgrade today for elevated gameplay.

Get Hyped

Tony Hawk’s Pro Skater 3 + 4 is coming soon to GeForce NOW
Old school, new tricks.

Tony Hawk’s Pro Skater 3 + 4 is coming soon to GeForce NOW.

The legendary franchise that taught generations to ollie, grind and combo like maniacs is back, helping members hit the 900 from nearly any device.

The Birdman and crew return, bringing all the classic parks, legendary skaters and iconic soundtrack gamers remember — now fully remade with a few wild surprises for players.

Relive the glory days, whether grinding rails in the airport or pulling off insane combos in Los Angeles. New environments like a water park add a creative twist. The roster is stacked with original legends and new faces — plus a few unexpected guests.

Career Mode delivers heart-pounding runs, while New Game+ and Solo Tours keep the challenge alive. Take on friends and rivals in online multiplayer mode. The upgraded Create-a-Park and Create-a-Skater tools mean gamers can build, style and shred their own way.

With GeForce NOW, gamers will soon be able to skate anywhere, anytime — no console required. Enjoy the title in stunning 4K resolution and ultrasmooth frame rates with an Ultimate membership powered by GeForce RTX 4080 servers. Drop in instantly on any device, chase high scores with the lowest latency and keep the shred alive. The ultimate skate session is always just a click away.

Every Day a New Game

Every Day We Fight on GeForce NOW
Fight as one, survive as many.

Catch Every Day We Fight, a new roguelite, turn-based tactic game from Singla Space Lab and Hooded Horse, in the cloud with GeForce NOW.

In this title, citizens from either side of an ongoing war must set aside their differences as a mysterious alien invasion threatens humanity — and time has come to a stop for all but a small band of freedom fighters. Caught in a seemingly endless loop, players must shape these ordinary civilians into heroes as they repeatedly fight and die. Real-time exploration, stealth and teamwork are essential to acquire new skills, seek out more powerful weapons, escape the time loop and save the world.

Members can look for the following games to stream:

  • Every Day We Fight (New release on Steam, July 10)
  • Mycopunk (New release on Steam, July 10)
  • Brickadia (New release on Steam, July 11)
  • Peak (Steam)

What are you planning to play this weekend? Let us know on X or in the comments below.

Read More

How to Run Coding Assistants for Free on RTX AI PCs and Workstations

How to Run Coding Assistants for Free on RTX AI PCs and Workstations

Coding assistants or copilots — AI-powered assistants that can suggest, explain and debug code — are fundamentally changing how software is developed for both experienced and novice developers.

Experienced developers use these assistants to stay focused on complex coding tasks, reduce repetitive work and explore new ideas more quickly. Newer coders — like students and AI hobbyists — benefit from coding assistants that accelerate learning by describing different implementation approaches or explaining what a piece of code is doing and why.

Coding assistants can run in cloud environments or locally. Cloud-based coding assistants can be run anywhere but offer some limitations and require a subscription. Local coding assistants remove these issues but require performant hardware to operate well.

NVIDIA GeForce RTX GPUs provide the necessary hardware acceleration to run local assistants effectively.

Code, Meet Generative AI

Traditional software development includes many mundane tasks such as reviewing documentation, researching examples, setting up boilerplate code, authoring code with appropriate syntax, tracing down bugs and documenting functions. These are essential tasks that can take time away from problem solving and software design. Coding assistants help streamline such steps.

Many AI assistants are linked with popular integrated development environments (IDEs) like Microsoft Visual Studio Code or JetBrains’ PyCharm, which embed AI support directly into existing workflows.

There are two ways to run coding assistants: in the cloud or locally.

Cloud-based coding assistants require source code to be sent to external servers before responses are returned. This approach can be laggy and impose usage limits. Some developers prefer to keep their code local, especially when working with sensitive or proprietary projects. Plus, many cloud-based assistants require a paid subscription to unlock full functionality, which can be a barrier for students, hobbyists and teams that need to manage costs.

Coding assistants run in a local environment, by contrast, offer numerous advantages: cost-free access, no usage limits, and code that stays private on device.

Get Started With Local Coding Assistants

Tools that make it easy to run coding assistants locally include:

  • Continue.dev — An open-source extension for the VS Code IDE that connects to local large language models (LLMs) via Ollama, LM Studio or custom endpoints. This tool offers in-editor chat, autocomplete and debugging assistance with minimal setup. Get started with Continue.dev using the Ollama backend for local RTX acceleration.
  • Tabby — A secure and transparent coding assistant that’s compatible across many IDEs with the ability to run AI on NVIDIA RTX GPUs. This tool offers code completion, answering queries, inline chat and more. Get started with Tabby on NVIDIA RTX AI PCs.
  • Open Interpreter — An experimental but rapidly evolving interface that combines LLMs with command-line access, file editing and agentic task execution. Ideal for automation and DevOps-style tasks for developers. Get started with Open Interpreter on NVIDIA RTX AI PCs.
  • LM Studio — A graphical user interface-based runner for local LLMs that offers chat, context window management and system prompts. Optimal for testing coding models interactively before IDE deployment. Get started with LM Studio on NVIDIA RTX AI PCs.
  • Ollama — A local AI model inferencing engine that enables fast, private inference of models like Code Llama, StarCoder2 and DeepSeek. It integrates seamlessly with tools like Continue.dev.

These tools support models served through frameworks like Ollama or llama.cpp, and many are now optimized for GeForce RTX and NVIDIA RTX PRO GPUs.
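Tools like Continue.dev typically talk to a local Ollama instance over its HTTP API. The sketch below builds the JSON body for Ollama's /api/generate endpoint; the model name and prompt are placeholders, and actually sending the request (shown commented out) assumes Ollama is running on its default port 11434.

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_generate_payload(model, prompt, stream=False):
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": stream}

# Placeholder model and prompt for illustration
payload = build_generate_payload(
    "codellama:7b",
    "Explain this function: def double(x): return x * 2",
)

# Sending the request requires a running Ollama server:
# req = request.Request(OLLAMA_URL, data=json.dumps(payload).encode(),
#                       headers={"Content-Type": "application/json"})
# print(json.loads(request.urlopen(req).read())["response"])
```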

See AI-Assisted Learning on RTX in Action

Running on a GeForce RTX-powered PC, Continue.dev paired with the Gemma 12B Code LLM helps explain existing code, explore search algorithms and debug issues — all entirely on device. Acting like a virtual teaching assistant, the assistant provides plain-language guidance, context-aware explanations, inline comments and suggested code improvements tailored to the user’s project.

This workflow highlights the advantage of local acceleration: the assistant is always available, responds instantly and provides personalized support, all while keeping the code private on device and making the learning experience immersive.

That level of responsiveness comes down to GPU acceleration. Models like Gemma 12B are compute-heavy, especially when they’re processing long prompts or working across multiple files. Running them locally without a GPU can feel sluggish — even for simple tasks. With RTX GPUs, Tensor Cores accelerate inference directly on the device, so the assistant is fast, responsive and able to keep up with an active development workflow.

Coding assistants running on the Meta Llama 3.1-8B model experience 5-6x faster throughput on RTX-powered laptops versus on CPU. Data measured uses the average tokens per second at BS = 1, ISL/OSL = 2000/100, with the Llama-3.1-8B model quantized to int4.

Whether used for academic work, coding bootcamps or personal projects, RTX AI PCs are enabling developers to build, learn and iterate faster with AI-powered tools.

For those just getting started — especially students building their skills or experimenting with generative AI — NVIDIA GeForce RTX 50 Series laptops feature specialized AI technologies that accelerate top applications for learning, creating and gaming, all on a single system. Explore RTX laptops ideal for back-to-school season.

And to encourage AI enthusiasts and developers to experiment with local AI and extend the capabilities of their RTX PCs, NVIDIA is hosting a Plug and Play: Project G-Assist Plug-In Hackathon — running virtually through Wednesday, July 16. Participants can create custom plug-ins for Project G-Assist, an experimental AI assistant designed to respond to natural language and extend across creative and development tools. It’s a chance to win prizes and showcase what’s possible with RTX AI PCs.

Join NVIDIA’s Discord server to connect with community developers and AI enthusiasts for discussions on what’s possible with RTX AI.

Each week, the RTX AI Garage blog series features community-driven AI innovations and content for those looking to learn more about NVIDIA NIM microservices and AI Blueprints, as well as building AI agents, creative workflows, digital humans, productivity apps and more on AI PCs and workstations. 

Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter.

Follow NVIDIA Workstation on LinkedIn and X

See notice regarding software product information.

Read More

AWS AI infrastructure with NVIDIA Blackwell: Two powerful compute solutions for the next frontier of AI

AWS AI infrastructure with NVIDIA Blackwell: Two powerful compute solutions for the next frontier of AI

Imagine a system that can explore multiple approaches to complex problems, drawing on its understanding of vast amounts of data, from scientific datasets to source code to business documents, and reasoning through the possibilities in real time. This lightning-fast reasoning isn’t waiting on the horizon. It’s happening today in our customers’ AI production environments. The scale of the AI systems that our customers are building today—across drug discovery, enterprise search, software development, and more—is truly remarkable. And there’s much more ahead.

To accelerate innovation across emerging generative AI developments such as reasoning models and agentic AI systems, we’re excited to announce general availability of P6e-GB200 UltraServers, accelerated by NVIDIA Grace Blackwell Superchips. P6e-GB200 UltraServers are designed for training and deploying the largest, most sophisticated AI models. Earlier this year, we launched P6-B200 instances, accelerated by NVIDIA Blackwell GPUs, for diverse AI and high-performance computing workloads.

In this post, we share how these powerful compute solutions build on everything we’ve learned about delivering secure, reliable GPU infrastructure at a massive scale, so that customers can confidently push the boundaries of AI.

Meeting the expanding compute demands of AI workloads

P6e-GB200 UltraServers represent our most powerful GPU offering to date, featuring up to 72 NVIDIA Blackwell GPUs interconnected using fifth-generation NVIDIA NVLink—all functioning as a single compute unit. Each UltraServer delivers a massive 360 petaflops of dense FP8 compute and 13.4 TB of total high bandwidth GPU memory (HBM3e)—which is over 20 times the compute and over 11 times the memory in a single NVLink domain compared to P5en instances. P6e-GB200 UltraServers support up to 28.8 Tbps aggregate bandwidth of fourth-generation Elastic Fabric Adapter (EFAv4) networking.

P6-B200 instances are a versatile option for a broad range of AI use cases. Each instance provides 8 NVIDIA Blackwell GPUs interconnected using NVLink with 1.4 TB of high bandwidth GPU memory, up to 3.2 Tbps of EFAv4 networking, and fifth-generation Intel Xeon Scalable processors. P6-B200 instances offer up to 2.25 times the GPU TFLOPs, 1.27 times the GPU memory size, and 1.6 times the GPU memory bandwidth compared to P5en instances.
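A quick back-of-the-envelope check makes the "over 20x compute, over 11x memory" claims concrete. The per-GPU P5en figures below (8x NVIDIA H200, roughly 1.98 PFLOPS dense FP8 and 141 GB of HBM3e each) are our assumption for illustration, not numbers from this post.

```python
# Stated P6e-GB200 per-UltraServer figures (from the post)
P6E_FP8_PFLOPS = 360.0   # dense FP8 compute
P6E_HBM_TB = 13.4        # total HBM3e

# Assumed P5en baseline: 8x NVIDIA H200 per instance
P5EN_FP8_PFLOPS = 8 * 1.98    # ~1.98 PFLOPS dense FP8 per GPU (assumed)
P5EN_HBM_TB = 8 * 0.141       # 141 GB HBM3e per GPU (assumed)

compute_ratio = P6E_FP8_PFLOPS / P5EN_FP8_PFLOPS
memory_ratio = P6E_HBM_TB / P5EN_HBM_TB
print(f"compute: {compute_ratio:.1f}x, memory: {memory_ratio:.1f}x")
# -> compute: 22.7x, memory: 11.9x
```

Both ratios land comfortably above the "over 20x" and "over 11x" figures quoted for a single NVLink domain.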

How do you choose between P6e-GB200 and P6-B200? This choice comes down to your specific workload requirements and architectural needs:

  • P6e-GB200 UltraServers are ideal for the most compute and memory intensive AI workloads, such as training and deploying frontier models at the trillion-parameter scale. Their NVIDIA GB200 NVL72 architecture really shines at this scale. Imagine all 72 GPUs working as one, with a unified memory space and coordinated workload distribution. This architecture enables more efficient distributed training by reducing communication overhead between GPU nodes. For inference workloads, the ability to fully contain trillion-parameter models within a single NVLink domain means faster, more consistent response times at scale. When combined with optimization techniques such as disaggregated serving with NVIDIA Dynamo, the large domain size of GB200 NVL72 architecture unlocks significant inference efficiencies for various model architectures such as mixture of experts models. GB200 NVL72 is particularly powerful when you need to handle extra-large context windows or run high-concurrency applications in real time.
  • P6-B200 instances support a broad range of AI workloads and are an ideal option for medium to large-scale training and inference workloads. If you want to port your existing GPU workloads, P6-B200 instances offer a familiar 8-GPU configuration that minimizes code changes and simplifies migration from current generation instances. Additionally, although NVIDIA’s AI software stack is optimized for both Arm and x86, if your workloads are specifically built for x86 environments, P6-B200 instances, with their Intel Xeon processors, will be your ideal choice.
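The trillion-parameter sizing argument above can be sketched with simple arithmetic: weight footprint is roughly parameters times bytes per parameter. This counts weights only; KV cache, activations, and optimizer state add substantially more, so treat it as a lower bound rather than a capacity plan.

```python
def weights_tb(params: float, bytes_per_param: float) -> float:
    """Approximate weight footprint in terabytes (1 TB = 1e12 bytes)."""
    return params * bytes_per_param / 1e12


# A 1-trillion-parameter model at two common precisions
for precision, bytes_pp in [("fp8", 1.0), ("bf16", 2.0)]:
    tb = weights_tb(1e12, bytes_pp)
    print(f"{precision}: {tb:.1f} TB weights; "
          f"fits one GB200 NVL72 domain (13.4 TB): {tb <= 13.4}; "
          f"fits one P6-B200 instance (1.4 TB): {tb <= 1.4}")
```

At bf16, a trillion-parameter model's weights alone (2 TB) already exceed a single P6-B200 instance's 1.4 TB but sit well within one 13.4 TB GB200 NVL72 NVLink domain, which is why single-domain inference at this scale favors P6e-GB200.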

Innovation built on AWS core strengths

Bringing NVIDIA Blackwell to AWS isn’t about a single breakthrough—it’s about continuous innovation across multiple layers of infrastructure. By building on years of learning and innovation across compute, networking, operations, and managed services, we’ve brought NVIDIA Blackwell’s full capabilities with the reliability and performance customers expect from AWS.

Robust instance security and stability

When customers tell me why they choose to run their GPU workloads on AWS, one crucial point comes up consistently: they highly value our focus on instance security and stability in the cloud. The specialized hardware, software, and firmware of the AWS Nitro System are designed to enforce restrictions so that nobody, including anyone in AWS, can access your sensitive AI workloads and data. Beyond security, the Nitro System fundamentally changes how we maintain and optimize infrastructure. The Nitro System, which handles networking, storage, and other I/O functions, makes it possible to deploy firmware updates, bug fixes, and optimizations while customer instances remain operational. This ability to update without system downtime, which we call live update, is crucial in today’s AI landscape, where any interruption significantly impacts production timelines. P6e-GB200 and P6-B200 both feature the sixth generation of the Nitro System, but these security and stability benefits aren’t new—our innovative Nitro architecture has been protecting and optimizing Amazon Elastic Compute Cloud (Amazon EC2) workloads since 2017.

Reliable performance at massive scale

In AI infrastructure, the challenge isn’t just reaching massive scale—it’s delivering consistent performance and reliability at that scale. We’ve deployed P6e-GB200 UltraServers in third-generation EC2 UltraClusters, which creates a single fabric that can encompass our largest data centers. Third-generation UltraClusters cut power consumption by up to 40% and reduce cabling requirements by more than 80%—not only improving efficiency, but also significantly reducing potential points of failure.

To deliver consistent performance at this massive scale, we use Elastic Fabric Adapter (EFA) with its Scalable Reliable Datagram protocol, which intelligently routes traffic across multiple network paths to maintain smooth operation even during congestion or failures. We’ve continuously improved EFA’s performance across four generations. P6e-GB200 and P6-B200 instances with EFAv4 show up to 18% faster collective communications in distributed training compared to P5en instances that use EFAv3.

Infrastructure efficiency

Whereas P6-B200 instances use our proven air-cooling infrastructure, P6e-GB200 UltraServers use liquid cooling, which enables higher compute density in large NVLink domain architectures and delivers higher system performance. Novel mechanical cooling solutions provide configurable liquid-to-chip cooling in both new and existing data centers, so we can support liquid-cooled accelerators and air-cooled network and storage infrastructure in the same facility. With this flexible cooling design, we can deliver maximum performance and efficiency at the lowest cost.

Getting started with NVIDIA Blackwell on AWS

We’ve made it simple to get started with P6e-GB200 UltraServers and P6-B200 instances through multiple deployment paths, so you can quickly begin using Blackwell GPUs while maintaining the operational model that works best for your organization.

Amazon SageMaker HyperPod

If you’re accelerating your AI development and want to spend less time managing infrastructure and cluster operations, that’s exactly where Amazon SageMaker HyperPod excels. It provides managed, resilient infrastructure that automatically handles provisioning and management of large GPU clusters. We keep enhancing SageMaker HyperPod, adding innovations like flexible training plans to help you gain predictable training timelines and run training workloads within your budget requirements.

SageMaker HyperPod will support both P6e-GB200 UltraServers and P6-B200 instances, with optimizations to maximize performance by keeping workloads within the same NVLink domain. We’re also building in a comprehensive, multi-layered recovery system: SageMaker HyperPod will automatically replace faulty instances with preconfigured spares in the same NVLink domain. Built-in dashboards will give you visibility into everything from GPU utilization and memory usage to workload metrics and UltraServer health status.
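For a sense of what provisioning looks like, here is a sketch of a SageMaker HyperPod `CreateCluster` request built as a plain dictionary. The instance type string, bucket path, and role ARN are placeholders (P6 support is described above as forthcoming, so the exact instance type identifier may differ); the commented-out boto3 call shows where the request would be submitted.

```python
# Sketch of a SageMaker HyperPod CreateCluster request. All names below
# (instance type, S3 URI, role ARN) are illustrative placeholders.
request = {
    "ClusterName": "blackwell-training",
    "InstanceGroups": [
        {
            "InstanceGroupName": "gpu-workers",
            "InstanceType": "ml.p6-b200.48xlarge",  # placeholder identifier
            "InstanceCount": 2,
            "LifeCycleConfig": {
                # Lifecycle scripts run on each instance at creation
                "SourceS3Uri": "s3://my-bucket/lifecycle/",
                "OnCreate": "on_create.sh",
            },
            "ExecutionRole": "arn:aws:iam::123456789012:role/HyperPodRole",
        }
    ],
}

# To actually create the cluster (requires AWS credentials):
# import boto3
# boto3.client("sagemaker").create_cluster(**request)
```

HyperPod's resiliency features described above (spare replacement within the NVLink domain, health dashboards) apply to clusters created this way without extra configuration in the request.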

Amazon EKS

For large-scale AI workloads, if you prefer to manage your infrastructure using Kubernetes, Amazon Elastic Kubernetes Service (Amazon EKS) is often the control plane of choice. We continue to drive innovations in Amazon EKS with capabilities like Amazon EKS Hybrid Nodes, which enable you to manage both on-premises and EC2 GPUs in a single cluster—delivering flexibility for AI workloads.

Amazon EKS will support both P6e-GB200 UltraServers and P6-B200 instances with automated provisioning and lifecycle management through managed node groups. For P6e-GB200 UltraServers, we’re building in topology awareness that understands the GB200 NVL72 architecture, automatically labeling nodes with their UltraServer ID and network topology information to enable optimal workload placement. You will be able to span node groups across multiple UltraServers or dedicate them to individual UltraServers, giving you flexibility in organizing your training infrastructure. Amazon EKS monitors GPU and accelerator errors and relays them to the Kubernetes control plane for optional remediation.
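Topology-aware placement like the node labeling described above is typically consumed through a pod's `nodeSelector`. The sketch below builds such a pod spec as a Python dictionary; the label key `sagemaker.amazonaws.com/ultraserver-id` and its value are our invention for illustration, and the actual label keys AWS applies may differ.

```python
# Hypothetical pod spec pinning a training pod to one UltraServer (one
# NVLink domain) via a node label. The label key and value are illustrative.
pod_spec = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "trainer-0"},
    "spec": {
        "nodeSelector": {
            # Schedule only onto nodes labeled with this UltraServer ID
            "sagemaker.amazonaws.com/ultraserver-id": "us-0001",  # hypothetical
        },
        "containers": [
            {"name": "trainer", "image": "my-registry/trainer:latest"}
        ],
    },
}
```

Spanning a node group across UltraServers versus dedicating it to one then becomes a question of whether your pods carry such a selector or leave placement to the scheduler.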

NVIDIA DGX Cloud on AWS

P6e-GB200 UltraServers will also be available through NVIDIA DGX Cloud. DGX Cloud is a unified AI platform optimized at every layer with multi-node AI training and inference capabilities and NVIDIA’s complete AI software stack. You benefit from NVIDIA’s latest optimizations, benchmarking recipes, and technical expertise to improve efficiency and performance. It offers flexible term lengths along with comprehensive NVIDIA expert support and services to help you accelerate your AI initiatives.

This launch announcement is an important milestone, and it’s just the beginning. As AI capabilities evolve rapidly, you need infrastructure built not just for today’s demands but for all the possibilities that lie ahead. With innovations across compute, networking, operations, and managed services, P6e-GB200 UltraServers and P6-B200 instances are ready to enable these possibilities. We can’t wait to see what you will build with them.


About the author

David Brown is the Vice President of AWS Compute and Machine Learning (ML) Services. In this role he is responsible for building all AWS Compute and ML services, including Amazon EC2, Amazon Container Services, AWS Lambda, Amazon Bedrock and Amazon SageMaker. These services are used by all AWS customers but also underpin most of AWS’s internal Amazon applications. He also leads newer solutions, such as AWS Outposts, that bring AWS services into customers’ private data centers.

David joined AWS in 2007 as a Software Development Engineer based in Cape Town, South Africa, where he worked on the early development of Amazon EC2. In 2012, he relocated to Seattle and continued to work in the broader Amazon EC2 organization. Over the last 11 years, he has taken on larger leadership roles as more of the AWS compute and ML products have become part of his organization.

Prior to joining Amazon, David worked as a Software Developer at a financial industry startup. He holds a Computer Science & Economics degree from the Nelson Mandela University in Port Elizabeth, South Africa.
