AWS offers new artificial intelligence, machine learning, and generative AI guides to plan your AI strategy

Breakthroughs in artificial intelligence (AI) and machine learning (ML) have been in the headlines for months—and for good reason. The emerging and evolving capabilities of this technology promise new business opportunities for customers across all sectors and industries. But the speed of this revolution has made it harder for organizations and consumers to assess what these breakthroughs mean for them specifically.

Over the years, AWS has invested in democratizing access to—and understanding of—AI, ML, and generative AI. Through announcements around the latest developments in generative AI and the establishment of a $100 million Generative AI Innovation Center program, Amazon Web Services (AWS) has been at the forefront of helping drive understanding about the role that these innovations can play in the lives of both individuals and organizations. To help you understand your options in relation to AI and ML, AWS has published two new guides: the AWS Cloud Adoption Framework for Artificial Intelligence, Machine Learning, and Generative AI and the Getting Started Resource Center machine learning decision guide.

AWS CAF for AI, ML, and Generative AI

The AWS Cloud Adoption Framework for Artificial Intelligence, Machine Learning, and Generative AI (CAF-AI) is designed to help you navigate your AI journey. It’s a mental model for organizations that strive to generate business value from AI/ML. Based on our own—and our customers’—experience, this framework provides best practices for an AI transformation and for accelerating business outcomes through innovative use of AI on AWS.

Used by customers and partner teams, CAF-AI helps derive, prioritize, evolve, and communicate a strategy for AI transformation. The following figure shows how we simplify an AI journey through CAF-AI: by working backward from business outcomes (1) to the opportunities that AI, ML, and generative AI provide (2), across your transformation domains (3) and your foundational capabilities (4) through an iterative process (5) of assessing, deriving, and implementing action items for an AI strategy.

In CAF-AI, we describe the AI/ML journey you may experience as your organizational capabilities on AI and ML mature. To guide you, we zoom in on the evolution of the foundational capabilities that we have observed help an organization grow its AI maturity.

We also provide prescriptive guidance through an overview of the target state of these foundational capabilities and explain how to evolve them step by step to generate business value along the way. The following figure shows these foundational capabilities for cloud and AI/ML adoption. A capability is an organizational ability to use processes to deploy resources (such as people, technology, and other tangible or intangible assets) to achieve an outcome. Because the CAF-AI is a living index of knowledge, you can expect it to grow and change over time.

Designed as a starting and orientation point throughout a customer’s ML and AI journey, CAF-AI is intended to be a document that organizations can draw inspiration from as they shape their mid-term AI and ML agenda and try to understand the important topics and perspectives that influence it. Depending on where you are on your AI/ML journey, you might focus on a specific section and hone your skills there, or use the whole document to judge maturity and help direct near-term improvement areas.

The business problem space to which AI/ML can be applied isn’t a single function or domain; it spans all business functions and all industry domains where AI/ML can make an economic difference and reset the playing field. The AWS Cloud Adoption Framework for Artificial Intelligence, Machine Learning, and Generative AI is one of the many tools AWS provides to help you achieve this outcome. As AI/ML enables solutions and solution paths to problems that have remained uneconomical to solve for decades (or were technically impossible to tackle without AI/ML), the resulting business outcomes can be profound.

The Getting Started Resource Center machine learning decision guide

AWS has always been about choice. As you ramp up your use of AI, it is paramount that you have the right support in choosing the best service, model, and infrastructure for your business needs. The Getting Started Resource Center machine learning decision guide is designed to provide you with a detailed overview of the AI and ML services offered by AWS, and provide structured guidance on how to choose the services that might be right for you and your use cases.

The decision guide can also help you articulate and consider the criteria that will inform your choices. For example, it describes the range of AWS ML services (see the following screenshot), each of which caters to a different level of management overhead, depending on how much control and customization you need.

The guide also explains the unique capabilities of AWS services in realizing the power of foundation models and where you can make the most of this fast-evolving branch of machine learning.

It offers details on specific services, links to detailed, service-level technical guides, a comparison table that highlights the unique capabilities of key services, and criteria for selecting AI and ML services. It also provides a curated set of links to key resources that can help you get started in using AI, ML, and generative AI services on AWS.

If you want to understand the breadth of AI, ML, and generative AI offerings provided by AWS, this decision guide is a great place to start.

Conclusion

The Getting Started Resource Center machine learning decision guide, together with the AWS Cloud Adoption Framework for Artificial Intelligence, Machine Learning, and Generative AI, covers the technical and non-technical questions that we often hear. We hope you find these new resources useful and look forward to your feedback on them.


About the Authors

Caleb Wilkinson has more than a decade of experience building AI solutions. As a Senior Machine Learning Strategist at AWS, Caleb pioneers innovative applications of AI that push the boundaries of possibility and helps organizations benefit responsibly from artificial intelligence. He is the co-author of CAF-AI.

Alexander Wöhlke has a decade of experience in AI and ML. He is Senior Machine Learning Strategist and Technical Product Manager at the AWS Generative AI Innovation Center. He works with large organizations on their AI-Strategy and helps them take calculated risks at the forefront of technological development. He is the co-author of CAF-AI.

Geof Wheelwright manages the AWS decision content team, which writes and develops the growing collection of decision guides on the AWS Getting Started Resource Center. His team created the Choosing an AWS machine learning decision guide. He has enjoyed working with AI and its ancestors since first being introduced to simple, text-based Apple II versions of ELIZA in the early 1980s.

Read More

Codeium’s Varun Mohan and Jeff Wang on Unleashing the Power of AI in Software Development

The world increasingly runs on code.

Accelerating the work of those who create that code will boost their productivity — and that’s just what AI startup Codeium, a member of NVIDIA’s Inception program for startups, aims to do.

On the latest episode of NVIDIA’s AI Podcast, host Noah Kravitz interviewed Codeium founder and CEO Varun Mohan and Jeff Wang, the company’s head of business, about how AI is transforming software development.

Codeium’s AI-powered code acceleration toolkit boasts three core features: autocomplete, chat and search.

Autocomplete intelligently suggests code segments, saving developers time by minimizing the need for writing boilerplate or unit tests.

At the same time, the chat function empowers developers to rework or even create code with natural language queries, enhancing their coding efficiency while providing searchable context on the entire code base.

Noah spoke with Mohan and Wang about the future of software development with AI, and the continued, essential role of humans in the process.

You Might Also Like

Jules Anh Tuan Nguyen Explains How AI Lets Amputee Control Prosthetic Hand, Video Games

A postdoctoral researcher at the University of Minnesota discusses his efforts to allow amputees to control their prosthetic limb — right down to the finger motions — with their minds.

Overjet’s Ai Wardah Inam on Bringing AI to Dentistry

Overjet, a member of NVIDIA Inception, is moving fast to bring AI to dentists’ offices. Dr. Wardah Inam, CEO of the company, discusses using AI to improve patient care.

Immunai CTO and Co-Founder Luis Voloch on Using Deep Learning to Develop New Drugs

Luis Voloch, co-founder and chief technology officer of Immunai, talks about tackling the challenges of the immune system with a machine learning and data science mindset.

Subscribe to the AI Podcast: Now Available on Amazon Music

The AI Podcast is now available through Amazon Music.

In addition, get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better. Have a few minutes to spare? Fill out this listener survey.

 

Read More

New technical deep dive course: Generative AI Foundations on AWS

Generative AI Foundations on AWS is a new technical deep dive course that gives you the conceptual fundamentals, practical advice, and hands-on guidance to pre-train, fine-tune, and deploy state-of-the-art foundation models on AWS and beyond. Developed by AWS generative AI worldwide foundations lead Emily Webber, this free hands-on course, along with the supporting GitHub source code, launched on the AWS YouTube channel. If you are looking for a curated playlist of the top resources, concepts, and guidance to get up to speed on foundation models, and especially those that unlock generative capabilities in your data science and machine learning projects, then look no further.

During this 8-hour deep dive, you will be introduced to the key techniques, services, and trends that will help you understand foundation models from the ground up. This means breaking down theory, mathematics, and abstract concepts combined with hands-on exercises to gain functional intuition for practical application. Throughout the course, we focus on a wide spectrum of progressively complex generative AI techniques, giving you a strong base to understand, design, and apply your own models for the best performance. We’ll start by recapping foundation models, understanding where they come from, how they work, how they relate to generative AI, and what you can do to customize them. You’ll then learn about picking the right foundation model to suit your use case.

Once you’ve developed a strong contextual understanding of foundation models and how to use them, you’ll be introduced to the core subject of this course: pre-training new foundation models. You’ll learn why you’d want to do this as well as how and where it’s competitive. You’ll even learn how to use the scaling laws to pick the right model, dataset, and compute sizes. We’ll cover preparing training datasets at scale on AWS, including picking the right instances and storage techniques. We’ll cover fine-tuning your foundation models, evaluating recent techniques, and understanding how to run these with your scripts and models. We’ll dive into reinforcement learning with human feedback, exploring how to use it skillfully and at scale to truly maximize your foundation model performance.

Finally, you’ll learn how to apply theory to production by deploying your new foundation model on Amazon SageMaker, including across multiple GPUs and using top design patterns like retrieval augmented generation and chained dialogue. As an added bonus, we’ll walk you through a Stable Diffusion deep dive, prompt engineering best practices, standing up LangChain, and more.

More of a reader than a video consumer? You can check out my 15-chapter book “Pretrain Vision and Large Language Models in Python: End-to-end techniques for building and deploying foundation models on AWS,” which was released on May 31, 2023, by Packt Publishing and is available now on Amazon. Want to jump right into the code? I’m with you—every video starts with a 45-minute overview of the key concepts and visuals. Then I’ll give you a 15-minute walkthrough of the hands-on portion. All of the example notebooks and supporting code will ship in a public repository, which you can use to step through on your own. Feel free to reach out to me on Medium, LinkedIn, GitHub, or through your AWS teams. Learn more about generative AI on AWS.

Happy trails!

Course outline

1. Introduction to Foundation Models

  • What are large language models and how do they work?
  • Where do they come from?
  • What are other types of generative AI?
  • How do you customize a foundation model?
  • How do you evaluate a Generative model?
  • Hands-on walk through: Foundation Models on SageMaker

Lesson 1 slides

Lesson 1 hands-on demo resources

2. Picking the right foundation model

  • Why starting with the right foundation model matters
  • Considering size
  • Considering accuracy
  • Considering ease-of-use
  • Considering licensing
  • Considering previous examples of this model working well in your industry
  • Considering external benchmarks

Lesson 2 slides

Lesson 2 hands-on demo resources

3. Using pretrained foundation models: prompt engineering and fine-tuning

  • The benefits of starting with a pre-trained foundation model
  • Prompt engineering:
    • Zero-shot
    • Single-shot
    • Few-shot
    • Summarization
    • Classification
    • Translation
  • Fine-tuning
    • Classic fine-tuning
    • Parameter efficient fine-tuning
    • Hugging Face’s new library
  • Hands-on walk through: prompt engineering and fine-tuning on SageMaker

Lesson 3 slides

Lesson 3 hands-on demo resources

4. Pretraining a new foundation model

  • Why would you want or need to create a new foundation model?
  • Comparing pretraining to fine-tuning
  • Preparing your dataset for pretraining
  • Distributed training on SageMaker: libraries, scripts, jobs, resources
  • Why and how to adapt a new script to SageMaker distributed training

Lesson 4 slides

Lesson 4 hands-on demo resources

5. Preparing data and training at scale

  • Options for prepping data at scale on AWS
  • Explain SageMaker job parallelism on CPU instances
  • Explain modes of sending data to SageMaker Training
  • Introduction to FSx for Lustre
  • Using FSx for Lustre at scale for SageMaker Training
  • Hands-on walk through: configuring Lustre for SageMaker Training

Lesson 5 slides

Lesson 5 hands-on demo resources

6. Reinforcement learning with human feedback

  • What is this technique and why do we care about it
  • How it gets around problems with subjectivity and objectivity through ranking human preferences at scale
  • How does it work?
  • How to do this with SageMaker Ground Truth
  • Updated reward modeling
  • Hands-on walk through: RLHF on SageMaker

Lesson 6 slides

Lesson 6 hands-on demo resources

7. Deploying a foundation model

  • Why do we want to deploy models?
  • Different options for deploying FMs on AWS
  • How to optimize your model for deployment
  • Large model deployment container deep dive
  • Top configuration tips for deploying FMs on SageMaker
  • Prompt engineering tips for invoking foundation models
  • Using retrieval augmented generation to mitigate hallucinations
  • Hands-on walk through: Deploying an FM on SageMaker

Lesson 7 slides

Lesson 7 hands-on demo resources


About the author

Emily Webber joined AWS just after SageMaker launched, and has been trying to tell the world about it ever since! Outside of building new ML experiences for customers, Emily enjoys meditating and studying Tibetan Buddhism.

Read More

Frontier Model Forum

We’re forming a new industry body to promote the safe and responsible development of frontier AI systems: advancing AI safety research, identifying best practices and standards, and facilitating information sharing among policymakers and industry. (OpenAI Blog)

AWS Reaffirms its Commitment to Responsible Generative AI

As a pioneer in artificial intelligence and machine learning, AWS is committed to developing and deploying generative AI responsibly.

As one of the most transformational innovations of our time, generative AI continues to capture the world’s imagination, and we remain as committed as ever to harnessing it responsibly. With a team of dedicated responsible AI experts, complemented by our engineering and development organization, we continually test and assess our products and services to define, measure, and mitigate concerns about accuracy, fairness, intellectual property, appropriate use, toxicity, and privacy. And while we don’t have all of the answers today, we are working alongside others to develop new approaches and solutions to address these emerging challenges. We believe we can drive innovation in AI while continuing to implement the necessary safeguards to protect our customers and consumers.

At AWS, we know that generative AI technology and how it is used will continue to evolve, posing new challenges that will require additional attention and mitigation. That’s why Amazon is actively engaged with organizations and standards bodies focused on the responsible development of next-generation AI systems, including NIST, ISO, the Responsible AI Institute, and the Partnership on AI. In fact, last week at the White House, Amazon signed voluntary commitments to foster the safe, responsible, and effective development of AI technology. We are eager to share knowledge with policymakers, academics, and civil society, as we recognize the unique challenges posed by generative AI will require ongoing collaboration.

This commitment is consistent with our approach to developing our own generative AI services, including building foundation models (FMs) with responsible AI in mind at each stage of our comprehensive development process. Throughout design, development, deployment, and operations, we consider a range of factors including 1/ accuracy, e.g., how closely a summary matches the underlying document; whether a biography is factually correct; 2/ fairness, e.g., whether outputs treat demographic groups similarly; 3/ intellectual property and copyright considerations; 4/ appropriate usage, e.g., filtering out user requests for legal advice, medical diagnoses, or illegal activities; 5/ toxicity, e.g., hate speech, profanity, and insults; and 6/ privacy, e.g., protecting personal information and customer prompts. We build solutions to address these issues into our processes for acquiring training data, into the FMs themselves, and into the technology that we use to pre-process user prompts and post-process outputs. For all our FMs, we invest actively to improve our features, and to learn from customers as they experiment with new use cases.

For example, Amazon’s Titan FMs are built to detect and remove harmful content in the data that customers provide for customization, reject inappropriate content in the user input, and filter the model’s outputs containing inappropriate content (such as hate speech, profanity, and violence).

To help developers build applications responsibly, Amazon CodeWhisperer provides a reference tracker that displays the licensing information for a code recommendation and provides a link to the corresponding open-source repository when necessary. This makes it easier for developers to decide whether to use the code in their project and make the relevant source code attributions as they see fit. In addition, Amazon CodeWhisperer filters out code recommendations that include toxic phrases, and recommendations that indicate bias.

Through innovative services like these, we will continue to help our customers realize the benefits of generative AI, while collaborating across the public and private sectors to ensure we’re doing so responsibly. Together, we will build trust among customers and the broader public, as we harness this transformative new technology as a force for good.


About the Author

Peter Hallinan leads initiatives in the science and practice of Responsible AI at AWS AI, alongside a team of responsible AI experts. He has deep expertise in AI (PhD, Harvard) and entrepreneurship (Blindsight, sold to Amazon). His volunteer activities have included serving as a consulting professor at the Stanford University School of Medicine, and as the president of the American Chamber of Commerce in Madagascar. When possible, he’s off in the mountains with his children: skiing, climbing, hiking and rafting.

Read More

Use generative AI foundation models in VPC mode with no internet connectivity using Amazon SageMaker JumpStart

With recent advancements in generative AI, there is a lot of discussion about how to use generative AI across different industries to solve specific business problems. Generative AI is a type of AI that can create new content and ideas, including conversations, stories, images, videos, and music. It is all backed by very large models that are pre-trained on vast amounts of data and commonly referred to as foundation models (FMs). These FMs can perform a wide range of tasks that span multiple domains, like writing blog posts, generating images, solving math problems, engaging in dialog, and answering questions based on a document. The size and general-purpose nature of FMs make them different from traditional ML models, which typically perform specific tasks, like analyzing text for sentiment, classifying images, and forecasting trends.

While organizations are looking to use the power of these FMs, they also want the FM-based solutions to be running in their own protected environments. Organizations operating in heavily regulated spaces like global financial services and healthcare and life sciences have audit and compliance requirements that call for running their environments in their own VPCs. In fact, direct internet access is often disabled in these environments to avoid exposure to any unintended traffic, both ingress and egress.

Amazon SageMaker JumpStart is an ML hub offering algorithms, models, and ML solutions. With SageMaker JumpStart, ML practitioners can choose from a growing list of best performing open source FMs. It also provides the ability to deploy these models in your own Virtual Private Cloud (VPC).

In this post, we demonstrate how to use JumpStart to deploy a Flan-T5 XXL model in a VPC with no internet connectivity. We discuss the following topics:

  • How to deploy a foundation model using SageMaker JumpStart in a VPC with no internet access
  • Advantages of deploying FMs via SageMaker JumpStart models in VPC mode
  • Alternate ways to customize deployment of foundation models via JumpStart

Apart from Flan-T5 XXL, JumpStart provides many other foundation models for various tasks. For the complete list, check out Getting started with Amazon SageMaker JumpStart.
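You can also enumerate the available models programmatically. The following is a minimal sketch assuming a recent version of the SageMaker Python SDK; the filter expression is an assumed example and can be adjusted or omitted.

from sagemaker.jumpstart.notebook_utils import list_jumpstart_models

# List every model ID currently available in SageMaker JumpStart.
all_models = list_jumpstart_models()
print(len(all_models))

# Narrow the list with a filter expression; "task == text2text" is an assumed
# task identifier for text-to-text generation models such as Flan-T5.
text2text_models = list_jumpstart_models(filter="task == text2text")
print(text2text_models[:5])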

Solution overview

As part of the solution, we cover the following steps:

  1. Set up a VPC with no internet connection.
  2. Set up Amazon SageMaker Studio using the VPC we created.
  3. Deploy the generative AI Flan T5-XXL foundation model using JumpStart in the VPC with no internet access.

The following is an architecture diagram of the solution.

sol-arch

Let’s walk through the different steps to implement this solution.

Prerequisites

To follow along with this post, you need the following:

Set up a VPC with no internet connection

Create a new CloudFormation stack by using the 01_networking.yaml template. This template creates a new VPC and adds two private subnets across two Availability Zones with no internet connectivity. It then deploys gateway VPC endpoints for accessing Amazon Simple Storage Service (Amazon S3) and interface VPC endpoints for SageMaker and a few other services to allow the resources in the VPC to connect to AWS services via AWS PrivateLink.

Provide a stack name, such as No-Internet, and complete the stack creation process.

vpc-stack-input

This solution is not highly available because the CloudFormation template creates interface VPC endpoints only in one subnet to reduce costs when following the steps in this post.

Set up Studio using the VPC

Create another CloudFormation stack using 02_sagemaker_studio.yaml, which creates a Studio domain, Studio user profile, and supporting resources like IAM roles. Choose a name for the stack; for this post, we use the name SageMaker-Studio-VPC-No-Internet. Provide the name of the VPC stack you created earlier (No-Internet) as the CoreNetworkingStackName parameter and leave everything else as default.

studio-cfn-stack-input

Wait until AWS CloudFormation reports that the stack creation is complete. You can confirm the Studio domain is available to use on the SageMaker console.

domain-home

To verify the Studio domain user has no internet access, launch Studio using the SageMaker console. Choose File, New, and Terminal, then attempt to access an internet resource. As shown in the following screenshot, the terminal will keep waiting for the resource and eventually time out.

studio-terminal

This proves that Studio is operating in a VPC that doesn’t have internet access.

Deploy the generative AI foundation model Flan T5-XXL using JumpStart

We can deploy this model via Studio as well as via API. JumpStart provides all the code to deploy the model via a SageMaker notebook accessible from within Studio. For this post, we showcase this capability from the Studio.

  • On the Studio welcome page, choose JumpStart under Prebuilt and automated solutions.

studio-welcome-page

  • Choose the Flan-T5 XXL model under Foundation Models.

js-model-hub

  • By default, it opens the Deploy tab. Expand the Deployment Configuration section to change the hosting instance and endpoint name, or add any additional tags. There is also an option to change the S3 bucket location where the model artifact will be stored for creating the endpoint. For this post, we leave everything at its default values. Make a note of the endpoint name to use while invoking the endpoint for making predictions.

deploy-js

  • Expand the Security Settings section, where you can specify the IAM role for creating the endpoint. You can also specify the VPC configurations by providing the subnets and security groups. The subnet IDs and security group IDs can be found from the VPC stack’s Outputs tab on the AWS CloudFormation console. SageMaker JumpStart requires at least two subnets as part of this configuration. The subnets and security groups control access to and from the model container.

js-deploy-security-settings

NOTE: Irrespective of whether the SageMaker JumpStart model is deployed in the VPC or not, the model always runs in network isolation mode, which isolates the model container so no inbound or outbound network calls can be made to or from the model container. Because we’re using a VPC, SageMaker downloads the model artifact through our specified VPC. Running the model container in network isolation doesn’t prevent your SageMaker endpoint from responding to inference requests. A server process runs alongside the model container and forwards inference requests to it, but the model container doesn’t have network access.

  • Choose Deploy to deploy the model. We can see the near-real-time status of the endpoint creation in progress. The endpoint creation may take 5–10 minutes to complete.

js-deploy-progress

Observe the value of the field Model data location on this page. All the SageMaker JumpStart models are hosted on a SageMaker managed S3 bucket (s3://jumpstart-cache-prod-{region}). Therefore, irrespective of which model is picked from JumpStart, the model gets deployed from the publicly accessible SageMaker JumpStart S3 bucket and the traffic never goes to the public model zoo APIs to download the model. This is why the model endpoint creation started successfully even when we’re creating the endpoint in a VPC that doesn’t have direct internet access.

The model artifact can also be copied to any private model zoo or your own S3 bucket to control and secure model source location further. You can use the following command to download the model locally using the AWS Command Line Interface (AWS CLI):

aws s3 cp s3://jumpstart-cache-prod-eu-west-1/huggingface-infer/prepack/v1.0.2/infer-prepack-huggingface-text2text-flan-t5-xxl.tar.gz .

  • After a few minutes, the endpoint gets created successfully and shows the status as In Service. Choose Open Notebook in the Use Endpoint from Studio section. This is a sample notebook provided as part of the JumpStart experience to quickly test the endpoint.

jumpstart-deploy-complete

  • In the notebook, choose the image as Data Science 3.0 and the kernel as Python 3. When the kernel is ready, you can run the notebook cells to make predictions on the endpoint. Note that the notebook uses the invoke_endpoint() API from the AWS SDK for Python to make predictions. Alternatively, you can use the SageMaker Python SDK’s predict() method to achieve the same result.

run-deploy-notebook-jumpstart

This concludes the steps to deploy the Flan-T5 XXL model using JumpStart within a VPC with no internet access.
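If you prefer to invoke the endpoint programmatically outside the sample notebook, the following is a minimal sketch using the AWS SDK for Python (Boto3). The endpoint name and the payload keys are assumptions based on the JumpStart Flan-T5 sample notebook; substitute the endpoint name you noted during deployment.

import json

import boto3

# Placeholder endpoint name; use the one you noted when deploying the model.
endpoint_name = "jumpstart-example-huggingface-text2text-flan-t5-xxl"

runtime = boto3.client("sagemaker-runtime")

payload = {
    "text_inputs": "Summarize: Amazon SageMaker JumpStart is an ML hub offering algorithms, models, and ML solutions.",
    "max_length": 50,
}

response = runtime.invoke_endpoint(
    EndpointName=endpoint_name,
    ContentType="application/json",
    Body=json.dumps(payload),
)

print(json.loads(response["Body"].read()))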

Advantages of deploying SageMaker JumpStart models in VPC mode

The following are some of the advantages of deploying SageMaker JumpStart models in VPC mode:

  • Because SageMaker JumpStart doesn’t download the models from a public model zoo, it can be used in fully locked-down environments with no internet access
  • Because network access to SageMaker JumpStart models can be limited and scoped down, teams can improve the security posture of the environment
  • Due to the VPC boundaries, access to the endpoint can also be limited via subnets and security groups, which adds an extra layer of security

Alternate ways to customize deployment of foundation models via SageMaker JumpStart

In this section, we share some alternate ways to deploy the model.

Use SageMaker JumpStart APIs from your preferred IDE

Models provided by SageMaker JumpStart don’t require you to access Studio. You can deploy them to SageMaker endpoints from any IDE, thanks to the JumpStart APIs. You could skip the Studio setup step discussed earlier in this post and use the JumpStart APIs to deploy the model. These APIs provide arguments where VPC configurations can be supplied as well. The APIs are part of the SageMaker Python SDK itself. For more information, refer to Pre-trained models.
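As a rough illustration, the following is a minimal sketch (assuming a recent version of the SageMaker Python SDK) of deploying the same Flan-T5 XXL model from any Python environment while passing the VPC configuration explicitly. The role ARN, subnet and security group IDs, instance type, and payload keys are placeholders; substitute the values from your own account and VPC stack outputs.

from sagemaker.jumpstart.model import JumpStartModel

# Placeholder values; take these from your VPC stack outputs and IAM setup.
vpc_config = {
    "Subnets": ["subnet-0aaaaaaaaaaaaaaaa", "subnet-0bbbbbbbbbbbbbbbb"],
    "SecurityGroupIds": ["sg-0cccccccccccccccc"],
}

model = JumpStartModel(
    model_id="huggingface-text2text-flan-t5-xxl",
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",  # placeholder execution role
    vpc_config=vpc_config,  # deploy the endpoint inside your VPC
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.12xlarge",  # assumed instance type; pick one available in your account
)

# Payload keys follow the JumpStart text2text schema used in the sample notebook.
print(predictor.predict({"text_inputs": "Translate English to German: Hello, world."}))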

Use notebooks provided by SageMaker JumpStart from SageMaker Studio

SageMaker JumpStart also provides notebooks to deploy the model directly. On the model detail page, choose Open notebook to open a sample notebook containing the code to deploy the endpoint. The notebook uses SageMaker JumpStart Industry APIs that allow you to list and filter the models, retrieve the artifacts, and deploy and query the endpoints. You can also edit the notebook code per your use case-specific requirements.

open-jumpstart-notebook

Clean up resources

Check out the CLEANUP.md file to find detailed steps to delete the Studio, VPC, and other resources created as part of this post.

Troubleshooting

If you encounter any issues in creating the CloudFormation stacks, refer to Troubleshooting CloudFormation.

Conclusion

Generative AI powered by large language models is changing how people acquire and apply insights from information. However, organizations operating in heavily regulated spaces need to use generative AI capabilities in a way that allows them to innovate faster while also simplifying the access patterns to such capabilities.

We encourage you to try out the approach provided in this post to embed generative AI capabilities in your existing environment while still keeping it inside your own VPC with no internet access. For further reading on SageMaker JumpStart foundation models, check out the following:


About the authors

Vikesh Pandey is a Machine Learning Specialist Solutions Architect at AWS, helping customers from financial industries design and build solutions on generative AI and ML. Outside of work, Vikesh enjoys trying out different cuisines and playing outdoor sports.

Mehran Nikoo is a Senior Solutions Architect at AWS, working with Digital Native businesses in the UK and helping them achieve their goals. Passionate about applying his software engineering experience to machine learning, he specializes in end-to-end machine learning and MLOps practices.

Read More

What's new in TensorFlow 2.13 and Keras 2.13?

Posted by the TensorFlow and Keras Teams

TensorFlow 2.13 and Keras 2.13 have been released! Highlights of this release include Apple Silicon wheels, the new Keras V3 format becoming the default for .keras files, and much more!

TensorFlow Core

Apple Silicon wheels for TensorFlow

TensorFlow 2.13 is the first version to provide Apple Silicon wheels, which means when you install TensorFlow on an Apple Silicon Mac, you will be able to use the latest version of TensorFlow. The nightly builds for Apple Silicon wheels were released in March 2023 and this new support will enable more fine-grained testing, thanks to technical collaboration between Apple, MacStadium, and Google.

tf.lite

The Python TensorFlow Lite Interpreter bindings now have an experimental_disable_delegate_clustering flag to turn off delegate clustering during the delegate graph partitioning phase. You can set this flag in the TensorFlow Lite Interpreter Python API:

interpreter = tf.lite.Interpreter(model_path=file_of_a_tensorflowlite_model, experimental_disable_delegate_clustering=True)

The flag is set to False by default. This is an advanced experimental feature designed for users who insert explicit control dependencies via with tf.control_dependencies() or need to change graph execution order.

In addition, there are several operator improvements in TensorFlow Lite 2.13:

  • The add operation now supports broadcasting up to 6 dimensions. This will remove explicit broadcast ops from many models. The new implementation is also much faster than the previous one, which calculated the entire index for both inputs instead of only calculating the part that changes.
  • Improve the coverage for 16×8 quantization by enabling int16x8 ops for exp, mirror_pad, space_to_batch_nd, batch_to_space_nd
  • Increase the coverage of integer data types
    • enabled int16 for less, greater_than, equal, bitcast, bitwise_xor, right_shift, top_k, mul, and int16 indices for gather and gather_nd
    • enabled int8 for floor_div, floor_mod, and bitwise_xor
    • enabled 32-bit int for bitcast, bitwise_xor, right_shift

tf.data

We have improved usability and added functionality for tf.data APIs.

tf.data.Dataset.zip now supports Python-style zipping. Previously, users were required to provide an extra set of parentheses when zipping datasets, as in Dataset.zip((a, b, c)). With this change, users can specify the datasets to be zipped simply as Dataset.zip(a, b, c), making it more intuitive.
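As a quick, minimal sketch of the new call style (not taken from the release notes):

import tensorflow as tf

a = tf.data.Dataset.range(3)      # 0, 1, 2
b = tf.data.Dataset.range(3, 6)   # 3, 4, 5
c = tf.data.Dataset.range(6, 9)   # 6, 7, 8

# New in 2.13: Python-style zipping, no extra tuple required.
ds = tf.data.Dataset.zip(a, b, c)
print(list(ds.as_numpy_iterator()))  # [(0, 3, 6), (1, 4, 7), (2, 5, 8)]

# The previous form still works.
ds_legacy = tf.data.Dataset.zip((a, b, c))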

Additionally, tf.data.Dataset.shuffle now supports full shuffling. To specify that data should be fully shuffled, use dataset = dataset.shuffle(dataset.cardinality()). This will load the full dataset into memory so that it can be shuffled, so make sure to only use this with datasets of filenames or other small datasets.

We have also added a new tf.data.experimental.pad_to_cardinality transformation which pads a dataset with zero elements up to a specified cardinality. This is useful for avoiding partial batches while not dropping any data.

Example usage:

ds = tf.data.Dataset.from_tensor_slices({'a': [1, 2]})
ds = ds.apply(tf.data.experimental.pad_to_cardinality(3))
list(ds.as_numpy_iterator())
[{'a': 1, 'valid': True}, {'a': 2, 'valid': True}, {'a': 0, 'valid': False}]

This can be useful, e.g. during eval, when partial batches are undesirable but it is also important not to drop any data.

oneDNN BF16 Math Mode on CPU

oneDNN supports BF16 math mode where full FP32 tensors are implicitly down-converted to BF16 during computations for faster execution time. TensorFlow CPU users can enable this by setting the environment variable TF_SET_ONEDNN_FPMATH_MODE to BF16. This mode may negatively impact model accuracy. To go back to full FP32 math mode, unset the variable.
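For example, here is a minimal sketch of enabling the mode from Python; the variable can equally be exported in your shell, and it must be set before TensorFlow initializes.

import os

# Set before importing/initializing TensorFlow.
# Shell alternative: export TF_SET_ONEDNN_FPMATH_MODE=BF16
os.environ["TF_SET_ONEDNN_FPMATH_MODE"] = "BF16"

import tensorflow as tf  # imported after setting the variable on purpose

# Build and run your model as usual; oneDNN down-converts FP32 math to BF16 internally.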

Keras

Keras Saving format

The new Keras V3 saving format, released in TF 2.12, is now the default for all files with the .keras extension.

You can start using it now by calling model.save("your_model.keras").

It provides richer Python-side model saving and reloading with numerous advantages:

  • A lightweight, faster format.
  • Human-readable: The new format is name-based, with a more detailed serialization format that makes debugging much easier. What you load is exactly what you saved, from Python’s perspective.
  • Safer: Unlike SavedModel, there is no reliance on loading via bytecode or pickling – a big advancement for secure ML, as pickle files can be exploited to cause arbitrary code execution at loading time.
  • More general: Support for non-numerical states, such as vocabularies and lookup tables, is included in the new format.
  • Extensible: You can add support for saving and loading exotic state elements in custom layers using save_assets(), such as a FIFOQueue – or anything else you want. You have full control of disk I/O for custom assets.

The legacy formats (h5 and Keras SavedModel) will stay supported in perpetuity. However, we recommend that you consider adopting the new Keras v3 format for saving/reloading in Python runtimes, and using model.export() for inference in all other runtimes (such as TF Serving).
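As a small, hypothetical end-to-end sketch of the recommended workflow (the model, file names, and export path are placeholders):

import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(np.random.rand(32, 4), np.random.rand(32, 1), epochs=1, verbose=0)

# Save in the new Keras V3 format (the default for the .keras extension in 2.13).
model.save("my_model.keras")

# Reload it later; from Python's perspective, what you load is exactly what you saved.
restored = tf.keras.models.load_model("my_model.keras")

# For inference in other runtimes (such as TF Serving), export a SavedModel artifact.
model.export("my_model_serving")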

Read More

NVIDIA DGX Cloud Now Available to Supercharge Generative AI Training

NVIDIA DGX Cloud — which delivers tools that can turn nearly any company into an AI company — is now broadly available, with thousands of NVIDIA GPUs online on Oracle Cloud Infrastructure, as well as NVIDIA infrastructure located in the U.S. and U.K.

Unveiled at NVIDIA’s GTC conference in March, DGX Cloud is an AI supercomputing service that gives enterprises immediate access to the infrastructure and software needed to train advanced models for generative AI and other groundbreaking applications.

“Generative AI has made the rapid adoption of AI a business imperative for leading companies in every industry, driving many enterprises to seek more accelerated computing infrastructure,” said Pat Moorhead, chief analyst at Moor Insights & Strategy.

Generative AI could add more than $4 trillion to the economy annually, turning proprietary business knowledge across a vast swath of the world’s industries into next-generation AI applications, according to recent estimates by global management consultancy McKinsey.

Industry Pioneers Transforming Business With Generative AI

Nearly every industry can benefit from generative AI, with early pioneers already leading transformative change across their markets.

Healthcare companies use DGX Cloud to train protein models to speed drug discovery and clinical reporting with natural language processing.

Financial service providers use DGX Cloud to forecast trends, optimize portfolios, build recommender systems and develop intelligent generative AI chatbots.

Insurance companies are building models to automate claims processing.

Software companies are using it to develop AI-powered features and applications.

And others are using DGX Cloud to build AI factories and digital twins of valuable assets.

Dedicated AI Supercomputing With Immediate Availability

DGX Cloud instances provide dedicated infrastructure that enterprises rent on a monthly basis, ensuring customers can quickly and easily develop large, multi-node training workloads without having to wait for accelerated computing resources that are often in high demand.

“The availability of NVIDIA DGX Cloud provides a new pool of AI supercomputing resources, with nearly instantaneous access,” Moorhead said.

This simple approach to AI supercomputing removes the complexity of acquiring, deploying and managing on-premises infrastructure. Providing NVIDIA DGX AI supercomputing paired with NVIDIA AI Enterprise software, DGX Cloud makes it possible for businesses everywhere to access their own AI supercomputer using a web browser.

NVIDIA AI Supercomputing and Software in a Browser

Each instance of DGX Cloud features eight NVIDIA 80GB Tensor Core GPUs for 640GB of GPU memory per node. A high-performance, low-latency fabric ensures workloads can scale across clusters of interconnected systems, allowing multiple instances to act as one massive GPU. High-performance storage is integrated into DGX Cloud to provide a complete solution.

Enterprises manage and monitor DGX Cloud training workloads using NVIDIA Base Command Platform software. The platform provides a seamless user experience across DGX Cloud and on-premises NVIDIA DGX supercomputers, so enterprises can combine resources when needed.

And DGX Cloud includes NVIDIA AI Enterprise, the software layer of the NVIDIA AI platform, which provides over 100 end-to-end AI frameworks and pretrained models to accelerate data science pipelines and streamline the development and deployment of production AI.

Learn more about how to get started with DGX Cloud.

Read More