The Shift from Models to Compound AI Systems

AI caught everyone’s attention in 2023 with Large Language Models (LLMs) that can be instructed to perform general tasks, such as translation or coding, just by prompting. This naturally led to an intense focus on models as the primary ingredient in AI application development, with everyone wondering what capabilities new LLMs will bring.
As more developers begin to build using LLMs, however, we believe that this focus is rapidly changing: state-of-the-art AI results are increasingly obtained by compound systems with multiple components, not just monolithic models.

For example, Google’s AlphaCode 2 set state-of-the-art results in programming through a carefully engineered system that uses LLMs to generate up to 1 million possible solutions for a task and then filter down the set. AlphaGeometry, likewise, combines an LLM with a traditional symbolic solver to tackle olympiad problems. In enterprises, our colleagues at Databricks found that 60% of LLM applications use some form of retrieval-augmented generation (RAG), and 30% use multi-step chains.
Even researchers working on traditional language model tasks, who used to report results from a single LLM call, are now reporting results from increasingly complex inference strategies: Microsoft wrote about a chaining strategy that exceeded GPT-4’s accuracy on medical exams by 9%, and Google’s Gemini launch post measured its MMLU benchmark results using a new CoT@32 inference strategy that calls the model 32 times, which raised questions about its comparison to just a single call to GPT-4. This shift to compound systems opens many interesting design questions, but it is also exciting, because it means leading AI results can be achieved through clever engineering, not just scaling up training.
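
To make this concrete, a multi-call inference strategy can be as simple as sampling the model several times and aggregating the answers. The following is a minimal, hypothetical sketch in Python (the generate function is a stand-in for any LLM call; this is an illustration of the idea, not the actual CoT@32 or AlphaCode 2 implementation):

from collections import Counter

def sample_and_vote(prompt, generate, n_calls=32):
    # Call the model n_calls times and return the most frequent answer.
    # Majority voting is one simple way a compound system turns many
    # model calls into a single, more reliable result.
    answers = [generate(prompt) for _ in range(n_calls)]
    return Counter(answers).most_common(1)[0][0]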

In this post, we analyze the trend toward compound AI systems and what it means for AI developers. Why are developers building compound systems? Is this paradigm here to stay as models improve? And what are the emerging tools for developing and optimizing such systems—an area that has received far less research than model training? We argue that compound AI systems will likely be the best way to maximize AI results in the future, and might be one of the most impactful trends in AI in 2024.


AI’s Hottest Ticket: NVIDIA GTC Brings Together Automotive Leaders and Visionaries Transforming the Future of Transportation

Generative AI and software-defined computing are transforming the automotive landscape — making the journey behind the wheel safer, smarter and more enjoyable.

Dozens of automakers and NVIDIA DRIVE ecosystem partners will be demonstrating their developments in mobility, along with showcasing their next-gen vehicles at GTC, the conference for the era of AI, running from March 18-21 in San Jose, Calif., and online. These include the Mercedes-Benz Concept CLA Class, the new Volvo EX90, Polestar 3, WeRide Robobus, Nuro R3 autonomous delivery vehicle and more.

Explore myriad sessions to learn about the latest developments in mobility — from highly automated and autonomous driving, generative AI and large language models to simulation, safety, design and manufacturing.

Featured sessions include:

Rounding out the week will be DRIVE Developer Day on Thursday, March 21 — featuring a series of deep-dive sessions on how to build safe and robust self-driving systems. Led by NVIDIA’s engineering experts, these talks will highlight the latest DRIVE features and developments.

Find additional details on automotive-specific programming at GTC here.

Don’t stall — register today to learn how generative AI and software-defined computing are transforming the auto industry.

Code Llama 70B is now available in Amazon SageMaker JumpStart

Today, we are excited to announce that Code Llama foundation models, developed by Meta, are available for customers through Amazon SageMaker JumpStart to deploy with one click for running inference. Code Llama is a state-of-the-art large language model (LLM) capable of generating code and natural language about code from both code and natural language prompts. You can try out this model with SageMaker JumpStart, a machine learning (ML) hub that provides access to algorithms, models, and ML solutions so you can quickly get started with ML. In this post, we walk through how to discover and deploy the Code Llama model via SageMaker JumpStart.

Code Llama

Code Llama is a model released by Meta that is built on top of Llama 2. This state-of-the-art model is designed to improve productivity for programming tasks for developers by helping them create high-quality, well-documented code. The models excel in Python, C++, Java, PHP, C#, TypeScript, and Bash, and have the potential to save developers’ time and make software workflows more efficient.

It comes in three variants, engineered to cover a wide variety of applications: the foundational model (Code Llama), a Python specialized model (Code Llama Python), and an instruction-following model for understanding natural language instructions (Code Llama Instruct). All Code Llama variants come in four sizes: 7B, 13B, 34B, and 70B parameters. The 7B and 13B base and instruct variants support infilling based on surrounding content, making them ideal for code assistant applications. The models were designed using Llama 2 as the base and then trained on 500 billion tokens of code data, with the Python specialized version trained on an incremental 100 billion tokens. The Code Llama models provide stable generations with up to 100,000 tokens of context. All models are trained on sequences of 16,000 tokens and show improvements on inputs with up to 100,000 tokens.

The model is made available under the same community license as Llama 2.

Foundation models in SageMaker

SageMaker JumpStart provides access to a range of models from popular model hubs, including Hugging Face, PyTorch Hub, and TensorFlow Hub, which you can use within your ML development workflow in SageMaker. Recent advances in ML have given rise to a new class of models known as foundation models, which typically contain billions of parameters and are adaptable to a wide category of use cases, such as text summarization, digital art generation, and language translation. Because these models are expensive to train, customers want to use existing pre-trained foundation models and fine-tune them as needed, rather than train these models themselves. SageMaker provides a curated list of models that you can choose from on the SageMaker console.

You can find foundation models from different model providers within SageMaker JumpStart, enabling you to get started with foundation models quickly. You can find foundation models based on different tasks or model providers, and easily review model characteristics and usage terms. You can also try out these models using a test UI widget. When you want to use a foundation model at scale, you can do so without leaving SageMaker by using pre-built notebooks from model providers. Because the models are hosted and deployed on AWS, you can rest assured that your data, whether used for evaluating or using the model at scale, is never shared with third parties.

Discover the Code Llama model in SageMaker JumpStart

To deploy the Code Llama 70B model, complete the following steps in Amazon SageMaker Studio:

  1. On the SageMaker Studio home page, choose JumpStart in the navigation pane.

  2. Search for Code Llama models and choose the Code Llama 70B model from the list of models shown.

    You can find more information about the model on the Code Llama 70B model card.

    The following screenshot shows the endpoint settings. You can change the options or use the default ones.

  3. Accept the End User License Agreement (EULA) and choose Deploy.

    This will start the endpoint deployment process, as shown in the following screenshot.

Deploy the model with the SageMaker Python SDK

Alternatively, you can deploy through the example notebook by choosing Open Notebook within the model detail page of Classic Studio. The example notebook provides end-to-end guidance on how to deploy the model for inference and clean up resources.

To deploy using the notebook, we start by selecting an appropriate model, specified by the model_id. You can deploy any of the selected models on SageMaker with the following code:

from sagemaker.jumpstart.model import JumpStartModel

model = JumpStartModel(model_id="meta-textgeneration-llama-codellama-70b")
predictor = model.deploy(accept_eula=False)  # Change EULA acceptance to True

This deploys the model on SageMaker with default configurations, including default instance type and default VPC configurations. You can change these configurations by specifying non-default values in JumpStartModel. Note that by default, accept_eula is set to False. You need to set accept_eula=True to deploy the endpoint successfully. By doing so, you accept the user license agreement and acceptable use policy as mentioned earlier. You can also download the license agreement.
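
For example, the following is a hedged sketch of passing non-default values when creating the model (the instance type shown is only an illustration; choose one that is supported for this model in your account and Region):

from sagemaker.jumpstart.model import JumpStartModel

# Example only: override defaults such as the instance type when creating the model
model = JumpStartModel(
    model_id="meta-textgeneration-llama-codellama-70b",
    instance_type="ml.g5.48xlarge",  # hypothetical choice, adjust to your needs
)
predictor = model.deploy(accept_eula=True)  # accepting the EULA as described earlier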

Invoke a SageMaker endpoint

After the endpoint is deployed, you can carry out inference by using Boto3 or the SageMaker Python SDK. In the following code, we use the SageMaker Python SDK to call the model for inference and print the response:

def print_response(payload, response):
    # Print the prompt followed by the generated text
    print(payload["inputs"])
    print(f"> {response[0]['generated_text']}")
    print("\n==================================\n")

The function print_response takes the request payload and the model response and prints the output. Code Llama supports many parameters while performing inference:

  • max_length – The model generates text until the output length (which includes the input context length) reaches max_length. If specified, it must be a positive integer.
  • max_new_tokens – The model generates text until the output length (excluding the input context length) reaches max_new_tokens. If specified, it must be a positive integer.
  • num_beams – This specifies the number of beams used in beam search. If specified, it must be an integer greater than or equal to num_return_sequences.
  • no_repeat_ngram_size – The model ensures that a sequence of no_repeat_ngram_size words is not repeated in the output sequence. If specified, it must be a positive integer greater than 1.
  • temperature – This controls the randomness in the output. Higher temperature results in an output sequence with low-probability words, and lower temperature results in an output sequence with high-probability words. If temperature is 0, it results in greedy decoding. If specified, it must be a positive float.
  • early_stopping – If True, text generation is finished when all beam hypotheses reach the end of sentence token. If specified, it must be Boolean.
  • do_sample – If True, the model samples the next word as per the likelihood. If specified, it must be Boolean.
  • top_k – In each step of text generation, the model samples from only the top_k most likely words. If specified, it must be a positive integer.
  • top_p – In each step of text generation, the model samples from the smallest possible set of words with cumulative probability top_p. If specified, it must be a float between 0 and 1.
  • return_full_text – If True, the input text will be part of the output generated text. If specified, it must be Boolean. The default value for it is False.
  • stop – If specified, it must be a list of strings. Text generation stops if any one of the specified strings is generated.

You can specify any subset of these parameters while invoking an endpoint. Next, we show an example of how to invoke an endpoint with these arguments.
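
As mentioned earlier, you can also invoke the endpoint directly with Boto3 instead of the SageMaker Python SDK. The following is a minimal sketch that assumes the endpoint deployed above and the same JSON payload format (depending on the model version, additional attributes such as EULA acceptance may be required):

import json

import boto3

sm_runtime = boto3.client("sagemaker-runtime")

payload = {
    "inputs": "def fibonacci(n):",
    "parameters": {"max_new_tokens": 64, "temperature": 0.2, "top_p": 0.9},
}

# predictor.endpoint_name holds the name of the endpoint deployed earlier
response = sm_runtime.invoke_endpoint(
    EndpointName=predictor.endpoint_name,
    ContentType="application/json",
    Body=json.dumps(payload),
)
print(json.loads(response["Body"].read()))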

Code completion

The following examples demonstrate how to perform code completion where the expected endpoint response is the natural continuation of the prompt.

We first run the following code:

prompt = """
import socket

def ping_exponential_backoff(host: str):
"""

payload = {
    "inputs": prompt,
    "parameters": {"max_new_tokens": 256, "temperature": 0.2, "top_p": 0.9},
}
response = predictor.predict(payload)
print_response(payload, response)

We get the following output:

"""
    Pings the given host with exponential backoff.
    """
    timeout = 1
    while True:
        try:
            socket.create_connection((host, 80), timeout=timeout)
            return
        except socket.error:
            timeout *= 2

For our next example, we run the following code:

prompt = """
import argparse
def main(string: str):
    print(string)
    print(string[::-1])
if __name__ == "__main__":
"""

payload = {
    "inputs": prompt,
    "parameters": {"max_new_tokens": 256, "temperature": 0.2, "top_p": 0.9},
}
predictor.predict(payload)

We get the following output:

parser = argparse.ArgumentParser(description='Reverse a string')
    parser.add_argument('string', type=str, help='String to reverse')
    args = parser.parse_args()
    main(args.string)

Code generation

The following examples show Python code generation using Code Llama.

We first run the following code:

prompt = """
Write a python function to traverse a list in reverse.
"""

payload = {
    "inputs": prompt,
    "parameters": {"max_new_tokens": 256, "temperature": 0.2, "top_p": 0.9},
}
response = predictor.predict(payload)
print_response(payload, response)

We get the following output:

def reverse(list1):
    for i in range(len(list1)-1,-1,-1):
        print(list1[i])

list1 = [1,2,3,4,5]
reverse(list1)

For our next example, we run the following code:

prompt = """
Write a python function to carry out bubble sort.
"""

payload = {
    "inputs": prompt,
    "parameters": {"max_new_tokens": 256, "temperature": 0.1, "top_p": 0.9},
}
response = predictor.predict(payload)
print_response(payload, response)

We get the following output:

def bubble_sort(arr):
    n = len(arr)
    for i in range(n):
        for j in range(0, n-i-1):
            if arr[j] > arr[j+1]:
                arr[j], arr[j+1] = arr[j+1], arr[j]
    return arr

arr = [64, 34, 25, 12, 22, 11, 90]
print(bubble_sort(arr))

These are some of the examples of code-related tasks using Code Llama 70B. You can use the model to generate even more complicated code. We encourage you to try it using your own code-related use cases and examples!

Clean up

After you have tested the endpoints, make sure you delete the SageMaker inference endpoints and the model to avoid incurring charges. Use the following code:

predictor.delete_endpoint()
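
The model created by JumpStart also persists as a SageMaker model resource; a small, optional addition (based on the SageMaker Python SDK Predictor API) removes it as well:

predictor.delete_model()  # also remove the model resource created during deployment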

Conclusion

In this post, we introduced Code Llama 70B on SageMaker JumpStart. Code Llama 70B is a state-of-the-art model for generating code from natural language prompts as well as code. You can deploy the model with a few simple steps in SageMaker JumpStart and then use it to carry out code-related tasks such as code generation and code infilling. As a next step, try using the model with your own code-related use cases and data.


About the authors

Dr. Kyle Ulrich is an Applied Scientist with the Amazon SageMaker JumpStart team. His research interests include scalable machine learning algorithms, computer vision, time series, Bayesian non-parametrics, and Gaussian processes. His PhD is from Duke University and he has published papers in NeurIPS, Cell, and Neuron.

Dr. Farooq Sabir is a Senior Artificial Intelligence and Machine Learning Specialist Solutions Architect at AWS. He holds PhD and MS degrees in Electrical Engineering from the University of Texas at Austin and an MS in Computer Science from Georgia Institute of Technology. He has over 15 years of work experience and also likes to teach and mentor college students. At AWS, he helps customers formulate and solve their business problems in data science, machine learning, computer vision, artificial intelligence, numerical optimization, and related domains. Based in Dallas, Texas, he and his family love to travel and go on long road trips.

June Won is a product manager with SageMaker JumpStart. He focuses on making foundation models easily discoverable and usable to help customers build generative AI applications. His experience at Amazon also includes mobile shopping application and last mile delivery.

Telco GPT: Survey Shows Scale of Industry’s Enthusiasm and Adoption of Generative AI

It’s been five years since the telecommunications industry first deployed 5G networks to drive new performance levels for customers and unlock new value for telcos.

But that industry milestone has been overshadowed by the emergence of generative AI and the swift pace at which telcos are embracing large language models as they seek to transform all parts of their business.

A recent survey of more than 400 telecommunications industry professionals from around the world showed that generative AI is the breakout technology of the year and that enthusiasm and adoption for both generative AI, and AI in general, is booming. In addition, the survey showed that, among respondents, AI is improving both revenues and cost savings.

The generative AI insight is the main highlight in the second edition of NVIDIA’s “State of AI in Telecommunications” survey, which included questions covering a range of AI topics, including infrastructure spending, top use cases, biggest challenges and deployment models.

Survey respondents included C-suite leaders, managers, developers and IT architects from mobile telecoms, fixed and cable companies. The survey was conducted over eight weeks between October and December.

Ramping Up on Generative AI

The survey results show how generative AI went from relative obscurity in 2022 to a key solution within a year. Forty-three percent of respondents reported they were investing in it, showing clear evidence that the telecom industry is enthusiastically embracing the generative AI wave to address a wide variety of business goals.

More broadly, there was a marked increase in interest in adopting AI and growing expectations of success from the technology, especially among industry executives. In the survey, 53% of respondents agreed or strongly agreed that adopting AI will be a source of competitive advantage, compared to 39% who reported the same in 2022. For management respondents, the figure was 56%.

The primary reason for this sustained engagement is because many industry stakeholders expect AI to contribute to their company’s success. Overall, 56% of respondents agreed or strongly agreed that “AI is important to my company’s future success,” with the figure rising to 61% among decision-making management respondents. The overall figure is a 14-point boost over the 42% result from the 2022 survey.

Customer Experience Remains Key Driver of AI Investment 

Telcos are adopting AI and generative AI to address a wide variety of business needs. Overall, 31% of respondents said they invested in at least six AI use cases in 2023, while 40% are planning to scale to six or more use cases in 2024.

But enhancing customer experiences remains the biggest AI opportunity for the telecom industry, with 48% of survey respondents selecting it as their main goal for using the technology. Likewise, some 35% of respondents identified customer experiences as their key AI success story.

For generative AI, 57% are using it to improve customer service and support, 57% to improve employee productivity, 48% for network operations and management, 40% for network planning and design, and 32% for marketing content generation.

Early Phase of AI Investment Cycle

The focus on customer experience is influencing investments. Investing in customer-experience optimization remains the most popular AI use case for 2023 (49% of respondents) and for generative AI investments (57% of respondents).

Telcos are also investing in other AI use cases: security (42%), network predictive maintenance (37%), network planning and operations (34%) and field operations (34%) are notable examples. However, using AI for fraud detection in transactions and payments had the biggest jump in popularity between 2022 and 2023, rising 14 points to 28% of respondents.

Overall, investments in AI are still in an early phase of the investment cycle, although growing strongly. In the survey, 43% of respondents reported an investment of over $1 million in AI in their previous year, 52% reported the same for the current year, and 66% reported their budget for AI infrastructure will increase in the next year.

For those who are already investing in AI, 67% reported that AI adoption has helped them increase revenues, with 19% of respondents noting that this revenue growth is more than 10% in specific business areas. Likewise, 63% reported that AI adoption has helped them reduce costs in specific business areas, with 14% noting that this cost reduction is more than 10%.

Innovation With Partners

While telcos are increasing their investments to improve their internal AI capabilities, partnerships remain critical for the adoption of AI solutions in the industry. This is applicable both for AI models and AI hardware infrastructure.

In the survey, 44% of respondents reported that co-development with partners is their company’s preferred approach to building AI solutions. Some 28% of respondents prefer to use open-source tools, while 25% take an AI-as-a-service approach. For generative AI, 29% of respondents built or customized models with a partner, an understandable conservative approach for the telecom industry with its stringent data protection rules.

For infrastructure, increasingly, many telcos are opting for cloud hosting, although the hybrid model still remains dominant. In the survey, 31% of respondents reported that they run most of their AI workloads in the cloud (44% for hybrid), compared to 21% of respondents in the previous survey (56% for hybrid). This is helping to fuel the growing need for more localized cloud infrastructure.

Download the “State of AI in Telecommunications: 2024 Trends” report for in-depth results and insights.

Explore how AI is transforming telecommunications at NVIDIA GTC, featuring industry leaders including Amdocs, Indosat, KT, Samsung Research, ServiceNow, Singtel, SoftBank, Telconet and Verizon.

Learn more about NVIDIA solutions for telecommunications across customer experience, network operations, sovereign AI factories and more.

Detect anomalies in manufacturing data using Amazon SageMaker Canvas

With the use of cloud computing, big data and machine learning (ML) tools like Amazon Athena or Amazon SageMaker have become available and usable by anyone without much effort to set up and maintain. Industrial companies increasingly look at data analytics and data-driven decision-making to increase resource efficiency across their entire portfolio, from operations to performing predictive maintenance or planning.

Due to the velocity of change in IT, customers in traditional industries are facing a dilemma of skillset. On the one hand, analysts and domain experts have a very deep knowledge of the data in question and its interpretation, yet often lack the exposure to data science tooling and high-level programming languages such as Python. On the other hand, data science experts often lack the experience to interpret the machine data content and filter it for what is relevant. This dilemma hampers the creation of efficient models that use data to generate business-relevant insights.

Amazon SageMaker Canvas addresses this dilemma by providing domain experts a no-code interface to create powerful analytics and ML models, such as forecasts, classification, or regression models. It also allows you to deploy and share these models with ML and MLOps specialists after creation.

In this post, we show you how to use SageMaker Canvas to curate and select the right features in your data, and then train a prediction model for anomaly detection, using the no-code functionality of SageMaker Canvas for model tuning.

Anomaly detection for the manufacturing industry

At the time of writing, SageMaker Canvas focuses on typical business use cases, such as forecasting, regression, and classification. For this post, we demonstrate how these capabilities can also help detect complex abnormal data points. This use case is relevant, for instance, to pinpoint malfunctions or unusual operations of industrial machines.

Anomaly detection is important in the industry domain, because machines (from trains to turbines) are normally very reliable, with times between failures spanning years. Most data from these machines, such as temperature sensor readings or status messages, describes the normal operation and has limited value for decision-making. Engineers look for abnormal data when investigating root causes for a fault or as warning indicators for future faults, and performance managers examine abnormal data to identify potential improvements. Therefore, the typical first step in moving towards data-driven decision-making relies on finding that relevant (abnormal) data.

In this post, we use SageMaker Canvas to curate and select the right features in data, and then train a prediction model for anomaly detection, using SageMaker Canvas no-code functionality for model tuning. Then we deploy the model as a SageMaker endpoint.

Solution overview

For our anomaly detection use case, we train a prediction model to predict a characteristic feature for the normal operation of a machine, such as the motor temperature indicated in a car, from influencing features, such as the speed and recent torque applied in the car. For anomaly detection on a new sample of measurements, we compare the model predictions for the characteristic feature with the observations provided.

For the example of the car motor, a domain expert obtains measurements of the normal motor temperature, recent motor torque, ambient temperature, and other potential influencing factors. These allow you to train a model to predict the temperature from the other features. Then we can use the model to predict the motor temperature on a regular basis. When the predicted temperature for that data is similar to the observed temperature in that data, the motor is working normally; a discrepancy will point to an anomaly, such as the cooling system failing or a defect in the motor.
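
Conceptually, the anomaly check then reduces to comparing each observation with the corresponding prediction. The following is a minimal sketch of that comparison (the threshold is an assumption that you tune for the use case; a concrete choice is discussed later in this post):

def is_anomaly(observed_temp, predicted_temp, threshold=7.4):
    # Flag a sample as anomalous when the observation deviates
    # from the model prediction by more than the threshold (degrees Celsius)
    return abs(observed_temp - predicted_temp) > threshold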

The following diagram illustrates the solution architecture.

Overview of the process: A model is created in SageMaker Canvas, deployed, and then accessed from an AWS Lambda function.

The solution consists of four key steps:

  1. The domain expert creates the initial model, including data analysis and feature curation using SageMaker Canvas.
  2. The domain expert shares the model via the Amazon SageMaker Model Registry or deploys it directly as a real-time endpoint.
  3. An MLOps expert creates the inference infrastructure and code translating the model output from a prediction into an anomaly indicator. This code typically runs inside an AWS Lambda function.
  4. When an application requires anomaly detection, it calls the Lambda function, which uses the model for inference and provides the response (whether or not it’s an anomaly).

Prerequisites

To follow along with this post, you must meet the following prerequisites:

Create the model using SageMaker

The model creation process follows the standard steps to create a regression model in SageMaker Canvas. For more information, refer to Getting started with using Amazon SageMaker Canvas.

First, the domain expert loads relevant data into SageMaker Canvas, such as a time series of measurements. For this post, we use a CSV file containing the (synthetically generated) measurements of an electrical motor. For details, refer to Import data into Canvas. The sample data used is available for download as a CSV.

A picture showing the first lines of the CSV. In addition, a histogram and benchmark metrics are shown for a quick-preview model.

Curate the data with SageMaker Canvas

After the data is loaded, the domain expert can use SageMaker Canvas to curate the data used in the final model. For this, the expert selects those columns that contain characteristic measurements for the problem in question. More precisely, the expert selects columns that are related to each other, for instance, by a physical relationship such as a pressure-temperature curve, and where a change in that relationship is a relevant anomaly for their use case. The anomaly detection model will learn the normal relationship between the selected columns and indicate when data doesn’t conform to it, such as an abnormally high motor temperature given the current load on the motor.

In practice, the domain expert needs to select a set of suitable input columns and a target column. The inputs are typically the collection of quantities (numeric or categorical) that determine a machine’s behavior, from demand settings, to load, speed, or ambient temperature. The output is typically a numeric quantity that indicates the performance of the machine’s operation, such as a temperature measuring energy dissipation or another performance metric changing when the machine runs under suboptimal conditions.

To illustrate the concept of what quantities to select for input and output, let’s consider a few examples:

  • For rotating equipment, such as the model we build in this post, typical inputs are the rotation speed, torque (current and history), and ambient temperature, and the targets are the resulting bearing or motor temperatures indicating good operational conditions of the rotations
  • For a wind turbine, typical inputs are the current and recent history of wind speed and rotor blade settings, and the target quantity is the produced power or rotational speed
  • For a chemical process, typical inputs are the percentage of different ingredients and the ambient temperature, and targets are the heat produced or the viscosity of the end product
  • For moving equipment such as sliding doors, typical inputs are the power input to the motors, and the target value is the speed or completion time for the movement
  • For an HVAC system, typical inputs are the achieved temperature difference and load settings, and the target quantity is the energy consumption measured

Ultimately, the right inputs and targets for a given equipment will depend on the use case and anomalous behavior to detect, and are best known to a domain expert who is familiar with the intricacies of the specific dataset.

In most cases, selecting suitable input and target quantities means selecting the right columns only and marking the target column (for this example, bearing_temperature). However, a domain expert can also use the no-code features of SageMaker Canvas to transform columns and refine or aggregate the data. For instance, you can extract or filter specific dates or timestamps from the data that are not relevant. SageMaker Canvas supports this process, showing statistics on the quantities selected, allowing you to understand if a quantity has outliers and spread that may affect the results of the model.

Train, tune, and evaluate the model

After the domain expert has selected suitable columns in the dataset, they can train the model to learn the relationship between the inputs and outputs. More precisely, the model will learn to predict the target value selected from the inputs.

Normally, you can use the SageMaker Canvas Model Preview option. This provides a quick indication of the model quality to expect, and allows you to investigate the effect that different inputs have on the output metric. For instance, in the following screenshot, the model is most affected by the motor_speed and ambient_temperature metrics when predicting bearing_temperature. This is sensible, because these temperatures are closely related. At the same time, additional friction or other means of energy loss are likely to affect this.

For the model quality, the RMSE of the model is an indicator of how well the model was able to learn the normal behavior in the training data and reproduce the relationships between the input and output measures. For instance, in the following model, the model should be able to predict the correct bearing_temperature within 3.67 degrees Celsius, so we can consider a deviation of the real temperature from a model prediction that is larger than, for example, 7.4 degrees as an anomaly. The real threshold that you would use, however, will depend on the sensitivity required in the deployment scenario.

A graph showing the actual and predicted motor speed. The relationship is linear with some noise.

Finally, after the model evaluation and tuning is finished, you can start the complete model training that will create the model to use for inference.

Deploy the model

Although SageMaker Canvas can use a model for inference, productive deployment for anomaly detection requires you to deploy the model outside of SageMaker Canvas. More precisely, we need to deploy the model as an endpoint.

In this post and for simplicity, we deploy the model as an endpoint from SageMaker Canvas directly. For instructions, refer to Deploy your models to an endpoint. Make sure to take note of the deployment name and consider the pricing of the instance type you deploy to (for this post, we use ml.m5.large). SageMaker Canvas will then create a model endpoint that can be called to obtain predictions.

An application window showing the configuration of a model deployment. Settings shown are a machine size of ml.m5.large and a deployment name of sample-anomaly-model.

In industrial settings, a model needs to undergo thorough testing before it can be deployed. For this, the domain expert will not deploy the model directly, but instead share it to the SageMaker Model Registry. Here, an MLOps expert can take over. Typically, that expert will test the model endpoint, evaluate the size of the computing equipment required for the target application, and determine the most cost-efficient deployment, such as serverless inference or batch inference. These steps are normally automated (for instance, using Amazon SageMaker Pipelines or the AWS SDK).

An image showing the button to share a model from Amazon SageMaker to a Model Registry.

Use the model for anomaly detection

In the previous step, we created a model deployment in SageMaker Canvas, called canvas-sample-anomaly-model. We can use it to obtain predictions of a bearing_temperature value based on the other columns in the dataset. Now, we want to use this endpoint to detect anomalies.

To identify anomalous data, our code uses the prediction model endpoint to get the expected value of the target metric and then compares the predicted value against the actual value in the data. The predicted value indicates the expected value for our target metric based on the training data, so the difference between the predicted and observed values is a measure of how abnormal the observed data is. We can use the following code:

# We are using pandas DataFrames for data handling
import json

import boto3
import pandas as pd

sm_runtime_client = boto3.client('sagemaker-runtime')

# Configuration of the actual model invocation
endpoint_name = "canvas-sample-anomaly-model"
# Name of the column in the input data to compare with predictions
TARGET_COL = 'bearing_temperature'

def do_inference(data, endpoint_name):
    # Invoke the SageMaker endpoint (based on the example code provided by SageMaker Canvas)
    body = data.to_csv(header=False, index=True).encode("utf-8")
    response = sm_runtime_client.invoke_endpoint(
        Body=body,
        EndpointName=endpoint_name,
        ContentType="text/csv",
        Accept="application/json",
    )
    return json.loads(response["Body"].read())


def input_transformer(input_data, drop_cols=[TARGET_COL]):
    # Transform the input: drop the target column
    return input_data.drop(drop_cols, axis=1)


def output_transformer(input_data, response):
    # Take the initial input data and compare it to the response of the prediction model
    scored = input_data.copy()
    scored.loc[input_data.index, 'prediction_' + TARGET_COL] = pd.DataFrame(
        response['predictions'], index=input_data.index
    )['score']
    scored.loc[input_data.index, 'error'] = (
        scored[TARGET_COL] - scored['prediction_' + TARGET_COL]
    ).abs()
    return scored


# Run the inference
raw_input = pd.read_csv(MYFILE)                        # Read my data for inference
to_score = input_transformer(raw_input)                # Prepare the data (drop the target column)
predictions = do_inference(to_score, endpoint_name)    # Create predictions
results = output_transformer(raw_input, predictions)   # Compare predictions and actuals

The preceding code performs the following actions:

  1. The input data is filtered down to the right features (function input_transformer).
  2. The SageMaker model endpoint is invoked with the filtered data (function do_inference), where we handle input and output formatting according to the sample code provided on the details page of our deployment in SageMaker Canvas.
  3. The result of the invocation is joined to the original input data, and the absolute difference is stored in the error column (function output_transformer).

Find anomalies and evaluate anomalous events

In a typical setup, the code to obtain anomalies is run in a Lambda function. The Lambda function can be called from an application or Amazon API Gateway. The main function returns an anomaly score for each row of the input data—in this case, a time series of anomaly scores.
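
The following is a hedged sketch of such a Lambda handler, reusing the helper functions and endpoint name from the previous section (the event format and the threshold are assumptions made for illustration):

import pandas as pd

ANOMALY_THRESHOLD = 7.4  # assumed threshold in degrees Celsius; tune for your sensitivity needs

def lambda_handler(event, context):
    # The caller is assumed to send the measurement rows as a list of JSON records
    data = pd.DataFrame(event["records"])
    to_score = input_transformer(data)                   # drop the target column
    predictions = do_inference(to_score, endpoint_name)  # call the SageMaker endpoint
    results = output_transformer(data, predictions)      # compute the anomaly score (error column)
    results["is_anomaly"] = results["error"] > ANOMALY_THRESHOLD
    return {
        "statusCode": 200,
        "body": results[["error", "is_anomaly"]].to_json(orient="records"),
    }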

For testing, we can also run the code in a SageMaker notebook. The following graphs show the inputs and output of our model when using the sample data. Peaks in the deviation between predicted and actual values (anomaly score, shown in the lower graph) indicate anomalies. For instance, in the graph, we can see three distinct peaks where the anomaly score (difference between expected and real temperature) surpasses 7 degrees Celsius: the first after a long idle time, the second at a steep drop of bearing_temperature, and the last where bearing_temperature is high compared to motor_speed.

Two graphs for time series. The top shows the time series for motor temperatures and motor speeds. The lower graph shows the anomaly score over time, with three peaks that indicate anomalies.

In many cases, knowing the time series of the anomaly score is already sufficient; you can set up a threshold for when to warn of a significant anomaly based on the need for model sensitivity. The current score then indicates that a machine has an abnormal state that needs investigation. For instance, for our model, the absolute value of the anomaly score is distributed as shown in the following graph. This confirms that most anomaly scores are below roughly twice the RMSE found during training (about 8 degrees), the typical error of the model. The graph can help you choose a threshold manually, such that the right percentage of the evaluated samples are marked as anomalies.

A histogram of the occurrence of values for the anomaly score. The curve decreases from x=0 to x=15.
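
One way to derive such a threshold from the score distribution is to pick a quantile so that only a chosen fraction of samples is flagged. A minimal sketch, assuming the results DataFrame from the earlier code and an illustrative target rate of 1%:

# Pick the threshold so that roughly 1% of the evaluated samples exceed it
threshold = results["error"].quantile(0.99)
anomalies = results[results["error"] > threshold]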

If the desired output is a set of discrete anomaly events, then the anomaly scores provided by the model require refinement to be relevant for business use. For this, the ML expert will typically add postprocessing to remove noise or isolated spikes from the anomaly score, such as applying a rolling mean, as shown in the sketch that follows. In addition, the expert will typically evaluate the anomaly score with logic similar to raising an Amazon CloudWatch alarm, such as monitoring for the breach of a threshold over a specific duration. For more information about setting up alarms, refer to Using Amazon CloudWatch alarms. Running these evaluations in the Lambda function allows you to send warnings, for instance, by publishing a warning to an Amazon Simple Notification Service (Amazon SNS) topic.
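
A minimal sketch of that kind of postprocessing with pandas (the window sizes and the required breach duration are assumptions to tune per use case; threshold is the value derived above or a fixed number):

# Smooth the raw anomaly score to suppress isolated spikes
smoothed = results["error"].rolling(window=10, min_periods=1).mean()

# Raise an event only when the smoothed score stays above the threshold
# for at least 5 consecutive samples, similar to a CloudWatch alarm with a duration
breach = smoothed > threshold
sustained = breach.astype(int).rolling(window=5).sum() == 5
anomaly_events = results[sustained].index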

Clean up

After you have finished using this solution, you should clean up to avoid unnecessary cost:

  1. In SageMaker Canvas, find your model endpoint deployment and delete it.
  2. Log out of SageMaker Canvas to avoid charges for it running idly.

Summary

In this post, we showed how a domain expert can evaluate input data and create an ML model using SageMaker Canvas without the need to write code. Then we showed how to use this model to perform real-time anomaly detection using SageMaker and Lambda through a simple workflow. This combination empowers domain experts to use their knowledge to create powerful ML models without additional training in data science, and enables MLOps experts to use these models and make them available for inference flexibly and efficiently.

A 2-month free tier is available for SageMaker Canvas, and afterwards you only pay for what you use. Start experimenting today and add ML to make the most of your data.


About the author

Helge Aufderheide is an enthusiast of making data usable in the real world with a strong focus on Automation, Analytics and Machine Learning in Industrial Applications, such as Manufacturing and Mobility.
