Bring Your Amazon SageMaker model into Amazon Redshift for remote inference
Amazon Redshift, a fast, fully managed, widely used cloud data warehouse, natively integrates with Amazon SageMaker for machine learning (ML). Tens of thousands of customers use Amazon Redshift to process exabytes of data every day to power their analytics workloads. Data analysts and database developers want to use this data to train ML models, which can then be used to generate insights for use cases such as forecasting revenue, predicting customer churn, and detecting anomalies.
Amazon Redshift ML makes it easy for SQL users to create, train, and deploy ML models using familiar SQL commands. In a previous post, we covered how Amazon Redshift ML allows you to use your data in Amazon Redshift with SageMaker, a fully managed ML service, without requiring you to become an expert in ML. We also discussed how Amazon Redshift ML enables ML experts to create XGBoost or MLP models in an earlier post. Additionally, Amazon Redshift ML allows data scientists to either import existing SageMaker models into Amazon Redshift for in-database inference or remotely invoke a SageMaker endpoint.
This post shows how you can enable your data warehouse users to use SQL to invoke a remote SageMaker endpoint for prediction. We first train and deploy a Random Cut Forest model in SageMaker, then demonstrate how you can create a model in Amazon Redshift with SQL that invokes the SageMaker endpoint remotely for predictions. Finally, we show how end users can invoke the model.
Prerequisites
To get started, we need an Amazon Redshift cluster with the Amazon Redshift ML feature enabled. For an introduction to Amazon Redshift ML and instructions on setting it up, see Create, train, and deploy machine learning models in Amazon Redshift using SQL with Amazon Redshift ML.
You also need a deployed SageMaker model and its endpoint name. You can use the following AWS CloudFormation template to provision all the required resources in your AWS account automatically.
Solution overview
Amazon Redshift ML supports text and CSV inference formats. For more information about various SageMaker algorithms and their inference formats, see Random Cut Forest (RCF) Algorithm.
Amazon SageMaker Random Cut Forest (RCF) is an algorithm designed to detect anomalous data points within a dataset. Examples of anomalies that are important to detect include when website activity uncharacteristically spikes, when temperature data diverges from a periodic behavior, or when changes to public transit ridership reflect the occurrence of a special event.
In this post, we use the SageMaker RCF algorithm to train an RCF model on the Numenta Anomaly Benchmark (NAB) NYC Taxi dataset, using the notebook generated by the CloudFormation template.
We downloaded the data and stored it in an Amazon Simple Storage Service (Amazon S3) bucket. The data consists of the number of New York City taxi passengers over the course of 6 months aggregated into 30-minute buckets. We naturally expect to find anomalous events occurring during the NYC marathon, Thanksgiving, Christmas, New Year’s Day, and on the day of a snowstorm.
We then use this model to predict anomalous events by generating an anomaly score for each data point.
The following figure illustrates how we use Amazon Redshift ML to create a model using the SageMaker endpoint.
Deploy the model
To deploy the model, go to the SageMaker console and open the notebook that was created by the CloudFormation template.
Then choose bring-your-own-model-remote-inference.ipynb.
Set up parameters as shown in the following screenshot and then run all cells.
Get the SageMaker model endpoint
On the Amazon SageMaker console, under Inference in the navigation pane, choose Endpoints to find your model name. You use this when you create the remote inference model in Amazon Redshift.
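If you prefer to look up the endpoint programmatically instead of on the console, the following minimal boto3 sketch lists endpoints; the "randomcutforest" name filter is an assumption based on the endpoint naming used in this post:

import boto3

# List SageMaker endpoints whose names contain the Random Cut Forest prefix
# used in this post, and print their name and status.
sagemaker_client = boto3.client("sagemaker")

response = sagemaker_client.list_endpoints(NameContains="randomcutforest")
for endpoint in response["Endpoints"]:
    print(endpoint["EndpointName"], endpoint["EndpointStatus"])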
Prepare data to create a remote inference model using Amazon Redshift ML
Create the schema and load the data in Amazon Redshift using the following SQL:
DROP TABLE IF EXISTS public.rcf_taxi_data CASCADE;
CREATE TABLE public.rcf_taxi_data
(
ride_timestamp timestamp,
nbr_passengers int
);
COPY public.rcf_taxi_data
FROM 's3://sagemaker-sample-files/datasets/tabular/anomaly_benchmark_taxi/NAB_nyc_taxi.csv'
iam_role 'arn:aws:iam::<accountid>:role/RedshiftML' ignoreheader 1 csv delimiter ',';
Amazon Redshift now supports attaching the default IAM role. If you have enabled the default IAM role in your cluster, you can use the default IAM role as follows.
COPY public.rcf_taxi_data
FROM 's3://sagemaker-sample-files/datasets/tabular/anomaly_benchmark_taxi/NAB_nyc_taxi.csv'
iam_role default ignoreheader 1 csv delimiter ',';
You can use the Amazon Redshift query editor v2 to run these commands.
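If you prefer to run these statements from a script rather than the query editor, the following minimal sketch uses the Amazon Redshift Data API through boto3; the cluster identifier, database name, and database user are placeholders you need to replace for your environment:

import boto3

# Minimal sketch: run a SQL statement against the cluster through the
# Amazon Redshift Data API. Cluster, database, and user are placeholders.
redshift_data = boto3.client("redshift-data")

response = redshift_data.execute_statement(
    ClusterIdentifier="my-redshift-cluster",  # placeholder
    Database="dev",                           # placeholder
    DbUser="awsuser",                         # placeholder
    Sql="SELECT COUNT(*) FROM public.rcf_taxi_data;",
)
# The call is asynchronous; use describe_statement and get_statement_result
# with this ID to check status and fetch results.
print(response["Id"])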
Create a model
Create a model in Amazon Redshift ML using the SageMaker endpoint you previously captured:
CREATE MODEL public.remote_random_cut_forest
FUNCTION remote_fn_rcf(int)
RETURNS decimal(10,6)
SAGEMAKER 'randomcutforest-xxxxxxxxx'
IAM_ROLE 'arn:aws:iam::<accountid>:role/RedshiftML';
If you have enabled the default IAM role in your cluster, you can instead create the model with the default IAM role:
CREATE MODEL public.remote_random_cut_forest
FUNCTION remote_fn_rcf(int)
RETURNS decimal(10,6)
SAGEMAKER 'randomcutforest-xxxxxxxxx'
IAM_ROLE default;
Check model status
You can use the show model command to view the status of the model:
show model public.remote_random_cut_forest
You get output like the following screenshot, which shows the endpoint and function name.
Compute anomaly scores across the entire taxi dataset
Now, run the inference query using the function name from the create model statement:
select ride_timestamp, nbr_passengers, public.remote_fn_rcf(nbr_passengers) as score
from public.rcf_taxi_data;
The following screenshot shows our results.
Now that we have our anomaly scores, we need to check for data points with higher-than-normal scores.
Amazon Redshift ML has batching optimizations to minimize the communication cost with SageMaker and offers high-performance remote inference.
Check for high anomalies
The following code runs a query for any data points with scores greater than three standard deviations (approximately 99.9th percentile) from the mean score:
with score_cutoff as
(select stddev(public.remote_fn_rcf(nbr_passengers)) as std, avg(public.remote_fn_rcf(nbr_passengers)) as mean, ( mean + 3 * std ) as score_cutoff_value
from public.rcf_taxi_data)
select ride_timestamp, nbr_passengers, public.remote_fn_rcf(nbr_passengers) as score
from public.rcf_taxi_data
where score > (select score_cutoff_value from score_cutoff)
order by 2 desc;
The data in the following screenshot shows that the biggest spike in ridership occurs on November 2, 2014, which was the annual NYC marathon. We also see spikes on Labor Day weekend, New Year’s Day and the July 4th holiday weekend.
Conclusion
In this post, we used SageMaker Random Cut Forest to detect anomalous data points in a taxi ridership dataset. In this data, the anomalies occurred when ridership was uncharacteristically high or low. However, the RCF algorithm is also capable of detecting when, for example, data breaks periodicity or uncharacteristically changes global behavior.
We then used Amazon Redshift ML to demonstrate how you can make inferences on unsupervised algorithms (such as Random Cut Forest). This allows you to democratize ML by making predictions with Amazon Redshift SQL commands.
For more information about building different models with Amazon Redshift ML, see the Amazon Redshift ML documentation.
About the Authors
Phil Bates is a Senior Analytics Specialist Solutions Architect at AWS with over 25 years of data warehouse experience.
Debu Panda, a principal product manager at AWS, is an industry leader in analytics, application platform, and database technologies and has more than 25 years of experience in the IT world.
Nikos Koulouris is a Software Development Engineer at AWS. He received his PhD from University of California, San Diego and he has been working in the areas of databases and analytics.
Murali Narayanaswamy is a principal machine learning scientist in AWS. He received his PhD from Carnegie Mellon University and works at the intersection of ML, AI, optimization, learning and inference to combat uncertainty in real-world applications including personalization, forecasting, supply chains and large scale systems.
Run distributed hyperparameter and neural architecture tuning jobs with Syne Tune
Today we announce the general availability of Syne Tune, an open-source Python library for large-scale distributed hyperparameter and neural architecture optimization. It provides implementations of several state-of-the-art global optimizers, such as Bayesian optimization, Hyperband, and population-based training. Additionally, it supports constrained and multi-objective optimization, and allows you to bring your own global optimization algorithm.
With Syne Tune, you can run hyperparameter and neural architecture tuning jobs locally on your machine or remotely on Amazon SageMaker by changing just one line of code. The former is a well-suited backend for smaller workloads and fast experimentation on local CPUs or GPUs. The latter is well-suited for larger workloads, which come with a substantial amount of implementation overhead. Syne Tune makes it easy to use SageMaker as a backend to reduce wall clock time by evaluating a large number of configurations on parallel Amazon Elastic Compute Cloud (Amazon EC2) instances, while taking advantage of SageMaker’s rich set of functionalities (including pre-built Docker deep learning framework images, EC2 Spot Instances, experiment tracking, and virtual private networks).
By open-sourcing Syne Tune, we hope to create a community that brings together academic and industrial researchers in machine learning (ML). Our goal is to create synergies between these two groups by enabling academics to easily validate small-scale experiments at larger scale and industrial practitioners to use a broader set of state-of-the-art optimizers.
In this post, we discuss hyperparameter and architecture optimization in ML, and show you how to launch tuning experiments on your local machine and also on SageMaker for large-scale experiments.
Hyperparameter and architecture optimization in machine learning
Every ML algorithm comes with a set of hyperparameters that control the training algorithm or the architecture of the underlying statistical model. Typical examples of such hyperparameters for deep neural networks are the learning rate or the number of units per layer. Setting these hyperparameters correctly is crucial to obtaining top-notch predictive performance.
To overcome the daunting process of trial and error, hyperparameter and architecture optimization aims to automatically find the specific configuration that maximizes the validation performance of our ML algorithm. Arguably, the easiest method to solve this global optimization problem is random search, where configurations are sampled from a predefined probability distribution. A more sample-efficient technique is Bayesian optimization, which maintains a probabilistic model of the objective function (here, the validation performance) to guide the search toward the global optimum in a sequential manner.
Unfortunately, with ever-increasing dataset sizes and ever-deeper models, training deep neural networks can be prohibitively slow to tune. Recent advances in hyperparameter optimization, such as Hyperband or MoBster, early stop the evaluation of configurations that are unlikely to achieve a good performance and reallocate the resources that would have been consumed to the evaluation of other candidate configurations. You can obtain further gains by using distributed resources to parallelize the tuning process. Because the time to train a deep neural network can vary widely across hyperparameter and architecture configurations, optimal resource allocation requires our optimizer to asynchronously decide which configuration to run next by taking the pending evaluation of other configurations into account. Next, we see how this works in practice and how we can run this either on a local machine or on SageMaker.
Tune hyperparameters with Syne Tune
We now detail how to tune hyperparameters with Syne Tune. First, you need a script that takes hyperparameters as arguments and reports results as soon as they are observed. Let’s look at a simplified example of a script that exposes the learning rate, dropout rate, and momentum as hyperparameters, and reports the validation accuracy after each training epoch:
from argparse import ArgumentParser
from syne_tune.report import Reporter

if __name__ == '__main__':
    parser = ArgumentParser()
    # The number of epochs is passed by the tuner via the config space.
    parser.add_argument('--epochs', type=int)
    parser.add_argument('--lr', type=float)
    parser.add_argument('--dropout_rate', type=float)
    parser.add_argument('--momentum', type=float)
    args, _ = parser.parse_known_args()
    report = Reporter()
    for epoch in range(1, args.epochs + 1):
        # ... train model and get validation accuracy
        val_acc = compute_accuracy()
        # Feed the score back to Syne Tune.
        report(epoch=epoch, val_acc=val_acc)
The important part is the call to report. It enables you to transmit results to a scheduler that decides whether to continue the evaluation of a configuration (a trial), and later potentially uses this data to select new configurations. For this post, we use a script that trains a computer vision model, adapted from the SageMaker examples on GitHub.
We define the search space for the hyperparameters (dropout, learning rate, momentum) that we want to optimize by specifying the ranges:
from syne_tune.search_space import loguniform, uniform

max_epochs = 27
config_space = {
    "epochs": max_epochs,
    "lr": loguniform(1e-5, 1e-1),
    "momentum": uniform(0.8, 1.0),
    "dropout_rate": loguniform(1e-5, 1.0),
}
We also specify the scheduler we want to use, Hyperband in our case:
from syne_tune.optimizer.schedulers.hyperband import HyperbandScheduler

scheduler = HyperbandScheduler(
    config_space,
    max_t=max_epochs,
    resource_attr='epoch',
    searcher='random',
    metric="val_acc",
    mode="max",
)
Hyperband is a method that randomly samples configurations and early stops evaluation trials if they're not performing well enough after a few epochs. We use this particular scheduler for our example, but many others are available; for example, switching to searcher='bayesopt' enables us to use MoBster, which uses a surrogate model to sample new configurations to evaluate.
We’re now ready to define and launch a hyperparameter tuning job. First, we define the number of workers that evaluate trials concurrently and how long the optimization should run in seconds. Importantly, we use the local backend to evaluate our training script “train_cifar100.py” (see the full code). This means that the tuning happens on the local machine with one Python subprocess per worker. See the following code:
from syne_tune.backend.local_backend import LocalBackend
from syne_tune.tuner import Tuner
from syne_tune.stopping_criterion import StoppingCriterion

tuner = Tuner(
    backend=LocalBackend(entry_point="train_cifar100.py"),
    scheduler=scheduler,
    stop_criterion=StoppingCriterion(max_wallclock_time=7200),
    n_workers=4,
)
tuner.run()
As soon as the tuning starts, Syne Tune outputs the following line:
INFO:syne_tune.tuner:results of trials will be saved on /home/ec2-user/syne-tune/train-cifar100-2021-11-05-13-29-01-468
The log of the trials is stored in the aforementioned folder for further analysis. At any time during the tuning job, we can easily get the results obtained so far by calling load_experiment(“train-cifar100-2021-11-05-15-22-27-531”) and plotting the best result obtained since the start of the tuning job:
from syne_tune.experiments import load_experiment
tuning_experiment = load_experiment("train-cifar100-2021-11-05-15-22-27-531")
tuning_experiment.plot()
The following graph shows our results.
More fine-grained information is available if desired; the results obtained during tuning are stored as well as the scheduler and tuner state—namely, the state of the optimization process. For instance, we can plot the metric obtained for each trial over time (recall that we run four trials asynchronously). In the following figure, each trace represents the evaluation of a configuration as a function of the wall clock time; a dot is a trial stopped after one epoch.
We clearly see the effect of early stopping—only the most promising configurations are evaluated fully and poor performing configurations are stopped early, often after just evaluating a single epoch.
We can also easily switch to another scheduler, for example, random search or MoBster:
from syne_tune.optimizer.schedulers.fifo import FIFOScheduler

# Random search
scheduler = FIFOScheduler(
    config_space,
    searcher='random',
    metric="val_acc",
    mode="max",
)

# MoBster: Hyperband with a Bayesian optimization searcher
scheduler = HyperbandScheduler(
    config_space,
    max_t=max_epochs,
    resource_attr='epoch',
    searcher='bayesopt',
    metric="val_acc",
    mode="max",
)
If we then run the same code with the new schedulers, we can compare all three methods. We see in the following figure that Hyperband only continues well-performing trials, and early stops poorly performing configurations.
Therefore, Hyperband evaluates many more configurations than random search (see the following figure), which uses resources to evaluate every configuration until the end. This can lead to drastic speedups of the tuning process in practice.
MoBster further improves over Hyperband by using a probabilistic surrogate model of the objective function.
The following figure shows all configurations that Hyperband samples during the tuning job.
In comparison, MoBster samples more promising configurations around the well-performing range (brighter color being better) of the search space instead of sampling them uniformly at random like Hyperband.
Run large-scale tuning jobs with Syne Tune and SageMaker
The previous example showed how to tune hyperparameters on a local machine. Sometimes, we need more powerful machines or a large number of workers, which motivates the use of a cloud infrastructure. Syne Tune provides a very simple way to run tuning jobs on SageMaker. Let's look at how this can be achieved with Syne Tune.
We first upload the cifar100 dataset to Amazon Simple Storage Service (Amazon S3) so that it’s available on EC2 instances:
import sagemaker
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
prefix = "sagemaker/DEMO-pytorch-cnn-cifar100"
role = sagemaker.get_execution_role()
inputs = sagemaker_session.upload_data(path="data", bucket=bucket, key_prefix="data/cifar100")
Next, we specify that we want trials to be run on the SageMaker backend. We use the SageMaker framework (PyTorch) in this particular example because we have a PyTorch training script, but you can use any SageMaker framework (such as XGBoost, TensorFlow, Scikit-learn, or Hugging Face).
A SageMaker framework is a Python wrapper that allows you to run ML code easily by providing a pre-made Docker image that works seamlessly on CPU and GPU for many framework versions. In this particular example, all we need to do is to instantiate the wrapper PyTorch with our training script:
from sagemaker.pytorch import PyTorch
from syne_tune.backend.sagemaker_backend.sagemaker_utils import get_execution_role
from syne_tune.backend.sagemaker_backend.sagemaker_backend import SagemakerBackend

backend = SagemakerBackend(
    sm_estimator=PyTorch(
        entry_point="./train_cifar100.py",
        instance_type="ml.g4dn.xlarge",
        instance_count=1,
        role=get_execution_role(),
        framework_version='1.7.1',
        py_version='py3',
    ),
    inputs=inputs,
)
We can now run our tuning job again, but this time we use 20 workers, each having their own GPU:
tuner = Tuner(
    backend=backend,
    scheduler=scheduler,
    stop_criterion=StoppingCriterion(max_wallclock_time=7200, max_cost=20.0),
    n_workers=20,
    tuner_name="cifar100-on-sagemaker",
)
tuner.run()
After each instance initiates a training job, you see the status update as in the local case. An important difference from the local backend is that the total estimated dollar cost is displayed, as well as the cost incurred by each worker.
trial_id status iter dropout_rate epochs lr momentum epoch val_acc worker-time worker-cost
0 InProgress 1 0.003162 30 0.001000 0.900000 1.0 0.4518 50.0 0.010222
1 InProgress 1 0.037723 30 0.000062 0.843500 1.0 0.1202 50.0 0.010222
2 InProgress 1 0.000015 30 0.000865 0.821807 1.0 0.4121 50.0 0.010222
3 InProgress 1 0.298864 30 0.006991 0.942469 1.0 0.2283 49.0 0.010018
4 InProgress 0 0.000017 30 0.028001 0.911238 - - - -
5 InProgress 0 0.000144 30 0.000080 0.870546 - - - -
6 trials running, 0 finished (0 until the end), 387.53s wallclock-time, 0.04068444444444444$ estimated cost
Because we specified max_wallclock_time=7200 and max_cost=20.0, the tuning job stops when the wall clock time or the estimated cost goes above the specified bound. The cost is not only estimated and displayed; it can also be optimized with our multi-objective optimizers (see the GitHub repo for an example). As shown in the following figures, the SageMaker backend allows you to evaluate many more configurations of hyperparameters and architectures in the same wall clock time than the local backend and, as a result, increases the likelihood of finding a better configuration.
Conclusion
In this post, we saw how to use Syne Tune to launch tuning experiments on your local machine and also on SageMaker for large-scale experiments. To learn more about the library, check out our GitHub repo for documentation and examples that show, for instance, how to run model-based Hyperband, tune multiple objectives, or run with your own scheduler. We look forward to your contributions and seeing how this solution can address everyday tuning of ML pipelines and models.
About the Authors
David Salinas is a Sr Applied Scientist at AWS.
Aaron Klein is an Applied Scientist at AWS.
Matthias Seeger is a Principal Applied Scientist at AWS.
Cedric Archambeau is a Principal Applied Scientist at AWS and Fellow of the European Lab for Learning and Intelligent Systems.
Your guide to AI and ML at AWS re:Invent 2021
It’s almost here! Only 9 days until AWS re:Invent 2021, and we’re very excited to share some highlights you might enjoy this year. The AI/ML team has been working hard to serve up some amazing content and this year, we have more session types for you to enjoy. Back in person, we now have chalk talks, workshops, builders’ sessions, and our traditional breakout sessions. Last year we hosted the first-ever machine learning (ML) keynote, and we are continuing the tradition. We also have more interactive and fun events happening with our AWS DeepRacer League and AWS BugBust Challenge. There are over 200 AI/ML sessions, including breakout sessions with customers such as Aon Corporation, Qualtrics, Shutterstock, and Bloomberg.
To help you plan your agenda for this year’s re:Invent, here are some highlights of the AI/ML track. You can also get the scoop from some of our AI/ML Community Heroes. So buckle up, and start registering for your favorite sessions.
Swami Sivasubramanian keynote
Wednesday, December 1, 8:30 am PT
Join Swami Sivasubramanian, Vice President, Machine Learning, AWS on an exploration of what it takes to put data in action with an end-to-end data strategy including the latest news on databases, analytics, and ML.
AI/ML leadership session with Bratin Saha
Wednesday, December 1, 4:00 pm PT
With the rise in compute power and data proliferation, ML has moved from the periphery to being a core part of businesses and organizations across industries. AWS customers use ML and AI services to make accurate predictions, get deeper insights from their data, reduce operational overhead, improve customer experiences, and create entirely new lines of business. In this session, hear from Bratin Saha, Vice President, Machine Learning, AWS and explore how AWS services can help you move from idea to production with ML.
AI/ML session preview
Here’s a preview of some of the different sessions we’re offering this year by session type. You can always log in to the event portal to favorite or register for any of these sessions, or search the catalog for over 200 other sessions available.
Breakout sessions
Prepare data for ML with ease, speed, and accuracy (AIM319)
Join this session to learn how to prepare data for ML in minutes using Amazon SageMaker. SageMaker offers tools to simplify data preparation so that you can label, prepare, and understand your data. Walk through a complete data-preparation workflow, including how to label training datasets using SageMaker Ground Truth, as well as how to extract data from multiple data sources, transform it using the prebuilt visualization templates in SageMaker Data Wrangler, and create model features. Also, learn how to improve efficiency by using SageMaker Feature Store to create a repository to store, retrieve, and share features.
Achieve high performance and cost-effective model deployment (AIM408)
To maximize your ML investments, high performance and cost-effective techniques are needed to scale model deployments. In this session, learn about the deployment options available in Amazon SageMaker, including optimized infrastructure choices; real-time, asynchronous, and batch inferences; multi-container endpoints; multi-model endpoints; auto scaling; model monitoring; and CI/CD integration for your ML workloads. Discover how to choose a better inference option for your ML use case. Then, hear from Goldman Sachs about how they use SageMaker for fast, low-latency, and scalable deployments to provide relevant research content recommendations for their clients.
Implementing MLOps practices with Amazon SageMaker, featuring Vanguard (AIM320)
Implementing MLOps practices helps data scientists and operations engineers collaborate to prepare, build, train, deploy, and manage models at scale. During this session, explore the breadth of MLOps features in Amazon SageMaker that help you provision consistent model development environments, automate ML workflows, implement CI/CD pipelines for ML, monitor models in production, and standardize model governance capabilities. Then, hear from Vanguard as they share their journey enabling MLOps to achieve ML at scale for their polyglot model development platforms using SageMaker features, including SageMaker projects, SageMaker Pipelines, SageMaker Model Registry, and SageMaker Model Monitor.
Enhancing the customer experience with Amazon Personalize (AIM204)
Personalizing content for a customer online is key to breaking through the noise. Yet, brands face challenges that often prevent them from providing these seamless, relevant experiences. Learn how easy it is to use Amazon Personalize to tailor product and content recommendations to ensure that your users are getting the content they want, leading to increased engagement and retention.
AI/ML for sustainability innovation: Insight at the edge (AIM207)
As climate change, wildlife conservation, public health, racial and economic equity, and new energy solutions become increasingly interdependent, scalable solutions are needed for actionable analysis at the intersection of these fields. In this session, learn how the power of AI/ML and IoT can be brought as close as possible to the challenging edge environments that provide data to create these insights. Also learn how AWS puts AI/ML in the hands of the largest-scale fisheries on the planet, and how organizations can leverage data to support more sustainable, resilient supply chains.
Get started with AWS computer vision services (AIM202)
This session provides an overview of AWS computer vision services and demonstrates how these pretrained and customizable ML capabilities can help you get started quickly—no ML expertise required. Learn how to deploy these models onto the device of your choice to run an inference locally or use cloud APIs for your specific computing needs. Learn first-hand how Shutterstock uses AWS computer vision services to create performance at scale for media analysis, content moderation, and quality inspection use cases.
Chalk talk sessions
Build an ML-powered demand planning system using Amazon Forecast (AIM310)
This chalk talk explores how you can use Amazon Forecast to build an ML-powered, fully automated demand planning system for your business or your multi-tenant SaaS platform without needing any ML expertise. Forecast automatically generates highly accurate forecasts using ML, explains the drivers behind those forecasts, and keeps your ML models always up to date to capture new trends.
Hello, is it conversational AI you’re looking for? (AIM305)
Customers calling in for support expect a personalized experience and a quick resolution to their issue. With chatbots, you can provide automated and human-like conversational experiences for your customers. In this chalk talk, discuss strategies to design personalized experiences using Amazon Lex and Amazon Polly. Explore how to design conversation paths, customize responses, integrate with your applications, and enable self-service use cases to scale your customer support functions.
Harness the power of ML to protect your business with Amazon Fraud Detector (AIM308)
How does more than 20 years of Amazon experience fighting fraud translate into an AI service that can help companies detect more online fraud faster? In this session, learn how Amazon Fraud Detector transforms raw data into highly accurate ML-based fraud detection models. Then, discover how the service does data preparation and validation, feature engineering, data enrichment, and model training and tuning. Finally, with actual customer examples across a wide range of industries and fraud use cases, find out how the service makes deployment easy.
Deep learning applications with PyTorch (AIM404)
By using PyTorch in Amazon SageMaker, you have a flexible deep learning framework combined with a fully managed ML solution that allows you to transition seamlessly from research prototyping to production deployment. In this session, hear from the PyTorch team on the latest features and library releases. Also, learn how to develop with PyTorch using SageMaker for key use cases, such as using a BERT model for natural language processing (NLP) and instance segmentation for fine-grained computer vision with distributed training and model parallelism.
Explore, analyze, and process data using Jupyter notebooks (AIM324)
Before using a dataset to train a model, you need to explore, analyze, and preprocess it. During this chalk talk, learn how to use Amazon SageMaker to complete these tasks in a Jupyter notebook environment.
Machine learning at the edge with Amazon SageMaker (AIM410)
More ML models are being deployed on edge devices such as robots and smart cameras. In this chalk talk, dive into building computer vision (CV) applications at the edge for predictive maintenance, industrial IoT, and more. Learn how to operate and monitor multiple models across a fleet of devices. Also walk through the process to build and train CV models with Amazon SageMaker and how to package, deploy, and manage them with SageMaker Edge Manager. The chalk talk also covers edge device setup and MLOps lifecycle with over-the-air model updates and data capture to the cloud.
Builders’ sessions
Build and deploy a custom computer vision model in 60 minutes (AIM314)
Amazon Rekognition Custom Labels is an automated ML feature that enables customers to quickly train their own custom models for detecting business-specific objects and scenes from images—no ML expertise is required. In this builders’ session, learn how to use Amazon Rekognition Custom Labels to build and deploy your own computer vision model and push it to an application to showcase inference on images from a camera feed. Bring your laptop and an AWS account.
Easily label training data for machine learning at scale (AIM406)
Join this session to learn how to create high-quality labels while also reducing your data labeling costs by up to 70%. This builders’ session walks through the different workflow options in Amazon SageMaker Ground Truth, such as automatic labeling and assistive labeling features like auto-segmentation and image label verification. It also details how to build highly accurate training datasets for company brand logos, so you can build an ML model for company brand protection.
Workshop sessions
Develop your ML project with Amazon SageMaker (AIM402)
In this workshop, learn how to develop a full ML project end to end with Amazon SageMaker. Start with data exploration and analysis, data cleansing, and feature engineering with SageMaker Data Wrangler. Then, store features in SageMaker Feature Store, extract features for training with SageMaker Processing, train a model with SageMaker training, and then deploy it with SageMaker hosting. Also, learn how to use SageMaker Studio as an IDE and SageMaker Pipelines for orchestrating the ML workflow.
End-to-end 3D machine learning on Amazon SageMaker (AIM414)
As lidar sensors become more accessible and cost-effective, customers increasingly use point cloud data in new spaces like autonomous driving, robotics, and augmented reality. The growing availability of lidar sensors has increased use of point cloud data for ML tasks like 3D object detection, segmentation, object synthesis, and reconstruction. This workshop features Amazon SageMaker Ground Truth and explains how to ingest raw 3D point cloud data, label it, train a 3D object detection model, and deploy the model. The model in this session will be trained on an autonomous vehicle dataset.
AI workflow automation for document processing (AIM316)
Mortgage packets have hundreds of documents in various layouts and formats. With ML, you can set up a document-processing pipeline to automate mortgage application workflows like extracting text from W2s, paystubs, and deeds; classifying documents; or using custom entity recognition to pull out specific data points. In this workshop, learn various ways to use optical character recognition (OCR), NLP, and human-in-the-loop services to build a document-processing pipeline to automate mortgage applications—saving time, reducing manual effort, and improving ROI for your organization.
Boost the value of your media content with ML-powered search (AIM315)
Consumers rely on content not only to entertain but also to educate and facilitate purchasing decisions. To meet this demand, media content production is exploding. However, the process of producing, distributing, and monetizing this content is often complex, expensive, and time-consuming. Applying artificial intelligence and ML capabilities like image and video analysis, audio transcription, machine translation, and text analytics can solve many of these problems. In this workshop, utilize ML to extract detailed metadata from content and make it available for search, discovery, and editing use cases.
Instantly detect and diagnose anomalies within your business data (AIM302)
Anomalies in business data often indicate potential issues or even opportunities. ML can help you detect anomalies and then act on them proactively. In this workshop, learn how Amazon Lookout for Metrics automatically detects anomalies across thousands of metrics in near-real time and reduces false alarms.
Join the first annual AWS BugBust re:Invent Challenge and help set a Guinness record
The largest code fixing challenge is here! Python and Java developers of all skill levels can compete to fix software bugs, earn points, and win an array of prizes including Amazon Echo Dots, hoodies, and the grand prize of $1,500 USD. As you bust bugs, you also become part of an attempt to set the record for the largest bug fixing challenge with the Guinness World Records. All registered participants who fix even one bug will receive exclusive prizes and a certificate from AWS and Guinness to commemorate their contribution. Let the bug busting begin! You can join the challenge virtually or in-person at the AWS BugBust Hub in the main expo. Register now for free.
AWS DeepRacer: The fastest way to get rolling with machine learning
Developers of all skill levels from beginners to experts can get hands-on with ML by using AWS DeepRacer to train models in a cloud-based 3D racing simulator. Racers from virtually anywhere in the world can compete in the AWS DeepRacer League, the first global autonomous racing league driven by reinforcement learning. The race is on now! Sign in to AWS DeepRacer and compete in the AWS re:Invent Open for prizes and glory now through December 31, 2021. Tune in to the AWS DeepRacer League Championships on Twitch November 19 and 22 to see the 40 fastest developers of the 2021 season compete live. Learn from the best as they vie for a chance to advance to the Championship Cup Finale during Swami Sivasubramanian’s keynote on December 1, where they will race for their shot at $20,000 USD in cash prizes and the right to hoist the Championship Cup!
For those attending re:Invent in Las Vegas, don’t miss out on the opportunity to take your model from Sim2Real (simulation to reality) on the AWS DeepRacer Speedway inside the content hub at Caesar’s Forum. Upload your model and race a 1/18th scale autonomous RC car on a physical track. Stop by Tuesday afternoon to participate in the livestreamed wildcard race for a chance to win a trip back for re:Invent 2022. No model? No problem! The all-new AWS DeepRacer Arcade is available in the expo, where you can literally get in the driver’s seat and take the wheel in this educational racing game. Take a spin on the virtual track and then compete against a featured AWS DeepRacer autonomous model in this arcade racing experience, with prizes and giveaways galore. Shift into the fast lane on your ML learning journey with AWS DeepRacer.
Head over to the re:Invent portal to build your schedule so you’re ready to hit the ground running. Be sure to stop by and talk to our experts at the AI/ML booth, or chat with the speakers after sessions. We can’t wait to see you in Las Vegas!
About the Authors
Andrea Youmans is a Product Marketing Manager on the AI Services team at AWS. Over the past 10 years she has worked in the technology and telecommunications industries, focused on developer storytelling and marketing campaigns. In her spare time, she enjoys heading to the lake with her husband and Aussie dog Oakley, tasting wine and enjoying a movie from time to time.
AWS AI/ML Community attendee guides to AWS re:Invent 2021
The AWS AI/ML Community has compiled a series of session guides to AWS re:Invent 2021 to help you get the most out of re:Invent this year. They covered four distinct categories relevant to AI/ML. With a number of our guide authors attending re:Invent virtually, you will find a balance between virtually accessible sessions and sessions available in-person.
The AWS AI/ML Community is a vibrant group of developers, data scientists, researchers, and business decision-makers that dive deep into artificial intelligence and machine learning (ML) concepts, contribute with real-world experiences, and collaborate on building projects together.
Community guides for developers new to machine learning
From AWS ML Hero Mike Chambers AWS reInvent 2021: How To, tips, and my session selection (video). In this video—which should be required viewing for anyone new to re:Invent—Mike dives deep, beyond simply recommending sessions, with loads of tips and advice for how to make the most of your re:Invent experience—in-person or virtual.
AWS ML Hero Cyrus Wong’s top five AI/ML sessions newbies should attend! For folks new to ML on AWS, spend your time learning and making use of Amazon AI/ML services with Cyrus’s top five re:Invent sessions.
AWS re:Invent 2021: How to maximize your in-person learning experience as a new Machine Learning practitioner, from AWS ML Community Builder Martin Paradesi. For those attending re:Invent in-person this year, check out Martin’s guide for five sessions curated for new ML practitioners.
From our new Egypt-based AWS ML Hero Salah Elhossiny: Top 5 AWS ML Sessions to Attend at AWS re:Invent 2021. For those new to AWS ML, spend your time learning and using Amazon SageMaker with the best five AWS re:Invent sessions to help you get started quickly!
Community guides for AI/ML developers
AWS ML Hero Juv Chan’s top five recommendations for AI/ML builders and architects. Juv, a Sr. Cloud AI Engineer/Architect, ML Hero, and re:Invent Championship Cup 2019 finalist, shares his top five session picks and can’t miss photos from re:Invent 2019.
Top 5 Sessions for AI/ML Developers at AWS re:Invent 2021, from AWS ML Community Builder Brooke Jamieson. For those attending re:Invent virtually this year, check out Brooke’s guide.
AWS ML Hero Tomasz Ptak’s AWS re:Invent 2021 schedule. Tomasz shares his session picks plus tips and advice for making the most of your re:Invent experience.
Production-grade ML re:Invent 2021 sessions guide, from AWS ML Community Builder Kyle Gallatin. Kyle shares five ML talks skewed toward his interests in scalable, production-grade ML.
Community guides for MLOps developers
AWS ML Hero Rustem Feyzkhanov’s top MLOps breakout sessions to look forward to at re:Invent 2021. Rustem shares seven sessions to help you stay in the loop of MLOps in the AWS Cloud.
AWS ML Community Builder Phil Basford’s must-see sessions. For those interested in MLOps, ML architecture, edge computing, or data analytics, see Phil’s guide and his tips on how to have fun in Vegas and at home for those attending virtually.
Community guides for ML data scientists
AWS ML Hero’s Philipp Schmid’s remote guide for your virtual re:Invent 2021, focused on NLP and machine learning. Attending remote from Germany, Hugging Face ML engineer and AWS ML Hero Philipp Schmid offers an in-depth guide.
AWS ML Community Builder Pier Paolo Ippolito’s top five suggestions for ML data scientists. Pier, a data scientist at SAS and editor at Towards Data Science, shares his top five picks curated for technical ML builders.
Other AWS ML Community guides worth exploring
AWS ML Hero Kesha Williams’s Machine Learning Attendee Guide 2021. The official AWS Hero guide from Kesha dives deep across all session categories. Check this guide out for a full walkthrough of how to build your schedule, and the ultimate deep dive into Kesha’s ML session picks.
Lastly, we have a unique in-depth guide from AWS ML Community Builder Janos Tolgyesi. Learn how to fight climate change with ML skills and make the Earth a better place with ML at re:Invent 2021. Janos shares his session picks and a bonus session suggestion for those interested in beer, plus personalized recommendations!
Whether you’re attending in-person or virtually this year, we hope these recommendations and advice from the AWS ML Community help you make the most of your re:Invent experience. Have a great re:Invent!
About the Author
Paxton Hall is a Marketing Program Manager for the AWS AI/ML Community on the AI/ML Education team at AWS. He has worked in retail and experiential marketing for the past 7 years, focused on developing communities and marketing campaigns. Out of the office, he’s passionate about public lands access and conservation, and enjoys backcountry skiing, climbing, biking, and hiking throughout Washington’s Cascade mountains.
Understand drivers that influence your forecasts with explainability impact scores in Amazon Forecast
We’re excited to launch explainability impact scores in Amazon Forecast, which help you understand the factors that impact your forecasts for specific items and time durations of interest. Forecast is a managed service for developers that uses machine learning (ML) to generate more accurate demand forecasts, without requiring any ML experience. To increase forecast model accuracy, you can add additional information or attributes such as price, promotion, category details, holidays, or weather information to your forecasting model, but you may not know how each attribute influences your forecast. With today’s launch, you can now understand how each attribute impacts your forecasted values using the explainability feature, which we discuss in this post.
ML-based forecasting models, which are more accurate than heuristic rules or human judgment, can drive significant improvement in revenue and customer experience. However, business leaders often lose trust in technology when they see forecasted numbers drastically differing from their intuition, and may find it hard to trust ML systems. Because demand planning decisions have a high impact on the business, business leaders may end up overriding forecasts because they may believe that they have to take the forecast model predictions at face value to make critical business decisions, without understanding why those forecasts were generated and what factors are influencing forecasts to be higher or lower. This can lead to compromising forecast accuracy, and you may lose the benefit of ML forecasting.
Amazon Forecast now provides explainability, which gives you item-level insights across your preferred time duration. Having a certain level of understanding on why a particular forecast value is high or low at a particular time is helpful for decision-making and building trust and confidence in your ML solutions. Explainability reports include impact scores, which help you understand how each attribute in your training data contributes to either increasing or decreasing your forecasted values for specific items. In addition, you can choose to understand explainability for your entire forecast horizon or for specific time durations. Explainability removes the need of running multiple manual analyses to understand past sales and external variable trends to explain forecast results.
How to interpret explainability impact scores
Explainability helps you better understand how the attributes, such as price, category, or holidays, in your datasets impact your forecast values. Forecast uses a metric called impact scores to quantify the relative impact of each attribute and determine whether they generally increase or decrease forecast values.
Impact scores measure the relative impact attributes have on forecast values. For example, if the price attribute has an impact score that is twice as large as the brand_id attribute, you can conclude that the price of an item has twice the impact on forecast values as the product brand. Impact scores also provide information on whether an attribute increases or decreases the forecasted value. A negative impact score reflects that the attribute tends to decrease the value of the forecast.
Impact scores measure the impact of attributes relative to each other, not the absolute impact. If an attribute has a low impact score, that doesn’t necessarily mean that it has a low impact on forecast values; it means that it has a lower impact on forecast values than other attributes used by the predictor. If you change attributes in your predictor, the impact scores may differ, and the attribute with the low impact score may have a higher score relative to other attributes. Also, you can’t use impact scores to determine whether particular attributes improve the model accuracy or not. You should use accuracy metrics such as weighted quantile loss and others provided by Forecast to assess predictor accuracy.
In the following graph, we take an example of an explainability report graph that shows the relative impact of different attributes on the forecasted value of item_id 1 across all the time points in the forecast horizon. We see that the relative impact is in the following order: Price has the highest impact, followed by StoreLocation, then Promo and Holiday_US. Price has the highest influence on item_id 1 and tends to increase the forecast value. StoreLocation has the second highest impact on item_id 1 but tends to decrease the forecast value. Because Promo has an impact score close to 0.2, Price has five times more impact than Promo on the forecasted value of item_id 1, and both attributes tend to increase the forecast value. Holiday_US has an impact score of 0, which means that this attribute doesn’t increase or decrease the forecast value for item_id 1 relative to other attributes.
The following image shows an example of the explainability report export file with the impact scores for specific time series and time points as well as aggregated scores across those time series and time points.
Generate explainability impact scores
In this section, we walk through how to generate explainability impact scores for your forecasts using the Forecast console. To use the new CreateExplainability API, refer to the notebook in our GitHub repo or review Forecast Explainability.
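For reference, the following is a minimal boto3 sketch of the CreateExplainability call; the ARNs, S3 path, and schema shown are placeholders, and you should verify the exact parameter shapes against the current SDK documentation before using it:

import boto3

forecast = boto3.client("forecast")

# Minimal sketch: request explainability for a specific forecast and a
# specific list of time series uploaded to Amazon S3. All ARNs and the
# S3 path are placeholders.
response = forecast.create_explainability(
    ExplainabilityName="taxi_demand_explainability",
    ResourceArn="arn:aws:forecast:us-east-1:<accountid>:forecast/my_forecast",
    ExplainabilityConfig={
        "TimeSeriesGranularity": "SPECIFIC",  # or "ALL"
        "TimePointGranularity": "ALL",        # or "SPECIFIC" with start/end times
    },
    DataSource={
        "S3Config": {
            "Path": "s3://<bucket>/explainability/time_series.csv",
            "RoleArn": "arn:aws:iam::<accountid>:role/ForecastRole",
        }
    },
    Schema={
        "Attributes": [
            # Add any forecast dimensions you use in addition to item_id.
            {"AttributeName": "item_id", "AttributeType": "string"},
        ]
    },
)
print(response["ExplainabilityArn"])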
- On the Forecast console, create a dataset group. Upload your historical demand dataset as target time series followed by related time series or item metadata that you want to use for more accurate forecasting and for which you’re interested in seeing explainability impact scores.
- In the navigation pane, under your dataset, choose Predictors.
- Choose Train new predictor.
Forecast defaults to AutoPredictor as the training option. No further action is needed from you, but remember that only forecasts generated from a model trained with AutoPredictor are eligible for generating explainability impact scores later.
- Now that your model is trained, choose Forecasts in the navigation pane.
- Choose Create a forecast.
- Select your trained predictor to create a forecast.
- Choose Insights in the navigation pane.
- Choose Create explainability.
- Choose the forecast that you want to generate explainability impact scores for.
- Choose whether you want to see impact scores for all the time points in the forecast horizon or only for a specific time duration.
You can specify up to 500 consecutive time points per explainability report.
- Upload the list of specific time series for which you want to see explainability impact scores.
A time series is a unique combination of item ID and dimension. You can specify up to 50 time series per Forecast explainability.
- Specify the schema of the CSV file that you have uploaded.
- Choose Create explainability.
It takes less than an hour to generate the explainability impact scores.
- When the job status is active, choose the explainability job to view the impact score.
Here you can review the explainability impact score graph. You can use the controls at the top of the graph to drill down to specific time series or time points or view at an aggregated level.
- To export all the impact scores, choose Create explainability export in the Explainability exports section.
- Provide the export details and choose Create explainability export.
The export is saved in an Amazon Simple Storage Service (Amazon S3) bucket that you specify.
- When the export is complete, navigate to your S3 bucket to review the explainability report CSV file.
The following is an example of your explainability export CSV file. Depending on how large your dataset is, multiple files may be exported.
Aggregate explainability impact scores for category level analysis
You may want to review explainability for a group of items together, which can have more than 50 items. For example, a grocery retailer might be interested in understanding what is driving the forecasts for all their fruits and vegetables, and this category may consist of more than 50 SKUs in their data. However, Forecast lets you specify up to 50 time series per Forecast explainability job. If you have more than 50 time series, you need to run the explainability job multiple times with different items in each job and then combine them.
The explainability export file provides two types of impact scores: normalized impact scores and raw impact scores. Raw impact scores are based on Shapley values and aren’t scaled or bounded. Normalized impact scores scale the raw scores to a value between -1 and 1. Raw impact scores are useful for combining and comparing scores across different explainability resources. To analyze a larger group of items, aggregate the raw impact scores of all the time series across your multiple explainability jobs, then compare them to find the relative influence of each attribute. You can view an example of how to do so by following the notebook in our GitHub repo.
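As a rough illustration of that aggregation step, the following pandas sketch combines raw impact scores from several export files; the column names ("attribute" and "raw_impact_score") are hypothetical, so replace them with the actual column names in your export CSV:

import glob
import pandas as pd

# Hypothetical sketch: combine raw impact scores from multiple explainability
# export files and rank attributes by total raw impact for the category.
# Column names below are assumptions; adjust to match your export CSV.
files = glob.glob("explainability_exports/*.csv")
exports = pd.concat((pd.read_csv(f) for f in files), ignore_index=True)

category_scores = (
    exports.groupby("attribute")["raw_impact_score"]
    .sum()
    .sort_values(ascending=False)
)
print(category_scores)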
Conclusion
Forecast now provides explainability for specific items and time durations of interest. With the explainability feature, you can understand how each attribute impacts your forecasted values. To learn more, review Forecast Explainability and the notebook in our GitHub repo. If you are interested in aggregated explainability for all your items at the predictor level, review our blog post on using the CreateAutoPredictor API. Explainability is available in all Regions where Forecast is publicly available. For more information about Region availability, see AWS Regional Services.
About the Authors
Namita Das is a Sr. Product Manager for Amazon Forecast. Her current focus is to democratize machine learning by building no-code/low-code ML services. On the side, she frequently advises startups and loves training her dog with new tricks.
Dima Fayyad is a Software Development Engineer on the Amazon Forecast team. She is passionate about machine learning and AI and is currently working on large-scale distributed systems in the forecasting space. In her free time, she enjoys exploring different cuisines, traveling, and skiing.
Youngsuk Park is a Machine Learning Scientist at AWS AI and Amazon Forecast. His research lies in the interplay between machine learning, optimization, and decision-making, with over 10 publications in top-notch ML/AI venues. Before joining AWS, he obtained a PhD from Stanford University.
Shannon Killingsworth is a UX Designer for Amazon Forecast. His current work is creating console experiences that are usable by anyone, and integrating new features into the console experience. In his spare time, he is a fitness and automobile enthusiast.
New Amazon Forecast API that creates up to 40% more accurate forecasts and provides explainability
We’re excited to announce a new forecasting API for Amazon Forecast that generates up to 40% more accurate forecasts and helps you understand which factors, such as price, holidays, weather, or item category, are most influencing your forecasts. Forecast uses machine learning (ML) to generate more accurate demand forecasts, without requiring any ML experience. Forecast brings the same technology used at Amazon to developers as a fully managed service, removing the need to manage resources.
With today’s launch, Forecast can now forecast up to 40% more accurate results by using a combination of ML algorithms that are best suited for your data. In many scenarios, ML experts train separate models for different parts of their dataset to improve forecasting accuracy. This process of segmenting your data and applying different algorithms can be very challenging for non-ML experts. Forecast uses ML to learn not only the best algorithm for each item, but the best ensemble of algorithms for each item, leading to up to 40% better accuracy on forecasts.
To further increase forecast model accuracy, you can add additional information or attributes such as price, promotion, category details, holidays, or weather information, but you may not know how each attribute influences your forecast. Forecasting is mission critical, and therefore having a certain level of attribute explainability is helpful for decision-making. With today’s launch, Forecast now helps you understand and explain how your forecasting model is making predictions by providing explainability reports after your model has been trained. Explainability reports include impact scores, so you can understand how each attribute in your training data contributes to either increasing or decreasing your forecasted values. By understanding how your model makes predictions, you can make more informed business decisions. For example, you can verify that your model is behaving as expected by confirming that attributes with a high impact score represent a valid signal for predictions in your business problem.
You can bring in your recent data to use the latest insights before forecasting for the next period. However, in doing so, you have to train your entire forecasting model again, which is a time-consuming process. Most Forecast customers deploy their forecasting workflow within their operations such as an inventory management solution and run their operations at a set cadence. Because retraining on the entire data can be time-consuming, customer operations may get delayed. With today’s launch, you can save up to 50% of retraining time by selecting to incrementally retrain your models with the new information that you have added.
To get more accurate forecasts, faster retraining, and explainability, use the new experience through the AWS Management Console or the CreateAutoPredictor API. This launch is accompanied by new pricing, which you can review at Amazon Forecast pricing.
Interpreting model explainability
Explainability helps you better understand how the attributes in your datasets, such as price, category, or holidays, impact your forecast values. Forecast uses a metric called impact scores to quantify the relative impact of each attribute and determine whether they generally increase or decrease forecast values.
Impact scores measure the relative impact attributes have on forecast values. For example, if the price attribute has an impact score that is twice as large as the brand_id attribute, you can conclude that the price of an item has twice the impact on forecast values as the product brand. Impact scores also indicate whether an attribute increases or decreases the forecasted value; a negative impact score reflects that the attribute tends to decrease the value of the forecast.
Impact scores measure the relative impact of attributes compared to each other, not their absolute impact. If an attribute has a low impact score, that doesn’t necessarily mean it has a low impact on forecast values; it means it has a lower impact on forecast values than the other attributes used by the predictor. If you change the attributes in your predictor, the impact scores may differ, and an attribute with a low impact score may have a higher score relative to the other attributes. Also, you can’t use impact scores to determine whether particular attributes improve model accuracy. You should use accuracy metrics such as weighted quantile loss and the others provided by Forecast to assess predictor accuracy.
In the following graph, we take the example of a predictor where the relative impact of the attributes, from highest to lowest, is: US holidays, promos, weather, price, and category. US holidays has the highest impact on the forecast values and tends to increase the forecasted value. Category has the lowest impact on the forecast values and tends to decrease the forecast value.
Train a new predictor with the new Forecast API
In this section, we walk through how to train a new predictor using the newly launched forecasting API through the console. To use the new CreateAutoPredictor API directly, refer to the notebook in our GitHub repo or review Training Predictors.
- On the Forecast console, create a dataset group and upload your historical demand dataset as target time series followed by any related time series or item metadata that you want to use for more accurate forecasting.
- In the navigation pane, under your dataset, choose Predictors.
- Choose Train new predictor.
- In the Predictor settings section, enter a name for your predictor, specify how far into the future you want to forecast and at what frequency, and choose the number of quantiles you want to forecast for.
- AutoPredictor is enabled by default; no further action is needed from you.
- For Optimization metric, you can choose an optimization metric to optimize AutoPredictor to tune a model for a specific accuracy metric of your choice. We leave this as default for our walkthrough.
- To get the predictor explainability report, select Enable predictor explainability.
- In the Input data configuration section, you can add local weather information and national holidays for more accurate demand forecasts.
- In the Attribute configuration section, you can choose filling options for missing values.
- Choose Start to begin training your predictor.
- After your predictor is trained, choose your predictor on the Predictors page.
On the predictor’s details page, you can view the overall predictor accuracy metrics and the explainability impact score.
- Now that your model is trained, choose Forecasts in the navigation pane.
- Choose Create a forecast.
- For Predictor, choose your trained predictor to create a forecast.
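If you prefer to script this workflow, the console steps above map to a single CreateAutoPredictor call. The following boto3 sketch is illustrative only: the predictor name, dataset group ARN, forecast horizon, frequency, and quantiles are placeholders you would replace with your own values (the notebook in our GitHub repo shows a complete example).

```python
import boto3

forecast = boto3.client("forecast")

# Placeholder names and ARNs -- substitute your own resources.
response = forecast.create_auto_predictor(
    PredictorName="my-demand-autopredictor",
    ForecastHorizon=14,                   # how far into the future to forecast
    ForecastFrequency="D",                # forecasting frequency (daily here)
    ForecastTypes=["0.1", "0.5", "0.9"],  # quantiles to generate
    DataConfig={
        "DatasetGroupArn": "arn:aws:forecast:us-east-1:123456789012:dataset-group/my-dataset-group",
        # AdditionalDatasets can be added here to include the built-in
        # weather and holidays information mentioned above.
    },
    ExplainPredictor=True,                # request the predictor explainability report
)
print(response["PredictorArn"])
```

Training runs asynchronously; you can check progress with describe_auto_predictor until the predictor becomes active.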
Retrain your predictor with new data
We now walk through how to use the Forecast console to retrain your predictor when you have new data for the same forecasting problem. You can also follow the notebook in our GitHub repo to learn how to use the CreateAutoPredictor API for retraining your predictor.
Before you retrain your predictor, you have to re-import your dataset with the latest available historical observations.
- On the Forecast console, under your dataset group in the navigation pane, choose Datasets.
In our example, we only update the target time series data. You can follow the same steps to update the related time series data as well.
- Choose the dataset name to view the details.
- In the Dataset imports section, choose Create dataset import.
- Provide the Amazon Simple Storage Service (Amazon S3) location of your dataset and complete importing your data.
- After your dataset has been imported, choose Predictors in the navigation pane.
- Select the predictor for which AutoPredictor enabled is True.
Only predictors with AutoPredictor enabled are eligible to be retrained.
- On the Predictor actions menu, choose Retrain.
- Enter a new name for the retrained predictor and choose Retrain predictor.
All the predictor configuration from the source predictor is automatically copied over to the new predictor that you retrain.
You’re redirected to the predictor details page where you can review the predictor settings.
- Now that your model is trained, choose Forecasts in the navigation pane.
- Choose Create a forecast.
- Choose your trained predictor to create a forecast.
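When scripting a retrain, the same CreateAutoPredictor API is used with a reference to the existing predictor, which copies its configuration over, consistent with the console behavior described above. The following is a minimal sketch with placeholder names and ARNs:

```python
import boto3

forecast = boto3.client("forecast")

# Placeholder ARN of the existing AutoPredictor to retrain on the newly imported data.
response = forecast.create_auto_predictor(
    PredictorName="my-demand-autopredictor-retrained",
    ReferencePredictorArn="arn:aws:forecast:us-east-1:123456789012:predictor/my-demand-autopredictor",
)
print(response["PredictorArn"])
```

Upgrading a legacy predictor, covered in the next section, follows the same pattern, with the reference pointing at the legacy predictor.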
Upgrade your existing legacy predictor to AutoPredictor
You can easily move your existing predictors to AutoPredictor to take advantage of more accurate forecasts (from a predictor that selects the best ensemble of algorithms for each item), faster retraining, and predictor explainability. Forecast takes the old predictor as a reference and creates a new AutoPredictor. You can follow the notebook in our GitHub repo to do the same through the CreateAutoPredictor API.
- On the Forecast console, choose a dataset group for which you have previously trained a predictor.
- In the navigation pane, under your dataset, choose Predictors.
An Upgrade link is next to any legacy predictor for which AutoPredictor is False.
- Select your predictor and on the Predictor actions menu, choose Upgrade.
- Enter the name of the new predictor.
All the predictor configurations from the old predictor are automatically copied over to train the new AutoPredictor.
You’re redirected to the predictor details page where you can review the predictor settings.
- Now that your model is trained, choose Forecasts in the navigation pane.
- Choose Create a forecast.
- Choose your trained predictor to create a forecast.
Conclusion
To get more accurate forecasts, faster retraining, and explainability, you can follow the steps in this post or the notebook in our GitHub repo. If you want to upgrade your existing forecasting models to the new CreateAutoPredictor API, you can do so with one click on the console or as shown in the notebook in our GitHub repo. To learn more, review Training Predictors. We recommend reviewing the pricing for using these new features. All these new capabilities are available in all Regions where Forecast is publicly available. For more information about Region availability, see AWS Regional Services.
About the Authors
Namita Das is a Sr. Product Manager for Amazon Forecast. Her current focus is to democratize machine learning by building no-code/low-code ML services. On the side, she frequently advises startups and loves training her dog with new tricks.
Jitendra Bangani is an Engineering Manager at AWS, leading a growing team of curious and driven engineers for Amazon Forecast. He started his career at Amazon as an intern in 2013; since then he has helped build engaging shopping experiences, hyperscale distributed systems, and autonomous AI services that delight Amazon and AWS customers.
Hilaf Hasson is a Machine Learning Scientist at AWS, and currently leads the R&D team of scientists working on Amazon Forecast. Before joining AWS, he held multiple faculty positions, including as an Assistant Professor of Mathematics at Stanford University.
Adarsh Singh works as a Software Development Engineer in the Amazon Forecast team. In his current role, he focuses on engineering problems and building scalable distributed systems that provide the most value to end users. In his spare time, he enjoys watching anime and playing video games.
Chinmay Bapat is a Sr. Software Development Engineer in the Amazon Forecast team. His interests lie in the applications of machine learning and building scalable distributed systems. Outside of work, he enjoys playing board games and cooking.
Amazon releases dataset to help detect counterfactual phrases
Identifying descriptions of events that did not take place in product reviews improves product retrieval results.
Next Gen Stats Decision Guide: Predicting fourth-down conversion
It is fourth-and-one on the Texans’ 36-yard line with 3:21 remaining on the clock in a tie game. Should the Colts’ head coach Frank Reich send out kicker Rodrigo Blankenship to attempt a 54-yard field goal or rely on his offense to convert a first down? Frank chose to go for it, leading to a first-down conversion and an eventual touchdown to seal the win. Was this the optimal call or a gamble that ended up working? Through a collaboration between the NFL’s Next Gen Stats team and AWS, NFL fans can now get an answer to this question.
Like the Colts-Texans example, the decision of what to do on a fourth down late in the game can be the difference between a win and a loss. While it can be tempting to focus on fourth downs late in the game, decisions made early in the game matter too: they can have reverberating effects that compound over the course of a game or season. Head coaches who consistently make the right call on fourth down put their teams in the best possible position to win, but how does a coach know what the right call is? What factors do they have to weigh, and how can a computer give fans insights into this complicated decision-making process?
The problem can be represented as a tree of choices and their respective potential outcomes. On any fourth down, a team has three main options: punt, kick a field goal, or go for it. If a team punts, their opponent generally gains possession of the ball at some point farther down the field. On a field goal attempt, the two main outcomes are the offensive team either makes the field goal or misses the field goal. If they make the field goal, they gain three points. If they miss the field goal, the defense gains possession of the ball at the location of the attempt. Similarly, if a team chooses to go for it, there are two main outcomes. Either the team gains enough yards for a first-down (or potentially a touchdown), or the defense gains possession of the ball at the end of the play.
When coaches decide what to do on a fourth-down, they must weigh all the potential outcomes and the impact of these outcomes on the odds of winning the game. To help fans understand a coach’s decision, the NFL and AWS partnered to create the Next Gen Stats Decision Guide. The Next Gen Stats Decision Guide is a suite of machine learning (ML) models designed to determine the optimal fourth-down call. The decision guide does this by predicting the odds of each potential fourth-down outcome and the resulting odds of winning the game. By comparing the odds of winning the game for each fourth-down choice, the Next Gen Stats Decision Guide provides a data-driven answer to that optimal fourth-down call.
Going back to Frank Reich’s decision, the Colts needed 0.25 yards to gain a first down. What is the probability that they convert? As shown in the following figure, our fourth-down conversion probability model predicts an 81% chance. Weighing that against the win probability of 75% if they convert (and the lower win probability if they fail), we get an expected win probability of 69% for going for it. If they instead choose to kick a field goal, the chance of making the field goal is around 42%; paired with the win probability of 71% if successful, the expected win probability is 56%. Based on these expected win probabilities, the Next Gen Stats Decision Guide recommends going for it, a 13% difference.
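To make the expected win probability calculation explicit, the following sketch weighs both branches of each choice. The success-branch numbers are the ones quoted above; the failure-branch win probabilities are not given in this post, so the values below are assumptions chosen only to illustrate the formula (a failure-branch value of roughly 43% is what makes the go-for-it numbers work out to the quoted 69%).

```python
def expected_win_prob(p_success: float, wp_if_success: float, wp_if_failure: float) -> float:
    """Expected win probability = weighted average over the two outcomes of the choice."""
    return p_success * wp_if_success + (1.0 - p_success) * wp_if_failure

# Go for it: 81% conversion chance, 75% win probability if converted,
# ~43% win probability if the conversion fails (assumed for illustration).
go_for_it = expected_win_prob(0.81, 0.75, 0.43)

# Field goal: 42% make probability, 71% win probability if made, ~45% if missed (assumed).
field_goal = expected_win_prob(0.42, 0.71, 0.45)

print(f"Go for it:  {go_for_it:.2f}")   # ~0.69
print(f"Field goal: {field_goal:.2f}")  # ~0.56
```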
In addition to fourth-down decisions, coaches must decide what to do after scoring a touchdown: the team can kick an extra point (+1 point) or attempt a two-point conversion (+2 points). The application of the Next Gen Stats Decision Guide to fourth-down plays and after-touchdown plays has been presented before, and is a good primer for this discussion. In this post, we focus on the models that determine the probability of converting a fourth down. We share how we engineered features, developed the ML model, and chose the metrics used to evaluate the quality of its predictions.
Go-for-it model
If a team chooses to go for it on a fourth-down, the team must gain enough yards to make a first-down on that single play. This means that not all fourth-downs are equal. Some require the offense to gain less than a yard, while others may occasionally require the offense to gain more than 10 yards. The location on the field, time left on the clock, and relative strengths of the teams are among the important parameters in understanding the odds of success. In building the Go-for-it model, we examine these and other factors to determine which features are most important in constructing a performant model.
Problem formulation
The odds of converting on a fourth down can be formulated as a multi-class classification problem. In this formulation, each class represents the offense gaining some number of yards on the play, and the probability of each class is used as the odds that the team gains that number of yards. The following histogram shows the yards gained on third- and fourth-down plays from 2016–2020. An initial approach might be to make each class in the model represent an integer number of yards gained, but the histogram shows that this approach would be difficult. Classes in the long tail of the graph (roughly 40–100 yards) occur infrequently, and this sort of class imbalance can be difficult to account for in model training.
To combat the potential class imbalance, we used an unequal mapping of yards to classes. Instead of each yard gained being an individual class, we used 17 classes to encompass all the potential outcomes shown in the graph.
As shown in the following table, we use one class for all negative or zero-yards-gained results. Between 1–15 yards gained, we use one class for each potential outcome. The reason for this breakdown is that 88% of fourth-down plays have somewhere between 1–15 yards to go. This enables the model to capture a large majority of fourth-down situations with high fidelity. To address plays with more than 15 yards to go, we employ a decay factor to represent the decreasing probability of getting more yards on a single play.
Yards | Model Classes (17) |
Less than or equal to 0 | 0 |
1–15 yards | 1–15 (15 classes) |
16+ yards | 16 |
The following equation shows the decay factor used, where the probability of converting (Pconversion) is the probability of getting 16 or more yards (P≥16) divided by the actual distance needed for a first down (d) minus 15 yards:

Pconversion = P≥16 / (d − 15)
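To make the class-to-probability mapping concrete, here is a minimal sketch, not the production code, of how a 17-class probability vector could be collapsed into a conversion probability for a given yards-to-go, including the decay factor for distances beyond 15 yards.

```python
import numpy as np

def conversion_probability(class_probs: np.ndarray, yards_to_go: float) -> float:
    """Collapse a 17-class probability vector into P(conversion).

    class_probs[0]      -> P(gain <= 0 yards)
    class_probs[1..15]  -> P(gain exactly k yards), k = 1..15
    class_probs[16]     -> P(gain >= 16 yards)
    """
    if yards_to_go <= 15:
        # Converting roughly means gaining at least ceil(yards_to_go) yards.
        needed = int(np.ceil(yards_to_go))
        return float(class_probs[needed:].sum())
    # Beyond 15 yards, apply the decay factor: P(>=16 yards) / (d - 15).
    return float(class_probs[16] / (yards_to_go - 15))

# Example with a hypothetical probability vector (sums to 1).
probs = np.array([0.20] + [0.05] * 15 + [0.05])
print(conversion_probability(probs, 0.25))  # fourth and inches
print(conversion_probability(probs, 8.0))   # fourth and 8
```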
Features
Just as a coach needs to consider many factors when deciding what to do in a game, the conversion probability models also have many potential features to use. Part of the modeling process involved determining which features to incorporate into the model. We used feature importance measures like correlation to help us identify several high-value features (see the following table). These features include the actual yards-to-go, the Vegas spread, and the historical aggregations of expected points added (EPA) by team and quarterback.
The actual yards-to-go is arguably the most important feature for this model, aligning with general football knowledge. The more yards a team needs to gain, the less likely the team is to achieve that outcome. What makes the actual yards-to-go metric even more valuable in this model is that it is derived from the NGS tracking data. Traditional NFL datasets often represent the yards-to-go as an integer, which obscures the variable nature of the game. With the NGS tracking data, we can get a measurement of the football’s location with sub-foot accuracy. This allows our model to understand the difference between fourth and inches versus fourth and 1 yard.
Although the actual yards-to-go is a clear metric to provide the model, some information is harder to quantify immediately and provide to the model. For example, a coach understands the unique skillsets of their team and the opposition, both on that day and historically. To assess coaching decisions, the model needs a way to use similar information. The Vegas lines are a useful condensation of vast amounts of situational and historical knowledge about the teams into a small set of numbers. Specifically, the point spread and the total points lines capture information about prevailing beliefs regarding the relative strengths of the teams, and the model found these values useful.
Input Features | Description |
actualYardsToGo | The yards to go as measured using NGS tracking data between the ball at snap and the yards-to-go marker |
isCalledPass | Whether the play is predicted to be a pass or a rush |
totalLine | The closing total (over/under) line for the game |
possessionTeamLine | The number of points the possession team is favored by according to Vegas |
possessionTeamTotal | The number of total points the possession team is expected to score as indicated by the Vegas total and spread lines |
offEpa | A team offense’s average expected points added per play over the last X number of plays in similar situations |
defEpa | A team defense’s average expected points added allowed per play over the last X number of plays in similar situations |
qbEpa | A team offense’s average expected points added per play over the last X number of plays when the quarterback on the field attempted a pass, run, or was sacked |
qbSuccessEpa | Quarterback success EPA for the last N similar plays |
Similar to how the Vegas lines provide game-level insight into relative team strengths, we can use EPA values to capture relative team strengths at a more granular level. These EPA values, calculated using other NGS models, reflect how a team has performed in similar situations in the past. The EPA features can be broken down by offense, defense, and quarterback, giving the model information about how successful the respective teams have been in the past in addition to how successful the current quarterback has been. The following figure shows the relative importance of the features after hyperparameter optimization (HPO). As discussed earlier, this feature importance makes intuitive sense.
Model training
To train the model, we used all the data from third- and fourth-down plays from the 2016–2019 regular seasons as the training set. We held out the data from 2020 as the testing set.
For the model architecture, we compared a handful of different models, including XGBoost, PyTorch Tabular, and AutoML-based models. Of these options, the XGBoost model provided the best results, and it can also be interpreted using SHapley Additive exPlanations (SHAP) feature importance measures. Because our goal is to optimize for conversion probabilities, we used the Brier score (a probabilistic loss function) to measure the performance of our models. The Brier score measures the mean squared difference between the predicted probabilities assigned to the possible outcomes and the actual outcomes; a lower Brier score is better.
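As a sketch of this setup, using placeholder data and assumed feature and label conventions rather than the actual NGS training code, the following trains a 17-class XGBoost classifier and computes a Brier score on the binary conversion outcome.

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split

# Placeholder data: X stands in for the engineered features (actualYardsToGo,
# Vegas lines, EPA aggregates, ...) and y for the yards-gained class in {0, ..., 16}.
rng = np.random.default_rng(0)
X = rng.random((5000, 9))
y = rng.integers(0, 17, size=5000)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

# Multi-class model over the 17 yards-gained classes (class count inferred from the labels).
model = xgb.XGBClassifier(objective="multi:softprob", n_estimators=200, max_depth=6, learning_rate=0.1)
model.fit(X_train, y_train)

# Collapse class probabilities into P(conversion) and score against the binary
# converted/not-converted outcome (yards-to-go assumed known for each play).
probs = model.predict_proba(X_val)                      # shape (n, 17)
yards_to_go = rng.uniform(0.1, 10.0, size=len(X_val))   # placeholder distances (<= 15)
p_convert = np.array([probs[i, int(np.ceil(d)):].sum() for i, d in enumerate(yards_to_go)])
converted = (y_val >= np.ceil(yards_to_go)).astype(float)
brier = np.mean((p_convert - converted) ** 2)
print(f"Brier score on placeholder data: {brier:.3f}")  # meaningless for random data
```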
To optimize our models, we used Amazon SageMaker hyperparameter optimization (HPO) to fine-tune XGBoost parameters such as the learning rate, max depth, subsample, alpha, and gamma. The SageMaker-managed HPO service helped us run multiple experiments in parallel to identify optimal hyperparameter configurations. Each experiment took only a few minutes because tuning jobs are distributed across 10 instances. In addition, we used SageMaker features such as automatic early stopping and warm starting from previous tuning jobs. This, combined with custom metrics, improved the performance of the model within minutes. Examples of various SageMaker-based HPO tuning jobs are available on GitHub.
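A tuning job along these lines can be set up with the SageMaker Python SDK. This is a hedged sketch, not the exact job used for this model: the container version, IAM role, S3 locations, parameter ranges, and job counts are placeholder assumptions.

```python
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.tuner import HyperparameterTuner, ContinuousParameter, IntegerParameter

session = sagemaker.Session()
image_uri = sagemaker.image_uris.retrieve("xgboost", session.boto_region_name, version="1.3-1")

estimator = Estimator(
    image_uri=image_uri,
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder role
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/go-for-it/output",        # placeholder bucket
)
estimator.set_hyperparameters(objective="multi:softprob", num_class=17, num_round=300)

tuner = HyperparameterTuner(
    estimator=estimator,
    objective_metric_name="validation:mlogloss",
    objective_type="Minimize",
    hyperparameter_ranges={
        "eta": ContinuousParameter(0.01, 0.3),
        "max_depth": IntegerParameter(3, 10),
        "subsample": ContinuousParameter(0.5, 1.0),
        "alpha": ContinuousParameter(0.0, 10.0),
        "gamma": ContinuousParameter(0.0, 5.0),
    },
    max_jobs=40,
    max_parallel_jobs=10,        # experiments distributed across 10 instances
    early_stopping_type="Auto",  # automatic early stopping
)

tuner.fit({
    "train": TrainingInput("s3://my-bucket/go-for-it/train.csv", content_type="text/csv"),
    "validation": TrainingInput("s3://my-bucket/go-for-it/validation.csv", content_type="text/csv"),
})
```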
Go-for-it model results
After training and HPO, the XGBoost model achieved a Brier score of 0.21. In addition to the Brier score, we examined the model predictions to ensure they recreate known aspects of the game. For example, the odds of converting on a fourth-down play should decrease as the number of yards needed for a first down increases. The following figure shows the model’s predicted conversion probabilities as a function of the yards-to-go. We can observe two key trends. First, as expected, the conversion probability decreases as the yards-to-go increases. Second, a team is generally better off running the ball in short yards-to-go situations and passing the ball in long yards-to-go situations.
For the Next Gen Stats Decision Guide, it’s not sufficient for the model to make correct predictions; it must also assign valid probabilities to those predictions. To examine the validity of the model probabilities, we compare the probabilities against the aggregate play outcomes, as shown in the following graph. The model predictions were binned into 10%-wide categories from 0–90%. For each bin, the fraction of plays that were converted was calculated (bar height). For an ideal model, the bin heights should sit roughly at the midpoint of each bin (solid line). The graph shows that when the model provides a conversion probability between 0–60%, the actual aggregate outcomes of these plays closely match the model’s predictions. For model predictions between 60–90%, the model appears to slightly underestimate the offense’s probability of converting (most notably between 60–70%). In situations where the agreement is poor, we can use postprocessing techniques to increase the agreement between play outcomes and the model probabilities. For an example with deep learning models, see Quantifying uncertainty in deep learning systems.
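The binning behind that comparison is simple to reproduce. The following is a small illustrative sketch (with synthetic data, not the actual play-by-play results) that buckets predicted conversion probabilities and reports the observed conversion rate per bucket.

```python
import numpy as np

def reliability_table(p_convert: np.ndarray, converted: np.ndarray, n_bins: int = 10):
    """Bin predictions into equal-width probability bins and report the
    observed conversion rate in each bin (compare to the bin midpoint)."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (p_convert >= lo) & (p_convert < hi)
        if mask.sum() == 0:
            continue
        rows.append((lo, hi, int(mask.sum()), converted[mask].mean()))
    return rows

# Example with synthetic, roughly calibrated predictions.
rng = np.random.default_rng(0)
p = rng.uniform(0, 1, 10_000)
outcomes = (rng.uniform(0, 1, 10_000) < p).astype(float)
for lo, hi, n, rate in reliability_table(p, outcomes):
    print(f"{lo:.1f}-{hi:.1f}: n={n:5d}, observed conversion rate={rate:.2f}")
```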
ML production pipeline
For the model in production, we used SageMaker for preprocessing, training, and postprocessing. The model is hosted on a highly scalable, available, and secure Amazon Elastic Kubernetes Service (Amazon EKS) cluster for production use. The following figure shows a high-level diagram of the production pipeline. All steps are automated and require minimal maintenance.
Summary
AWS and the NFL NGS team jointly developed the Next Gen Stats Decision Guide, which helps fans understand the choices coaches make at pivotal moments in the game. The odds of converting on a fourth-down play are a key component of the Next Gen Stats Decision Guide. In this post, we provided insight into how AWS helped the NFL create the model powering fourth-down conversions and discussed methods to assess model performance.
The NGS team will be hosting these models as part of the 2021 NFL season. Keep an eye out for the Next Gen Stats Decision Guide during the next NFL game.
You can find full examples of creating custom training jobs, implementing HPO, and deploying models on SageMaker at the AWS Labs GitHub repo. If you would like us to help and accelerate your use of ML, contact the Amazon ML Solutions Lab program.
About the Authors
Selvan Senthivel is a Senior ML Engineer with the Amazon ML Solutions Lab team at AWS, focusing on helping customers with machine learning and deep learning problems and end-to-end ML solutions. He was the founding engineering lead of the Amazon Comprehend Medical service and contributed to the design and architecture of multiple AWS AI services.
Lin Lee Cheong is a Senior Scientist and Manager with the Amazon ML Solutions Lab team at Amazon Web Services. She works with strategic AWS customers to explore and apply artificial intelligence and machine learning to discover new insights and solve complex problems.
Tyler Mullenbach is a Principal Data Science Manager with AWS Professional Services. He leads a global team of data science consultants focusing on helping customers turn their data into insights and bring ML models to production.
Ankit Tyagi is a Senior Software Engineer with the NFL’s Next Gen Stats team. He focuses on backend data pipelines and machine learning for delivering stats to fans. Outside of work, you can find him playing tennis, experimenting with brewing beer, or playing guitar.
Mike Band is the Lead Analyst for NFL’s Next Gen Stats. He contributes to the ideation, development, and communication of advanced football performance metrics for the NFL Media Group, NFL Broadcast Partners, and fans.
Juyoung Lee is a Senior Software Engineer with the NFL’s Next Gen Stats team. Her work focuses on designing and developing machine learning models to create stats for fans. In her spare time, she enjoys being active by playing Ultimate Frisbee and doing CrossFit.
Michael Schaefer was the Director of Product and Analytics for NFL’s Next Gen Stats. His work focuses on the design and execution of statistics, applications, and content delivered to NFL Media, NFL Broadcaster Partners, and fans.
Michael Chi is the Director of Technology for NFL’s Next Gen Stats. He is responsible for all technical aspects of the platform which is used by all 32 clubs, NFL Media and Broadcast Partners. In his free time, he enjoys being outdoors and spending time with his family.