Reviewing online fraud using Amazon Fraud Detector and Amazon A2I

Each year, organizations lose tens of billions of dollars to online fraud globally. Organizations such as ecommerce companies and credit card companies use machine learning (ML) to detect online fraud. Some of the most common types of online fraud include email account compromise (personal or business), new account fraud, and non-payment or non-delivery scams (including the use of compromised card numbers).

A common challenge with ML is the need for a large labeled dataset to create ML models for detecting fraud. Moreover, even if you have this dataset, you need the skill set and infrastructure to build, train, deploy, and scale your ML model to detect fraud with millions of events. In addition, you need humans to review the subset of high-risk fraud predictions to ensure that the results are highly accurate. Setting up a human review system with your fraud detection model requires provisioning complex workflows and managing a group of reviewers, which increases the time to market for your applications and overall costs.

In this post, we provide an approach to identify high-risk predictions from Amazon Fraud Detector and use Amazon Augmented AI (Amazon A2I) to set up a human review workflow to automatically trigger a review process for further investigation and validation.

Amazon Fraud Detector is a fully managed service that uses ML and more than 20 years of fraud detection expertise from Amazon to identify potential fraudulent activity so you can catch more online fraud faster. Amazon Fraud Detector automates the time-consuming and expensive steps to build, train, and deploy an ML model for fraud detection, making it easier for you to leverage the technology. Amazon Fraud Detector customizes each model it creates to your dataset, making its models more accurate than one-size-fits-all ML solutions. And because you pay only for what you use, you avoid large upfront expenses.

Amazon A2I is an ML service that makes it easy to build the workflows required for human review of ML predictions. Amazon A2I brings human review to all developers, removing the undifferentiated heavy lifting associated with building human review systems or managing large numbers of reviewers.

Overview of the solution

The following architecture diagram summarizes the high-level solution.

The workflow contains the following steps:

  1. The client application sends information to the Amazon Fraud Detector endpoint.
  2. Amazon Fraud Detector predicts a risk score (in the range of 0–1,000) for the input data with an ML model that is trained using historical data. A score of 0 indicates the lowest possible risk, and a score of 1,000 the highest.
  3. If the risk score for a particular prediction falls below a predefined threshold, there is no further action.
  4. If the risk score exceeds the predefined threshold (for example, a score of 900), the Amazon A2I loop starts automatically and sends the prediction for human review to an Amazon A2I private workforce (see the sketch after these steps). A private workforce can be made up of employees of your company. They open the Amazon A2I interface, review the case, and make an adjudication (approve, deny, or send it for further verification).
  5. The approval or rejection result from the private workforce is stored in Amazon Simple Storage Service (Amazon S3). From Amazon S3, it can be directly sent to the client application.
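
If you want to see how these pieces fit together before walking through the setup, the following is a minimal sketch of the routing logic in Python (Boto3). It assumes the detector (fraud_detector), event type (registration), entity type (customer), model score name, and Amazon A2I flow definition that you create later in this post; the helper function name is illustrative.

import json
import uuid
from datetime import datetime, timezone

import boto3

fraud_client = boto3.client('frauddetector')
a2i_client = boto3.client('sagemaker-a2i-runtime')

SCORE_THRESHOLD = 900  # tune this to your business requirements


def score_and_route(event_variables, flow_definition_arn):
    """Score an event and send high-risk predictions to human review."""
    event_id = str(uuid.uuid4())
    pred = fraud_client.get_event_prediction(
        detectorId='fraud_detector',
        detectorVersionId='1',
        eventId=event_id,
        eventTypeName='registration',
        eventTimestamp=datetime.now(timezone.utc).strftime('%Y-%m-%dT%H:%M:%SZ'),
        entities=[{'entityType': 'customer', 'entityId': event_id}],
        eventVariables=event_variables,
    )
    score = pred['modelScores'][0]['scores']['sample_fraud_detection_insightscore']
    if score > SCORE_THRESHOLD:
        # High risk: start an Amazon A2I human loop so a reviewer can adjudicate
        a2i_client.start_human_loop(
            HumanLoopName=f'fraud-review-{event_id}',
            FlowDefinitionArn=flow_definition_arn,
            HumanLoopInput={'InputContent': json.dumps({
                'score': pred['modelScores'][0]['scores'],
                'taskObject': event_variables,
            })},
        )
    return pred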

Solution walkthrough

In this post, we set up Amazon Fraud Detector using the AWS Management Console, and set up Amazon A2I using an Amazon SageMaker notebook. The following steps outline the detailed solution:

  1. Train and deploy the Amazon Fraud Detector model using historical data.
  2. Set up an Amazon A2I human loop with Amazon Fraud Detector.
  3. Use the model to predict the risk score for new input data.
  4. Set up an Amazon A2I human workflow and loop.

Prerequisites

Before getting started, you must complete the following prerequisite steps:

  1. Download the training data. For this post, we use synthetic training data.
  2. Create an S3 bucket named fraud-detector-a2i and upload the training data to the bucket.

Training and deploying the Amazon Fraud Detector model

This section covers the high-level steps for building the model and creating a fraud detector:

  1. Create an event to evaluate for fraud.
  2. Define the model and training details to train the model using the data previously uploaded to Amazon S3.
  3. Deploy the model.
  4. Create the detector. 

Creating an event

Navigate to the Amazon Fraud Detector console. You uploaded the training dataset to Amazon S3 in the prerequisite steps. In this step, we create an event. An event is a business activity that is evaluated for fraud risk, and the event type defines the structure for an event sent to Amazon Fraud Detector. (If you prefer to script this setup, a Boto3 sketch follows the console steps below.)

  1. On the Amazon Fraud Detector console, choose Create event.
  2. For Name, enter registration.
  3. For Entity, choose Create new entity.

The entity represents who is performing or triggering the event.

  4. For Entity type name, enter customer.

  5. For Choose how to define this event’s variables, choose Select variables from a training dataset.
  6. For AWS Identity and Access Management (IAM) role, choose Create IAM role.

  7. In the Create IAM role section, enter the specific bucket name where you uploaded your training data.

The bucket name you enter must match the S3 bucket where you uploaded your training data; otherwise, you get an Access Denied exception.

  8. Choose Create role.

  9. For Data location, enter the path to your training data.
  10. Choose Upload.

This pulls in the variables from the previously uploaded dataset. Choose the variable types as shown in the following screenshot.

You need to create at least two labels for the model to use.

  11. For Labels, choose fraud and legit.

  12. Choose Create event type.
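
The console steps above are the path this post follows. If you prefer to script the same event setup, a rough Boto3 sketch looks like the following; the variable-type mappings are assumptions (the console infers them from your dataset), so adjust them to match your data.

import boto3

fraud_client = boto3.client('frauddetector')

# The entity that performs the event
fraud_client.put_entity_type(name='customer', description='Customer registering an account')

# The two labels the model trains on
fraud_client.put_label(name='fraud', description='Fraudulent registration')
fraud_client.put_label(name='legit', description='Legitimate registration')

# Variables must exist before the event type references them
variable_types = {
    'email_address': 'EMAIL_ADDRESS',
    'ip_address': 'IP_ADDRESS',
    'billing_state': 'BILLING_STATE',
    'billing_postal': 'BILLING_ZIP',
    'billing_address': 'BILLING_ADDRESS_L1',
    'phone_number': 'PHONE_NUMBER',
    'user_agent': 'USERAGENT',
}
for name, variable_type in variable_types.items():
    fraud_client.create_variable(
        name=name,
        dataType='STRING',
        dataSource='EVENT',
        defaultValue='<unknown>',
        variableType=variable_type,
    )

# The event type ties the entity, variables, and labels together
fraud_client.put_event_type(
    name='registration',
    entityTypes=['customer'],
    eventVariables=list(variable_types.keys()),
    labels=['fraud', 'legit'],
)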

Creating the model

When the event is successfully created, move on to create the model.

  1. On the Define model details page, for Model name, enter sample_fraud_detection.
  2. For Model type, choose Online Fraud Insights.
  3. For Event type, choose registration.
  4. For IAM role, choose the role you created earlier or create a new one.
  5. For Training data location, enter the path to your training data; for example, s3://<bucket-name>/<object name>.
  6. Choose Next.

  7. On the Configure training page, for Model inputs, select all the variables from your historical event dataset.
  8. For Fraud labels, choose fraud.
  9. For Legitimate labels, choose legit.
  10. Choose Next.

  11. Choose Create and train model.

The process of creating and training the model takes approximately 45 minutes to complete. When training is complete, you can check model performance by choosing the model version.

Amazon Fraud Detector validates model performance using 15% of your data that was not used to train the model and provides performance metrics, including the confusion matrix and the area under the curve (AUC). You need to consider these metrics together with your business objectives (for example, minimizing false positives). For further details on the metrics and how to determine thresholds, see Fraud Detector Training performance metrics.

The following screenshot shows our model performance.
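
You can also check training status and metrics programmatically. The following is a sketch using the DescribeModelVersions API; the exact response fields may vary slightly, so verify them against the API reference.

import boto3

fraud_client = boto3.client('frauddetector')

response = fraud_client.describe_model_versions(
    modelId='sample_fraud_detection',
    modelType='ONLINE_FRAUD_INSIGHTS',
)
for version in response['modelVersionDetails']:
    print(version['modelVersionNumber'], version['status'])
    # Training metrics (such as AUC) are available once training is complete
    metrics = version.get('trainingResult', {}).get('trainingMetrics', {})
    print('AUC:', metrics.get('auc'))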

Deploying the model

When the model is trained, you’re ready to deploy it. (A programmatic alternative follows these steps.)

  1. Choose your model (sample_fraud_detection) and the version you want to deploy.
  2. On the model version details page, on the Actions menu, choose Deploy model version.
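
If you’re scripting the workflow instead of using the console, deploying a model version corresponds to activating it with the UpdateModelVersionStatus API. The following is a minimal sketch; the version number 1.0 matches the first trained version in this post.

import boto3

fraud_client = boto3.client('frauddetector')

# Setting the status to ACTIVE deploys the model version; INACTIVE undeploys it
fraud_client.update_model_version_status(
    modelId='sample_fraud_detection',
    modelType='ONLINE_FRAUD_INSIGHTS',
    modelVersionNumber='1.0',
    status='ACTIVE',
)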

Creating a detector

After you deploy your model, you need to create a detector to hold your deployed model and decision logic.

  1. On the Amazon Fraud Detector console, choose Detectors.
  2. Choose Create detector.
  3. For Detector name, enter fraud_detector.
  4. For Event type, choose registration.
  5. Choose Next.

  6. In the Add model section, for Model, choose your model and its version.

  7. Choose Next.

You need to create rules to interpret what is considered a high-risk event based on the model score produced by your detector.

  8. In the Add rules section, for Name, enter high_fraud_risk.
  9. For Expression, enter the following code:
    $sample_fraud_detection_insightscore > 900

Each rule must contain a single expression that captures your business logic. All expressions must evaluate to a Boolean value (true or false) and be less than 4,000 characters in length. If-else type conditions are not supported. All variables used in the expression must be predefined in the evaluated event type. For help with more advanced expressions, see Rule language reference.

  10. For Outcomes, choose the outcome you want for your rule.

An outcome is the result of a fraud prediction. Create an outcome for each possible fraud prediction result. For example, you may want outcomes to represent risk levels (high_risk, medium_risk, and low_risk) or actions (approve, review). You can add one or more outcomes to a rule.

  11. Choose Add rule to run the rule validation checker and save the rule.

  12. In the Configure rule execution section, for Rule execution modes, select First matched.
  13. Choose Next.

  14. In the Review and Create section, choose Create detector.

We have successfully created the detector.
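
For reference, the following is a rough Boto3 sketch of the same detector setup (outcome, detector, rule, and detector version). The outcome name verify is an assumption that mirrors the sample prediction output later in this post; use whatever outcomes you defined.

import boto3

fraud_client = boto3.client('frauddetector')

# An outcome is the result returned when a rule matches
fraud_client.put_outcome(name='verify', description='Send the event for human review')

# The detector holds the model and decision logic for the registration event type
fraud_client.put_detector(detectorId='fraud_detector', eventTypeName='registration')

# The rule expression references the model score produced by the deployed model
rule = fraud_client.create_rule(
    ruleId='high_fraud_risk',
    detectorId='fraud_detector',
    expression='$sample_fraud_detection_insightscore > 900',
    language='DETECTORLANG',
    outcomes=['verify'],
)

# A detector version ties the deployed model version and the rules together
fraud_client.create_detector_version(
    detectorId='fraud_detector',
    rules=[{
        'detectorId': 'fraud_detector',
        'ruleId': 'high_fraud_risk',
        'ruleVersion': rule['rule']['ruleVersion'],
    }],
    modelVersions=[{
        'modelId': 'sample_fraud_detection',
        'modelType': 'ONLINE_FRAUD_INSIGHTS',
        'modelVersionNumber': '1.0',
    }],
    ruleExecutionMode='FIRST_MATCHED',
)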

Setting up an Amazon A2I human loop with Amazon Fraud Detector

In this section, we show you how to configure an Amazon A2I custom task type with Amazon Fraud Detector using the accompanying Jupyter notebook. We use a custom task type to integrate a human review loop into any ML workflow. You can use a custom task type to integrate Amazon A2I with other AWS services like Amazon Comprehend, Amazon Transcribe, and Amazon Translate, as well as your own custom ML workflows.

To get started, complete the following steps:

  1. Create a notebook instance in SageMaker.

Make sure your SageMaker notebook has AWS Identity and Access Management (IAM) roles and permissions for FraudDetectorFullAccess and SagemakerFullAccess, and Amazon S3 read and write access to the bucket you specified in BUCKET.

  2. When the notebook is active, choose Open Jupyter.
  3. On the Jupyter dashboard, choose New, and then choose Terminal.
  4. In the terminal, clone the GitHub repository for this post (linked in the Conclusion).
  5. Open the notebook by choosing Amazon A2I and Amazon Fraud Detector.ipynb in the root folder.
  6. Run the Install and Setup steps to install the necessary libraries.
  7. To set up the S3 bucket in the notebook, enter the bucket you created in the prerequisite steps (the one where you uploaded your training data):
    # Replace the following with your bucket name
    BUCKET = 'fraud-detector-a2i'

  8. Run the next cells to verify that your bucket is in the same Region in which you’re running this notebook.

For this post, you create a private work team and add only one user (you) to it.

  9. On the SageMaker console, create a private workforce.
  10. After you create the private workforce, find the workforce ARN and enter the ARN in the notebook:
    WORKTEAM_ARN = "your workteam arn"

  11. Run the notebook cells to complete the setup, such as initializing the Amazon Fraud Detector Boto3 client.
  12. After you create your fraud detector model, replace MODEL_NAME, DETECTOR_NAME, EVENT_TYPE, and ENTITY_TYPE with your model values:
    MODEL_NAME = 'sample_fraud_detection'
    DETECTOR_NAME = 'fraud_detector'
    EVENT_TYPE = 'registration'
    ENTITY_TYPE = 'customer'
    
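
The setup cells referenced above initialize the clients and variables that the rest of this walkthrough relies on (client, sagemaker, a2i_runtime_client, s3, ROLE, and OUTPUT_PATH). If you’re recreating the notebook from scratch, the initialization looks roughly like the following; the a2i-results prefix is an assumption, so point OUTPUT_PATH wherever you want the human answers stored.

import boto3
from sagemaker import get_execution_role  # SageMaker Python SDK

ROLE = get_execution_role()                  # IAM role used by the flow definition
OUTPUT_PATH = f's3://{BUCKET}/a2i-results'   # where Amazon A2I writes human answers

client = boto3.client('frauddetector')                      # Amazon Fraud Detector
sagemaker = boto3.client('sagemaker')                       # task UIs, flow definitions, workteams
a2i_runtime_client = boto3.client('sagemaker-a2i-runtime')  # start and describe human loops
s3 = boto3.client('s3')                                     # read A2I results from Amazon S3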

Testing the fraud detector with a sample data record

Run the Amazon Fraud Detector Get Event Prediction API on sample data. This API provides a model score on the event and an outcome based on the designated detector. See the following code:

eventId = uuid.uuid1()
timestampStr = '2013-07-16T19:00:00Z'

# Construct a sample data record

rec = {
    'ip_address': '36.72.99.64',
    'email_address': 'fake_bakermichael@example.net',
    'billing_state': 'NJ',
    'user_agent': 'Mozilla',
    'billing_postal': '32067',
    'phone_number': '555-555-0100',
    'billing_address': '12351 Amanda Knolls Fake St'
}


pred = client.get_event_prediction(detectorId=DETECTOR_NAME, 
                                   detectorVersionId='1',
                                   eventId = str(eventId),
                                   eventTypeName = EVENT_TYPE,
                                   eventTimestamp = timestampStr, 
                                   entities = [{'entityType': ENTITY_TYPE, 'entityId':str(eventId.int)}],
                                   eventVariables=rec)

The API provides the following output:

pred
{'modelScores': [{'modelVersion': {'modelId': 'sample_fraud_detection',
    'modelType': 'ONLINE_FRAUD_INSIGHTS',
    'modelVersionNumber': '1.0'},
   'scores': {'sample_fraud_detection_insightscore': 992.0}}],
 'ruleResults': [{'ruleId': 'high-risk', 'outcomes': ['verify']}],
 'ResponseMetadata': {'RequestId': '8902a475-df5b-470d-a990-ec217d5908cd',
  'HTTPStatusCode': 200,
  'HTTPHeaders': {'date': 'Mon, 02 Nov 2020 17:22:11 GMT',
   'content-type': 'application/x-amz-json-1.1',
   'content-length': '250',
   'connection': 'keep-alive',
   'x-amzn-requestid': '8902a475-df5b-470d-a990-ec217d5908cd'},
  'RetryAttempts': 0}}

Run the following notebook cell to print the model score:

pred['modelScores'][0]['scores']['sample_fraud_detection_insightscore']

Creating a human task UI using a custom worker task template

Use HTML elements to create a custom worker template that Amazon A2I uses to generate your worker task UI. For instructions on creating a custom template, see Create Custom Worker Task Template. Amazon A2I provides over 70 prebuilt UIs or worker task templates for various use cases. For this post, we use the following custom task template to flag the high-risk output as Fraudulent, Valid, or Needs further Review:

template="""<script src="https://assets.crowd.aws/crowd-html-elements.js"></script>

<crowd-form>
      <crowd-classifier
          name="category"
          categories="['Fraudulent', 'Valid', 'Needs further Review']"
          header="Select the most relevant category"
      >
      <classification-target>
        <h3><strong>Risk Score (out of 1000): </strong><span style="color: #ff9900;">{{ task.input.score.sample_fraud_detection_insightscore }}</span></h3>
        <hr>
        <h3> Claim Details </h3>
        <p style="padding-left: 50px;"><strong>Email Address   :  </strong>{{ task.input.taskObject.email_address }}</p>
        <p style="padding-left: 50px;"><strong>Billing Address :  </strong>{{ task.input.taskObject.billing_address }}</p>
        <p style="padding-left: 50px;"><strong>Billing State   :  </strong>{{ task.input.taskObject.billing_state }}</p>
        <p style="padding-left: 50px;"><strong>Billing Zip     :  </strong>{{ task.input.taskObject.billing_postal }}</p>
        <p style="padding-left: 50px;"><strong>Originating IP  :  </strong>{{ task.input.taskObject.ip_address }}</p>
        <p style="padding-left: 50px;"><strong>Phone Number    :  </strong>{{ task.input.taskObject.phone_number }}</p>
        <p style="padding-left: 50px;"><strong>User Agent      :  </strong>{{ task.input.taskObject.user_agent }}</p>
      </classification-target>
      
      <full-instructions header="Claim Verification instructions">
      <ol>
        <li><strong>Review</strong> the claim application and documents carefully.</li>
        <li>Mark the claim as valid or fraudulent</li>
      </ol>
      </full-instructions>

      <short-instructions>
           Choose the most relevant category that is expressed by the text. 
      </short-instructions>
    </crowd-classifier>

</crowd-form>
"""

You can create a worker task template using the SageMaker console and the SageMaker API operation CreateHumanTaskUi. Run the following cell to create the human task UI for fraud detection:

def create_task_ui(task_ui_name, template):
    '''
    Creates a Human Task UI resource.

    Returns:
    struct: HumanTaskUiArn
    '''
    response = sagemaker.create_human_task_ui(
        HumanTaskUiName=task_ui_name,
        UiTemplate={'Content': template})
    return response
taskUIName = 'fraud'+ str(uuid.uuid1())

# Create task UI
humanTaskUiResponse = create_task_ui(taskUIName, template)
humanTaskUiArn = humanTaskUiResponse['HumanTaskUiArn']
print(humanTaskUiArn)

Creating a human review workflow definition

Workflow definitions allow you to specify the following:

  • The worker template or human task UI you created in the previous step.
  • The workforce that your tasks are sent to. For this post, it’s the private workforce you created in the prerequisite steps.
  • The instructions that your workforce receives.

This post uses the Create Flow Definition API to create a workflow definition. Run the following cell in the notebook:

def create_flow_definition(flow_definition_name):
    '''
    Creates a Flow Definition resource

    Returns:
    struct: FlowDefinitionArn
    '''
    response = sagemaker.create_flow_definition(
            FlowDefinitionName= flow_definition_name,
            RoleArn= ROLE,
            HumanLoopConfig= {
                "WorkteamArn": WORKTEAM_ARN,
                "HumanTaskUiArn": humanTaskUiArn,
                "TaskCount": 1,
                "TaskDescription": "Please review the  data and flag for potential fraud",
                "TaskTitle": " Review and Approve / Reject Amazon Fraud detector predictions. "
            },
            OutputConfig={
                "S3OutputPath" : OUTPUT_PATH
            }
        )
    
    return response['FlowDefinitionArn']

Optionally, you can create this workflow definition on the Amazon A2I console. For instructions, see Create a Human Review Workflow.

Setting threshold to start a human loop for high-risk scores from Amazon Fraud Detector predictions

As outlined earlier, you can invoke the Amazon Fraud Detector model endpoint to detect the risk score for given input data. If the risk score is greater than a certain threshold (for example, 900), you create and start the Amazon A2I human loop.

You can change the value of the SCORE_THRESHOLD depending on the risk level for triggering the human review. pred refers to the prediction from the sample record rec from the earlier code. Run the following cell to set up your threshold:

FraudScore= pred['modelScores'][0]['scores']['sample_fraud_detection_insightscore']
print(FraudScore)

SCORE_THRESHOLD = 900
if FraudScore > SCORE_THRESHOLD:

    # Create the human loop input JSON object
    humanLoopInput = {
        'score' : pred['modelScores'][0]['scores'],
        'taskObject': rec
    }

    print(json.dumps(humanLoopInput))

The following is the response:

996.0
{"score": {"sample_fraud_detection_insightscore": 996.0}, "taskObject": {"ip_address": "36.72.99.64", "email_address": "fake_bakermichael@example.net", "billing_state": "NJ", "user_agent": "Mozilla", "billing_postal": "32067", "phone_number": "'555-555-0100", "billing_address": "12351 Amanda Knolls Fake St"}}

Starting a human loop for high-risk Amazon Fraud Detector predictions

We send the human loop input for human review and start the Amazon A2I loop with the StartHumanLoop API. When using Amazon A2I for a custom task, a human loop starts when StartHumanLoop is called in your application. Run the following cell in the notebook to start the human loop:

# Create flow definition
uniqueId = str(int(round(time.time() * 1000)))
flowDefinitionName = f'fraud-detector-a2i-{uniqueId}'
flowDefinitionArn = create_flow_definition(flowDefinitionName)

# Start the human loop
humanLoopName = 'Fraud-detector-' + str(int(round(time.time() * 1000)))
print('Starting human loop - ' + humanLoopName)

response = a2i_runtime_client.start_human_loop(
                            HumanLoopName=humanLoopName,
                            FlowDefinitionArn= flowDefinitionArn,
                            HumanLoopInput={
                                'InputContent': json.dumps(humanLoopInput)
                                }
                            )

Checking the status of the human loop

Run the following accompanying notebook cell to get a login link to navigate to the private workforce portal:

workteamName = WORKTEAM_ARN[WORKTEAM_ARN.rfind('/') + 1:]
print("Navigate to the private worker portal and do the tasks. Make sure you've invited yourself to your workteam!")
print('https://' + sagemaker.describe_workteam(WorkteamName=workteamName)['Workteam']['SubDomain'])

Use the generated link to log in to the private worker portal. Choose Start working to review the results.

On the next page, you can review and classify the fraud detector’s response or send it for further reviews.

The private worker can review the results and submit a response by selecting an option (for example, Needs further Review) and choosing Submit.
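
The evaluation code in the next section iterates over a completed_loops list that the accompanying notebook builds for you. If you’re assembling it yourself, a minimal sketch using the loop you started earlier looks like the following.

# Collect the human loops that reviewers have finished
completed_loops = []

resp = a2i_runtime_client.describe_human_loop(HumanLoopName=humanLoopName)
if resp['HumanLoopStatus'] == 'Completed':
    completed_loops.append(humanLoopName)
else:
    print(f'{humanLoopName} is still {resp["HumanLoopStatus"]}; finish the task in the worker portal.')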

Evaluating the results

When the labeling work is complete for each high-risk prediction, your results should be available in the S3 output path specified in the human review workflow definition. The human answers (labels) are returned and saved in a JSON file. Run the notebook cell to get the results from Amazon S3:

import re
import pprint
pp = pprint.PrettyPrinter(indent=2)

def retrieve_a2i_results_from_output_s3_uri(bucket, a2i_s3_output_uri):
    '''
    Gets the json file published by A2I and returns a deserialized object
    '''
    splitted_string = re.split('s3://' +  bucket + '/', a2i_s3_output_uri)
    output_bucket_key = splitted_string[1]

    response = s3.get_object(Bucket=bucket, Key=output_bucket_key)
    content = response["Body"].read()
    return json.loads(content)
    

for human_loop_name in completed_loops:

    describe_human_loop_response = a2i_runtime_client.describe_human_loop(
        HumanLoopName=human_loop_name
    )
    
    print(f'\nHuman Loop Name: {describe_human_loop_response["HumanLoopName"]}')
    print(f'Human Loop Status: {describe_human_loop_response["HumanLoopStatus"]}')
    print(f'Human Loop Output Location: {describe_human_loop_response["HumanLoopOutput"]["OutputS3Uri"]}\n')

    # Print the human answers retrieved from Amazon S3
    pp.pprint(retrieve_a2i_results_from_output_s3_uri(BUCKET, describe_human_loop_response['HumanLoopOutput']['OutputS3Uri']))

The following code is the human reviewed output with labels you just submitted:

Human Loop Name: Fraud-detector-1613589638354
Human Loop Status: Completed
Human Loop Output Location: : s3://a2i-fd-demos-2020/a2i-results/fraud-detector-a2i-1613589635065/2021/02/17/19/20/38/Fraud-detector-1613589638354/output.json 

{ 'flowDefinitionArn': 'arn:aws:sagemaker:us-east-1:534095625703:flow-definition/fraud-detector-a2i-1613589635065',
  'humanAnswers': [ { 'acceptanceTime': '2021-02-17T19:20:52.563Z',
                      'answerContent': { 'category': { 'label': 'Needs furthur '
                                                                'review'}},
                      'submissionTime': '2021-02-17T19:23:38.092Z',
                      'timeSpentInSeconds': 165.529,
                      'workerId': '7fe4cd6b55282093',
                      'workerMetadata': { 'identityData': { 'identityProviderType': 'Cognito',
                                                            'issuer': 'https://cognito-idp.us-east-1.amazonaws.com/us-east-1_1DCLqiVmd',
                                                            'sub': 'ec69f8cb-3505-4fef-a2d7-56d1b974644a'}}}],
  'humanLoopName': 'Fraud-detector-1613589638354',
  'inputContent': { 'score': {'sample_fraud_detection_insightscore': 996},
                    'taskObject': { 'billing_address': '12351 Amanda Knolls '
                                                       'Fake St',
                                    'billing_postal': '32067',
                                    'billing_state': 'NJ',
                                    'email_address': 'fake_bakermichael@example.net',
                                    'ip_address': '36.72.99.64',
                                    'phone_number': '555-555-0100',
                                    'user_agent': 'Mozilla'}}}

To improve the performance of the existing Amazon Fraud Detector model, you can combine the preceding JSON response from Amazon A2I with your existing training dataset and retrain your model as a new version.
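
How you fold reviewed results back into training data depends on your pipeline, but as a rough sketch you can map each reviewer label back to a Fraud Detector label and append the event to a CSV for the next training run. The label mapping, file name, and EVENT_LABEL column below are assumptions; align them with your own training data schema (which also needs an event timestamp column).

import csv
import os

# Map the worker UI categories to Fraud Detector labels (assumed mapping)
LABEL_MAP = {'Fraudulent': 'fraud', 'Valid': 'legit'}


def append_reviewed_event(a2i_output, csv_path='reviewed_events.csv'):
    """Append a human-reviewed event to a CSV for a future training run."""
    answer = a2i_output['humanAnswers'][0]['answerContent']['category']['label']
    label = LABEL_MAP.get(answer)
    if label is None:
        return  # for example, 'Needs further Review': route to another queue instead

    row = dict(a2i_output['inputContent']['taskObject'])
    row['EVENT_LABEL'] = label

    write_header = not os.path.exists(csv_path)
    with open(csv_path, 'a', newline='') as f:
        writer = csv.DictWriter(f, fieldnames=list(row.keys()))
        if write_header:
            writer.writeheader()
        writer.writerow(row)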

Cleaning up

To avoid incurring unnecessary charges, delete the resources used in this walkthrough when you no longer need them, such as the Amazon Fraud Detector model and detector, the SageMaker notebook instance, and the Amazon A2I human review workflow.

Conclusion

This post demonstrated how you can detect online fraud using Amazon Fraud Detector and set up human review workflows using an Amazon A2I custom task type to review and validate high-risk predictions. If this post helps you or inspires you to solve a problem, we would love to hear about it! The code for this solution is available on the GitHub repo for you to use and extend. Contributions are always welcome!


About the Authors

Srinath Godavarthi is a Senior Solutions Architect at AWS and is based in the Washington, DC, area. In that role, he helps public sector customers achieve their mission objectives with well-architected solutions on AWS. Prior to AWS, he worked with various systems integrators in healthcare, public safety, and telecom verticals. He focuses on innovative solutions using AI and ML technologies.

 

Mona Mona is an AI/ML Specialist Solutions Architect based out of Arlington, VA. She works with the World Wide Public Sector team and helps customers adopt machine learning at scale. She is passionate about NLP and ML explainability in AI/ML. Prior to AWS, she completed her master’s in Computer Information Systems with a major in Big Data Analytics, and worked for various IT consultancies in the global markets domain.

 

Pranusha Manchala is a Solutions Architect at AWS based in Virginia. She works with hundreds of EdTech customers and provides them with architectural guidance for building highly scalable and cost-optimized applications on AWS. She is interested in machine learning and artificial intelligence and enjoys diving deep into these technologies. Prior to AWS, she completed her master’s in Computer Science with a double major in Networking and Cloud Computing.
