The COVID-19 global pandemic has accelerated the need to verify and onboard users online across several industries, such as financial services, insurance, and healthcare. When it comes to user experience, it is crucial to provide a frictionless transaction while maintaining a high standard for identity verification. The question is, how do you verify real people in the digital world?
Amazon Rekognition provides pre-trained facial recognition and analysis capabilities that you can use for identity verification in online applications, such as banking, benefits, ecommerce, and much more.
In this post, we present the “ID + Selfie” identity verification design pattern and sample code you can use to create your own identity verification REST endpoint. This is a common design pattern that you can incorporate into existing or new solutions that require face-based identity verification. The user presents a form of identification, like a driver’s license or passport, then captures a real-time selfie with the application. We then compare the face from the document with the selfie taken on their device.
The Amazon Rekognition CompareFaces API
At the core of the “ID + Selfie” design pattern is the comparison of the face in the selfie to the face on the identification document. For this, we use the Amazon Rekognition CompareFaces API. The API compares a face in the source input image with a face or faces detected in the target input image. In the following example, we compare a sample driver’s license (left) with a selfie (right).
The following is an example of the API code:
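A minimal sketch of such a call using boto3 is shown below. The image file names are placeholders, and the response-parsing helper is an illustrative assumption, not the exact code from the original post:

```python
def best_similarity(response):
    """Return the highest Similarity score among FaceMatches, or 0.0 if none."""
    matches = response.get("FaceMatches", [])
    return max((m["Similarity"] for m in matches), default=0.0)


def verify_identity(client, source_bytes, target_bytes, threshold=80.0):
    """Call the Rekognition CompareFaces API and return the best similarity score.

    source_bytes is the selfie image and target_bytes is the ID document image.
    """
    response = client.compare_faces(
        SourceImage={"Bytes": source_bytes},
        TargetImage={"Bytes": target_bytes},
        SimilarityThreshold=threshold,
    )
    return best_similarity(response)


# Example usage (requires AWS credentials and local image files):
#   import boto3
#   rekognition = boto3.client("rekognition")
#   with open("selfie.jpg", "rb") as s, open("id_document.jpg", "rb") as t:
#       print(verify_identity(rekognition, s.read(), t.read()))
```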
Several values are returned in the CompareFaces API response. We focus on the Similarity value returned in FaceMatches to validate that the selfie matches the ID provided.
Understanding key tuning parameters
SimilarityThreshold is set to 80% by default and only returns results with a similarity score greater than or equal to 80%. Adjust the value by specifying the SimilarityThreshold parameter in the request.
QualityFilter is an input parameter to filter out detected faces that don’t meet a required quality bar. The quality bar is based on a variety of common use cases. Use QualityFilter to set the quality bar by specifying LOW, MEDIUM, or HIGH. If you don’t want to filter poor quality faces, specify NONE. The default value is NONE.
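As a sketch of how these two tuning parameters fit into a request, the helper below assembles CompareFaces keyword arguments; the helper itself is illustrative, not part of the original solution:

```python
def build_compare_request(source_bytes, target_bytes,
                          similarity_threshold=80.0, quality_filter="NONE"):
    """Assemble CompareFaces keyword arguments with the two tuning parameters.

    QualityFilter accepts NONE, AUTO, LOW, MEDIUM, or HIGH.
    """
    allowed = {"NONE", "AUTO", "LOW", "MEDIUM", "HIGH"}
    if quality_filter not in allowed:
        raise ValueError(f"QualityFilter must be one of {sorted(allowed)}")
    return {
        "SourceImage": {"Bytes": source_bytes},
        "TargetImage": {"Bytes": target_bytes},
        "SimilarityThreshold": similarity_threshold,
        "QualityFilter": quality_filter,
    }


# Usage with a boto3 Rekognition client:
#   response = rekognition.compare_faces(
#       **build_compare_request(selfie_bytes, id_bytes, 90.0, "HIGH"))
```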
Solution overview

You can create an “ID + Selfie” API for digital identity verification by deploying the following components:
- A REST API with a POST method that allows us to send the selfie and identification payload and returns a response, in this case the similarity score
- A function to receive the payload, convert the images to the proper format, and call the Amazon Rekognition CompareFaces API
The following diagram illustrates the solution architecture and workflow.
The workflow contains the following steps:
- The user uploads the required identification document and a selfie.
- The client submits the identification document and selfie to the REST endpoint.
- The REST endpoint returns a similarity score to the client.
- An evaluation is done through business logic in your application. For example, if the similarity score is below 80%, it fails the digital identity check; otherwise it passes the digital identity check.
- The client sends the status to the user.
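The threshold evaluation in step 4 can be sketched as a simple check in your application code; the function name and return shape here are illustrative:

```python
def passes_identity_check(similarity, threshold=80.0):
    """Return True when the similarity score meets the business threshold.

    The threshold should be consistent with the SimilarityThreshold passed
    to the CompareFaces API, so borderline matches are handled in one place.
    """
    return similarity >= threshold


# Example: a score of 85.2 passes, 79.9 fails the digital identity check.
```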
The Lambda function converts the incoming payload from base64 to bytes for each image, sends the source (selfie) and target (identification) images to the Amazon Rekognition compare_faces API, and returns the similarity score received in the body of the API response. See the following code:
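The following is a hedged sketch of such a handler; the "source" and "target" field names in the request body are assumptions about the payload shape, not necessarily the exact keys used in the repository:

```python
import base64
import json


def lambda_handler(event, context, client=None):
    """Decode base64 images from the request body, call CompareFaces,
    and return the highest similarity score found.

    The client parameter allows a stub to be injected for testing; in
    Lambda it is created from boto3 on first use.
    """
    if client is None:
        import boto3  # imported lazily so the handler can be unit-tested offline
        client = boto3.client("rekognition")

    body = json.loads(event["body"])
    source_bytes = base64.b64decode(body["source"])  # selfie
    target_bytes = base64.b64decode(body["target"])  # identification document

    response = client.compare_faces(
        SourceImage={"Bytes": source_bytes},
        TargetImage={"Bytes": target_bytes},
        SimilarityThreshold=80.0,
    )
    matches = response.get("FaceMatches", [])
    similarity = max((m["Similarity"] for m in matches), default=0.0)
    return {"statusCode": 200, "body": json.dumps({"similarity": similarity})}
```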
Deploy the project
- Set up a user who has permissions to programmatically deploy the solution resources through the AWS CDK.
- Set up the AWS Command Line Interface (AWS CLI). For instructions, refer to Configuring the AWS CLI.
- If this is your first time using the AWS CDK, complete the prerequisites listed in Working with the AWS CDK in Python.
- Clone the GitHub repository.
- Create the virtual environment. The command you use depends on your OS:
- If using Windows, run the following command in your terminal window from the source of the cloned repository:
- If using Mac or Linux, run the following command in your terminal window from the source of the cloned repository:
- After activating the virtual environment, install the app’s standard dependencies:
- Now that the environment is set up and the requirements are met, we can issue the AWS CDK deployment command to deploy this project to AWS:
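Under the usual AWS CDK Python conventions, the environment setup and deployment commands from the steps above look roughly like the following; the .venv directory and requirements.txt file names are conventions assumed here and may differ in the repository:

```shell
# Create and activate the virtual environment.
# Windows:
#   python -m venv .venv
#   .venv\Scripts\activate.bat
# macOS/Linux:
python3 -m venv .venv
source .venv/bin/activate

# Install the app's standard dependencies.
pip install -r requirements.txt

# Deploy the project to your AWS account and Region.
cdk deploy
```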
Make API calls
We need to send the payload in base64 format to the REST endpoint. We use a Python file to make the API call, which allows us to open the source and target files, convert them to base64, and send the payload to the API Gateway. This code is available in the repository.
Note that the SOURCE and TARGET file locations will be on your local file system, and the URL is the API Gateway URL generated during the creation of the project.
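A sketch of such a client script follows; the "source"/"target" payload field names and the placeholder URL are assumptions, so check them against the code in the repository:

```python
import base64
import json
from urllib import request


def build_payload(source_path, target_path):
    """Read the selfie (source) and ID document (target) images from the
    local file system and base64-encode them for the REST endpoint."""
    with open(source_path, "rb") as f:
        source_b64 = base64.b64encode(f.read()).decode("utf-8")
    with open(target_path, "rb") as f:
        target_b64 = base64.b64encode(f.read()).decode("utf-8")
    return {"source": source_b64, "target": target_b64}


def call_endpoint(url, payload):
    """POST the JSON payload to the API Gateway URL and return the parsed response."""
    req = request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())


# Example usage (replace the paths and the API Gateway URL from your deployment):
#   payload = build_payload("selfie.jpg", "id_document.jpg")
#   print(call_endpoint("https://<api-id>.execute-api.<region>.amazonaws.com/prod/", payload))
```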
Clean up

We used the AWS CDK to build this project, so we can open our project locally and issue the following AWS CDK command to clean up the resources:
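Run the standard AWS CDK teardown command from the project root:

```shell
# Remove all AWS resources created by the stack.
cdk destroy
```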
Conclusion

There you have it, the “ID + Selfie” design pattern with a simple API that you can integrate with your application to perform digital identity verification. In the next post in our series, we expand upon this pattern further by extracting text from the identification document and searching a collection of faces to prevent duplication.
To learn more, check out the Amazon Rekognition Developer Guide on detecting and analyzing faces.
About the Authors
Mike Ames is a Principal Applied AI/ML Solutions Architect with AWS. He helps companies use machine learning and AI services to combat fraud, waste, and abuse. In his spare time, you can find him mountain biking, kickboxing, or playing guitar in a 90s metal band.
Noah Donaldson is a Solutions Architect at AWS supporting federal financial organizations. He is excited about AI/ML technology that can reduce manual processes, improve customer experiences, and help solve interesting problems. Outside of work, he enjoys spending time on the ice with his son playing hockey, hunting with his oldest daughter, and shooting hoops with his youngest daughter.