The world we live in is rapidly changing, and so are the data and features that companies and customers use to train their models. Retraining models to keep them in sync with these changes is critical to maintain accuracy. Therefore, you need an agile and dynamic approach to keep models up to date and adapt them to new inputs. This combination of great models and continuous adaptation is what will lead to a successful machine learning (ML) strategy.
Today, we are excited to announce the launch of Amazon Comprehend flywheel—a one-stop machine learning operations (MLOps) feature for an Amazon Comprehend model. In this post, we demonstrate how you can create an end-to-end workflow with an Amazon Comprehend flywheel.
Amazon Comprehend is a fully managed service that uses natural language processing (NLP) to extract insights about the content of documents. It helps you extract information by recognizing sentiments, key phrases, entities, and much more, allowing you to take advantage of state-of-the-art models and adapt them for your specific use case.
MLOps focuses on the intersection of data science and data engineering, in combination with existing DevOps practices, to streamline model delivery across the ML development lifecycle. It is the discipline of integrating ML workloads into release management, CI/CD, and operations, and it requires the collaboration of software development, operations, data engineering, and data science.
This is why Amazon Comprehend is introducing the flywheel. The flywheel is intended to be your one-stop shop for MLOps on your Amazon Comprehend models. This new feature allows you to keep your models up to date, improve on them, and deploy the best version faster.
The following diagram represents the model lifecycle inside an Amazon Comprehend flywheel.
The current process to create a new model consists of a sequence of steps. First, you gather data and prepare the dataset. Then, you train the model using this dataset. After the model is trained, it’s evaluated for accuracy and performance. Finally, you deploy the model to an endpoint to perform inference. When new models are created, these steps need to be repeated, and the endpoint needs to be manually updated.
An Amazon Comprehend flywheel automates this ML process, from data ingestion to deploying the model in production. With this new feature, you can manage training and testing of the created models inside Amazon Comprehend. This feature also allows you to automate model retraining after new datasets are ingested and available in the flywheel's data lake.
The flywheel provides integration with custom classification and custom entity recognition APIs, and can help different roles such as data engineers and developers automate and manage the NLP workflow with no-code services.
First, let’s introduce some concepts:
Flywheel – A flywheel is an AWS resource that orchestrates the ongoing training of a model for custom classification or custom entity recognition.
Dataset – A dataset is a set of training or test data used in a single flywheel. The flywheel uses training datasets to train new model versions and test datasets to evaluate their performance.
Data lake – A flywheel’s data lake is a location in your Amazon Simple Storage Service (Amazon S3) bucket that stores all its datasets and model artifacts. Each flywheel has its own dedicated data lake.
Flywheel iteration – A flywheel iteration is a run of the flywheel, triggered by the user. Depending on the availability of new training or test datasets, the flywheel trains a new model version or assesses the performance of the active model on the new test data.
Active model – The active model is the model version that the user has selected for predictions. As model performance improves with new flywheel iterations, you can change the active version to the one that performs best.
The following diagram illustrates the flywheel workflow.
These steps are detailed as follows:
Create a flywheel – A flywheel automates the training of model versions for a custom classifier or custom entity recognizer. You can either select an existing Amazon Comprehend model as a starting point for the flywheel or start from scratch with no models. In both cases, you must specify a data lake location for the flywheel.
Data ingestion – You can create new datasets for training or testing in the flywheel. All the training and test data for all versions of the model is managed and stored in the flywheel's data lake in your S3 bucket. The supported file formats are CSV and augmented manifest from an S3 location. You can find more information about preparing the dataset for custom classification and custom entity recognition.
Train and evaluate the model – If you don't specify a model ARN (Amazon Resource Name), a new model is built from scratch: the first flywheel iteration creates the model from the uploaded training dataset. For successive iterations, the possible cases are as follows:
If no new training or test datasets have been uploaded since the last iteration, the flywheel iteration finishes without any change.
If there are only new test datasets since the last iteration, the flywheel iteration reports the performance of the current active model on the new test datasets.
If there are only new training datasets, the flywheel iteration trains a new model.
If there are new training and test datasets, the flywheel iteration trains a new model and reports the performance of the current active model.
Promote new active model version – Based on the performance of the different flywheel iterations, you can update the active model version to the best one.
Deploy an endpoint – After running a flywheel iteration and updating the active model version, you can run real-time (synchronous) inference on your model. You can create an endpoint with the flywheel ARN, which will by default use the currently active model version. When the active model for the flywheel changes, the endpoint automatically starts using the new active model without any customer intervention. An endpoint includes all the managed resources that make your custom model available for real-time inference.
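The iteration behavior described in the train and evaluate step boils down to simple decision logic. The following plain-Python sketch is illustrative only (the function name and action strings are our own, not an Amazon Comprehend API):

```python
def plan_iteration(new_train_data: bool, new_test_data: bool) -> list[str]:
    """Return the actions a flywheel iteration performs, per the four cases above."""
    actions = []
    if new_train_data:
        # New training data since the last iteration: train a new model version.
        actions.append("train new model version")
    if new_test_data:
        # New test data: report the current active model's performance on it.
        actions.append("evaluate active model on new test data")
    if not actions:
        # Nothing new since the last iteration: the run finishes without change.
        actions.append("no change")
    return actions
```

For example, `plan_iteration(True, True)` yields both a training action and an evaluation action, matching the last case in the list above.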
In the following sections, we demonstrate the different ways to create a new Amazon Comprehend flywheel.
Prerequisites
You need the following:
An active AWS account
An S3 bucket for your data location
An AWS Identity and Access Management (IAM) role with permissions to create an Amazon Comprehend flywheel and permissions to read and write to your data location S3 bucket
Create a flywheel with AWS CloudFormation
To start using an Amazon Comprehend flywheel with AWS CloudFormation, you need the following information about the AWS::Comprehend::Flywheel resource:
DataAccessRoleArn – The ARN of the IAM role that grants Amazon Comprehend permission to access the flywheel data
DataLakeS3Uri – The Amazon S3 URI of the flywheel’s data lake location
FlywheelName – The name for the flywheel
For more information, refer to AWS CloudFormation documentation.
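As a sketch, a minimal template using these three properties might look like the following. The account ID, role name, and bucket are placeholders, and the TaskConfig shown assumes the custom classification example used later in this post:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  NewsFlywheel:
    Type: AWS::Comprehend::Flywheel
    Properties:
      FlywheelName: custom-news-flywheel
      DataAccessRoleArn: arn:aws:iam::123456789012:role/CustomNewsFlywheelRole
      DataLakeS3Uri: s3://amzn-s3-demo-bucket/custom-news-flywheel/
      # TaskConfig is needed when starting from scratch (no ActiveModelArn)
      ModelType: DOCUMENT_CLASSIFIER
      TaskConfig:
        LanguageCode: en
        DocumentClassificationConfig:
          Mode: MULTI_LABEL
          Labels:
            - BUSINESS
            - SCI_TECH
            - SPORTS
            - WORLD
```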
Create a flywheel on the Amazon Comprehend console
In this example, we demonstrate how to build a flywheel on the Amazon Comprehend console for a custom classifier model that determines the topic of a news article.
Create a dataset
First, you need to create the dataset. For this post, we use the AG News Classification Dataset. In this dataset, data is classified into four news categories: WORLD, SPORTS, BUSINESS, and SCI_TECH.
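Amazon Comprehend custom classification expects a two-column CSV with no header row: the label (or labels) first, then the document text. The following sketch writes a few rows in that shape; the sample texts are invented for illustration and are not actual AG News rows:

```python
import csv

# Hypothetical sample rows in the two-column CSV format for custom
# classification: label first, document text second, no header row.
rows = [
    ("SPORTS", "The home team clinched the title with a late goal."),
    ("BUSINESS", "Shares rallied after the company beat earnings estimates."),
    ("SCI_TECH", "Researchers unveiled a faster chip for mobile devices."),
    ("WORLD", "Leaders met to discuss the new trade agreement."),
]

with open("train.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```

You would then upload this file to the S3 bucket that you plan to use as the dataset source.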
Create a flywheel
Now we can create our flywheel. Complete the following steps:
On the Amazon Comprehend console, choose Flywheels in the navigation pane.
Choose Create new flywheel.
You can create a new flywheel from an existing model or create a new model. In this case, we create a new model from scratch.
For Flywheel name, enter a name (for this example, custom-news-flywheel).
Leave the Model field empty.
Select Custom classification for Custom model type.
For Language, leave the setting as English.
Select Using Multi-label mode for Classifier mode.
For Custom labels, enter BUSINESS,SCI_TECH,SPORTS,WORLD.
For the encryption settings, keep Use AWS owned key.
For the flywheel’s data lake location, select an S3 URI in your account that can be dedicated to this flywheel.
Each flywheel has an S3 data lake location where it stores flywheel assets and artifacts such as datasets and model statistics. Make sure not to modify or delete any objects from this location because it’s meant to be managed exclusively by the flywheel.
Choose Create an IAM role and enter a name for the role (CustomNewsFlywheelRole in our case).
It will take a couple of minutes to create the flywheel. Once created, the status will change to Active.
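If you prefer the AWS SDK for Python (boto3), the console choices above map onto a single CreateFlywheel call. The following is a sketch; the account ID, role name, bucket, and Region are placeholders:

```python
def build_create_flywheel_params(account_id: str, bucket: str) -> dict:
    """Request parameters mirroring the console choices (placeholder ARNs/URIs)."""
    return {
        "FlywheelName": "custom-news-flywheel",
        "DataAccessRoleArn": f"arn:aws:iam::{account_id}:role/CustomNewsFlywheelRole",
        "ModelType": "DOCUMENT_CLASSIFIER",
        "DataLakeS3Uri": f"s3://{bucket}/custom-news-flywheel/",
        "TaskConfig": {
            "LanguageCode": "en",
            "DocumentClassificationConfig": {
                "Mode": "MULTI_LABEL",
                "Labels": ["BUSINESS", "SCI_TECH", "SPORTS", "WORLD"],
            },
        },
    }


def create_news_flywheel(account_id: str, bucket: str) -> str:
    """Run CreateFlywheel; requires AWS credentials with Comprehend permissions."""
    import boto3  # imported here so the sketch loads without boto3 installed

    comprehend = boto3.client("comprehend", region_name="us-east-1")
    response = comprehend.create_flywheel(**build_create_flywheel_params(account_id, bucket))
    return response["FlywheelArn"]
```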
On the custom-news-flywheel details page, choose Create dataset.
For Dataset name, enter a name for the training dataset.
Leave CSV file for Data format.
Choose Training and select the training dataset from the S3 bucket.
Repeat these steps to create a test dataset.
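The dataset creation steps above correspond to the CreateDataset API. The following boto3 sketch builds the request for either a training or a test dataset; the flywheel ARN and S3 URIs are placeholders:

```python
def build_create_dataset_params(flywheel_arn: str, s3_uri: str, dataset_type: str) -> dict:
    """Request parameters for CreateDataset; dataset_type is 'TRAIN' or 'TEST'."""
    return {
        "FlywheelArn": flywheel_arn,
        "DatasetName": f"news-{dataset_type.lower()}-dataset",
        "DatasetType": dataset_type,
        "InputDataConfig": {
            "DataFormat": "COMPREHEND_CSV",
            "DocumentClassifierInputDataConfig": {"S3Uri": s3_uri},
        },
    }


def create_datasets(flywheel_arn: str, train_uri: str, test_uri: str) -> None:
    """Create the training and test datasets (requires AWS credentials)."""
    import boto3  # imported here so the sketch loads without boto3 installed

    comprehend = boto3.client("comprehend", region_name="us-east-1")
    for dataset_type, uri in [("TRAIN", train_uri), ("TEST", test_uri)]:
        comprehend.create_dataset(**build_create_dataset_params(flywheel_arn, uri, dataset_type))
```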
After the uploaded dataset status changes to Completed, go to the Flywheel iterations tab and choose Run flywheel.
When the training is complete, go to the Model versions tab, select the recently trained model, and choose Make active model.
You can also observe the objective metrics F1 score, precision, and recall.
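Running an iteration and promoting the best version can also be done with the StartFlywheelIteration and UpdateFlywheel APIs. The following boto3 sketch uses placeholder ARNs:

```python
def build_update_flywheel_params(flywheel_arn: str, model_version_arn: str) -> dict:
    """Parameters to promote a trained model version to the active model."""
    return {"FlywheelArn": flywheel_arn, "ActiveModelArn": model_version_arn}


def run_iteration_and_promote(flywheel_arn: str, model_version_arn: str) -> str:
    """Start an iteration, then (after it completes) promote a model version."""
    import boto3  # imported here so the sketch loads without boto3 installed

    comprehend = boto3.client("comprehend", region_name="us-east-1")
    # Comprehend trains and/or evaluates depending on which new datasets exist.
    iteration = comprehend.start_flywheel_iteration(FlywheelArn=flywheel_arn)
    # In practice, poll describe_flywheel_iteration until the run completes,
    # compare model metrics, and then promote the best-performing version.
    comprehend.update_flywheel(**build_update_flywheel_params(flywheel_arn, model_version_arn))
    return iteration["FlywheelIterationId"]
```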
Return to the Datasets tab and choose Create dataset in the Test datasets section.
Enter the location of text.csv in the S3 bucket.
Wait until the status shows as Completed. This will create metrics on the active model using the test dataset.
If you choose Custom classification in the navigation pane, you can see all the document classifier models, even the ones trained using flywheels.
Create an endpoint
To create your model endpoint, complete the following steps:
On the Amazon Comprehend console, navigate to the flywheel you created.
On the Endpoints tab, choose Create endpoint.
Name the endpoint news-topic.
Under Classification models and flywheels, the active model version is already selected.
For Inference Units, choose 1 IU.
Select the acknowledgement check box, then choose Create endpoint.
After the endpoint has been created and is active, navigate to Use in real-time analysis on the endpoint’s details page.
Test the model by entering text in the Input text box.
Under Results, check the labels for the news topics.
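The endpoint steps above can also be scripted. Because the endpoint is created with the flywheel ARN, it tracks the active model automatically. The following boto3 sketch uses placeholder ARNs and an invented sample sentence:

```python
def build_create_endpoint_params(flywheel_arn: str) -> dict:
    """CreateEndpoint against the flywheel ARN, so it tracks the active model."""
    return {
        "EndpointName": "news-topic",
        "FlywheelArn": flywheel_arn,
        "DesiredInferenceUnits": 1,
    }


def create_endpoint_and_classify(flywheel_arn: str, text: str) -> list:
    """Create the endpoint, then classify text once it's active (needs AWS credentials)."""
    import boto3  # imported here so the sketch loads without boto3 installed

    comprehend = boto3.client("comprehend", region_name="us-east-1")
    endpoint = comprehend.create_endpoint(**build_create_endpoint_params(flywheel_arn))
    # In practice, wait for the endpoint status to become IN_SERVICE first.
    result = comprehend.classify_document(
        EndpointArn=endpoint["EndpointArn"],
        Text=text,  # e.g. "The central bank raised interest rates again."
    )
    return result["Labels"]  # multi-label mode returns a Labels list
```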
Create an asynchronous analysis job
To create an analysis job, complete the following steps:
On the Amazon Comprehend console, navigate to the active model version.
Choose Create job.
For Name, enter batch-news.
For Analysis type, choose Custom classification.
For Classification models and flywheels, choose the flywheel you created (custom-news-flywheel).
Browse Amazon S3 to select the input file containing the news texts to analyze, then choose One document per line (one news text per line).
The following screenshot shows the document uploaded for this exercise.
Choose where you want to save the output file in your S3 location.
For Access permissions, choose the IAM role CustomNewsFlywheelRole that you created earlier.
Choose Create job.
When the job is complete, download the output file and check the predictions.
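The same asynchronous job can be started with the StartDocumentClassificationJob API, passing the flywheel ARN so the active model version is used. The following boto3 sketch uses placeholder ARNs, bucket, and file names:

```python
def build_classification_job_params(flywheel_arn: str, role_arn: str, bucket: str) -> dict:
    """StartDocumentClassificationJob parameters; one news text per input line."""
    return {
        "JobName": "batch-news",
        "FlywheelArn": flywheel_arn,
        "DataAccessRoleArn": role_arn,
        "InputDataConfig": {
            "S3Uri": f"s3://{bucket}/input/news.txt",  # placeholder input file
            "InputFormat": "ONE_DOC_PER_LINE",
        },
        "OutputDataConfig": {"S3Uri": f"s3://{bucket}/output/"},
    }


def start_batch_job(flywheel_arn: str, role_arn: str, bucket: str) -> str:
    """Kick off the asynchronous job (requires AWS credentials)."""
    import boto3  # imported here so the sketch loads without boto3 installed

    comprehend = boto3.client("comprehend", region_name="us-east-1")
    job = comprehend.start_document_classification_job(
        **build_classification_job_params(flywheel_arn, role_arn, bucket)
    )
    return job["JobId"]
```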
Clean up
To avoid future charges, clean up the resources you created:
On the Amazon Comprehend console, choose Flywheels in the navigation pane.
Select your flywheel and choose Delete.
Delete any endpoints you created.
Empty and delete the S3 buckets you created.
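The cleanup steps above can also be scripted; delete dependent resources (endpoints) before the flywheel itself, and empty the data lake bucket last. The ARNs below are placeholders:

```python
def cleanup_order() -> list[str]:
    """Delete dependents first: endpoints, then the flywheel, then the bucket."""
    return ["delete_endpoint", "delete_flywheel", "empty_and_delete_s3_bucket"]


def delete_flywheel_resources(endpoint_arn: str, flywheel_arn: str) -> None:
    """Delete the endpoint and flywheel (requires AWS credentials)."""
    import boto3  # imported here so the sketch loads without boto3 installed

    comprehend = boto3.client("comprehend", region_name="us-east-1")
    comprehend.delete_endpoint(EndpointArn=endpoint_arn)
    comprehend.delete_flywheel(FlywheelArn=flywheel_arn)
    # Note: deleting the flywheel doesn't delete the data lake contents;
    # empty and delete the S3 bucket separately.
```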
In this post, we saw how an Amazon Comprehend flywheel serves as a one-stop shop for MLOps on your Amazon Comprehend models. We also discussed its value proposition and introduced basic flywheel concepts. Then we walked you through the different steps, from creating a flywheel to creating an endpoint.
To learn more, refer to Simplify continuous learning of Amazon Comprehend custom models using Comprehend flywheel. Try it out now and get started with our newly launched feature, the Amazon Comprehend flywheel.
About the Authors
Alberto Menendez is an Associate DevOps Consultant in Professional Services at AWS and a member of Comprehend Champions. He loves helping accelerate customers' journeys to the cloud and creating solutions to solve their business challenges. In his free time, he enjoys practicing sports, especially basketball and padel, spending time with family and friends, and learning about technology.
Irene Arroyo Delgado is an Associate AI/ML Consultant in Professional Services at AWS and a member of Comprehend Champions. She focuses on productionizing ML workloads to achieve customers’ desired business outcomes by automating end-to-end ML lifecycles. She has experience building performant ML platforms and their integration with a data lake on AWS. In her free time, Irene enjoys traveling and hiking in the mountains.
Shweta Thapa is a Solutions Architect in Enterprise Engaged at AWS and a member of Comprehend Champions. She enjoys helping her customers with their journey and growth in the cloud, listening to their business needs, and offering them the best solutions. In her free time, Shweta enjoys going out for a run, traveling, and most of all spending time with her baby daughter.