Develop and deploy ML models using Amazon SageMaker Data Wrangler and Amazon SageMaker Autopilot

Data generates new value for businesses through insights and predictive models. However, although data is plentiful, skilled data scientists are few and far between. Despite efforts in recent years to train more data scientists in academia and elsewhere, the shortage remains acute and is likely to persist for the foreseeable future.

To accelerate model building, data scientists and ML practitioners often turn to AutoML (automated machine learning) tools that augment their work by taking over the tedious, iterative tasks of data preparation, model training, and tuning. AutoML tools help data scientists improve their productivity when developing ML models.

In this post, we discuss how data scientists and other advanced analytics users can use Amazon SageMaker Data Wrangler and Amazon SageMaker Autopilot to analyze their data sets and build highly predictive ML models. To demonstrate these capabilities, we use the Pima Indian Diabetes public data set from UCI.

Solution overview

The Pima Indian Diabetes data set contains information on 768 women from a population near Phoenix, Arizona. The outcome of interest is whether a patient tested positive for diabetes: 268 observations tested positive and 500 tested negative. The data set has one target and eight attributes: pregnancies, glucose, blood pressure, skin thickness, insulin, BMI (body mass index), age, and diabetes pedigree function. We use this data set to demonstrate how to use Autopilot and Data Wrangler to build highly predictive ML models without having to write any code.
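To get a feel for the data before opening Studio, a few lines of pandas suffice. This is just an illustrative sketch; the file name and the absence of a header row are assumptions, so adjust them to match your copy of the data:

```python
import pandas as pd

# Column names for the Pima Indian Diabetes data set (the raw CSV may or may
# not include a header row; adjust `header`/`names` accordingly).
columns = [
    "Pregnancies", "Glucose", "BloodPressure", "SkinThickness",
    "Insulin", "BMI", "DiabetesPedigreeFunction", "Age", "Class",
]
df = pd.read_csv("pima-indian-diabetes.csv", header=None, names=columns)

print(df.shape)                    # (768, 9)
print(df["Class"].value_counts())  # 500 negative (0), 268 positive (1)
print(df.describe())               # note the suspicious minimum of 0 for several columns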

The high-level steps for building an ML model are as follows:

  1. Perform exploratory data analysis.
  2. Perform feature engineering.
  3. Train the model.
  4. Validate the model.
  5. Deploy the model.
  6. Make predictions.


We walk through these steps as we build a binary classification model using the Pima Indian Diabetes data set.

Import your data set with Data Wrangler

Data Wrangler is a feature of Amazon SageMaker Studio that provides an end-to-end solution to import, prepare, transform, featurize, and analyze data. You can integrate a Data Wrangler data flow into your ML workflows to simplify and streamline data preprocessing and feature engineering using little to no coding.

  1. On the Studio console, under File, choose New.
  2. Choose Flow.

If this is your first time opening Data Wrangler, you may have to wait a few minutes for it to be ready.

  3. Rename your flow as needed.
  4. For Import data, choose your data source.


  5. Upload the pima-indian-diabetes.csv file from Amazon S3.

You can now preview your data set.

  6. In the Details pane, deselect Enable sampling (this is a small data set, so we don’t need it).


  7. Choose Import dataset.

You now have a flow diagram.

  8. Choose the + icon next to Data types and choose Edit data types.


  9. Confirm that Data Wrangler inferred the correct data types for your data columns.

If not, you can easily modify them through the UI. If multiple data sources are present, you can join or concatenate them.

We can now create an analysis and add transformations.

Exploratory data analysis and feature engineering

Exploratory data analysis is an important step when building ML models. In this step, data scientists analyze data to listen to its story. If you have the patience to listen, data is a great storyteller. This step involves statistical analysis, summarization tables, histograms, scatter plots, outlier analysis, finding missing values, and more. We demonstrate some of these in this post.

  1. Choose the + icon next to Data types and choose Add analysis.
  2. On the Configure tab, for Analysis type, choose Table Summary.
  3. For Analysis name, enter a name (optional).
  4. Choose Preview to see a preview of the table.


The count summary shows that all columns have 768 entries. But on closer examination, we find that the minimum value is 0 for columns such as Glucose and BloodPressure. Missing values are stored as 0 in this data set. Let’s fix that.

  5. Choose Create and save this table.
  6. On the flow’s main page, choose the + icon next to Data types and choose Add transform.
  7. Under Search and edit, for Transform, choose Convert regex to missing.
  8. For Input column, choose Glucose.
  9. For Pattern, enter 0.
  10. Choose Preview.

The 0 entries under Glucose are now missing entries.

  11. Choose Add to save this step.


  12. Repeat these steps for the other columns with incorrect 0 entries: BloodPressure, SkinThickness, Insulin, and BMI.

Data Wrangler gives you a couple of options to fix missing values.

  1. Choose the + icon next to Data types and choose Add transform.
  2. Replace missing values with the median values for all five columns (Glucose, BloodPressure, SkinThickness, Insulin, and BMI).
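
If you want to sanity-check these transforms outside the UI, the pandas equivalent of the two steps (convert the 0 placeholders to missing values, then impute with the median) is only a few lines. This sketch reuses the df loaded earlier:

```python
import numpy as np

# Columns where 0 actually means "missing"
zero_as_missing = ["Glucose", "BloodPressure", "SkinThickness", "Insulin", "BMI"]

# Step 1: convert the 0 placeholders to proper missing values
df[zero_as_missing] = df[zero_as_missing].replace(0, np.nan)

# Step 2: impute each column's missing entries with its median
df[zero_as_missing] = df[zero_as_missing].fillna(df[zero_as_missing].median())
```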

This completes one iteration of analysis and transformation.

Data Wrangler gives you an option to build a quick model to see how predictive your features are.

  1. Choose the + icon next to Data types and choose Add analysis.
  2. On the Configure tab, for Analysis type, choose Quick Model.
  3. For Analysis name, enter a name.
  4. For Label, choose Class.

The following chart shows the F1 score and the importance of the predictive features.


The F1 score is a commonly used metric for classification problems; it is the harmonic mean of precision and recall. If we build a model with this data at this stage, we get an approximate F1 score of 0.735 (1 being the best possible score) and find that Glucose is the most important explanatory feature.
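
As a quick refresher, F1 = 2 · precision · recall / (precision + recall). The following toy example (using scikit-learn, purely for illustration) confirms the formula matches the library implementation:

```python
from sklearn.metrics import f1_score, precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

p = precision_score(y_true, y_pred)  # 0.75
r = recall_score(y_true, y_pred)     # 0.75
print(2 * p * r / (p + r))           # 0.75, identical to...
print(f1_score(y_true, y_pred))      # ...the library implementation
```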

Another valuable feature of Data Wrangler is checking for target leakage. Target leakage occurs when information about the target you’re trying to predict leaks into one or more of your features, even though that information won’t be available at prediction time.
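
Conceptually, a leakage check scores how well each feature predicts the target on its own. The following is a rough sketch of that idea using per-feature ROC AUC; it is not Data Wrangler's exact implementation:

```python
from sklearn.metrics import roc_auc_score

features = ["Pregnancies", "Glucose", "BloodPressure", "SkinThickness",
            "Insulin", "BMI", "DiabetesPedigreeFunction", "Age"]
y = df["Class"]

for col in features:
    auc = roc_auc_score(y, df[col])
    auc = max(auc, 1 - auc)  # direction-agnostic
    # A single feature with an AUC near 1.0 is a red flag for target leakage.
    print(f"{col}: {auc:.3f}")
```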

  1. Choose the + icon next to Data types and choose Add analysis.
  2. For Analysis type, choose Target leakage.
  3. For Problem type, choose classification.
  4. For Target, choose Class.
  5. Choose Create.


We don’t have a target leakage situation in this data set, but if we did, we would need to remove the leaking column from the data set so that the model doesn’t falsely appear to be perfect during training.

Next, we draw a scatter plot of Glucose vs. BloodPressure.
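
If you want to reproduce the plot outside Data Wrangler, a minimal matplotlib sketch (reusing the df from earlier) looks like this:

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
for label, marker in [(0, "o"), (1, "x")]:
    subset = df[df["Class"] == label]
    ax.scatter(subset["Glucose"], subset["BloodPressure"],
               marker=marker, label=f"Class={label}", alpha=0.6)
ax.set_xlabel("Glucose")
ax.set_ylabel("BloodPressure")
ax.legend()
plt.show()
```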


Women with Glucose below 100 and BloodPressure below 80 seem to have a lower chance of diabetes. Let’s create a new feature using that information.

We use the Custom formula feature in the transformation options to do this.


This custom formula creates a new column in the data set.

Next, let’s check if Pregnancies/Age could have some effect on the target.

We create a new column using the Custom formula transform.
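
For reference, here are pandas equivalents of the two engineered features. Note that Data Wrangler's Custom formula transform itself takes a Spark SQL expression, and the column names below are our own illustrative choices, not ones prescribed by the flow:

```python
# Flag for the low-glucose, low-blood-pressure group observed in the scatter plot
df["LowGlucoseLowBP"] = ((df["Glucose"] < 100) & (df["BloodPressure"] < 80)).astype(int)

# Ratio of pregnancies to age
df["PregnanciesPerAge"] = df["Pregnancies"] / df["Age"]
```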


Next, we draw a histogram to see its effect.


As we can see, this new feature could have an influence on our target.

A quick model after adding these two features shows an improvement in our model’s F1 score.


Data Wrangler offers other no-code transforms as well, such as finding outliers and scaling features, but we don’t need them for this data set.

The last step is to export the data in this new format.


  1. Choose Data Wrangler job to create a Python notebook.
  2. Under Run, choose Run all cells to run the notebook.

The notebook creates output for this flow as a CSV file in Amazon S3. You can see the S3 path for the output file in the notebook. Depending on your input data file, Data Wrangler might split the output into multiple files. If so, you need to combine them into a single CSV file with a single header, which you then feed into Autopilot.
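
If the job does split the output, a short pandas script can stitch the parts back together. This sketch assumes the part files have been downloaded locally and that each part includes a header row; the paths and names are illustrative:

```python
import glob
import pandas as pd

# Combine Data Wrangler output part files into a single CSV with one header.
# If your part files lack headers, pass header=None and supply column names.
parts = sorted(glob.glob("output/part-*.csv"))
combined = pd.concat((pd.read_csv(p) for p in parts), ignore_index=True)
combined.to_csv("pima-transformed.csv", index=False)
```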

Build and deploy your model with SageMaker Autopilot

Autopilot allows you to automatically build ML models. It explores your data, selects the algorithms relevant to your problem type, and prepares the data for model training and tuning. It ranks all of the candidate models it trains by performance and identifies the best one, which you can deploy in a fraction of the time normally required.

We can either run Autopilot directly on the raw data or feed it with the enhanced data set that we generated with Data Wrangler.

  1. On the Studio console, under File, choose New.
  2. Choose Experiment.
  3. For Experiment name, enter a name.
  4. For Connect your data, enter the S3 bucket of your uploaded input data.
  5. For Target, type Class.


  6. For Output data location, specify the location of the S3 bucket where you want the results saved.
  7. For Select the machine learning problem type, choose Binary classification.

If you’re not sure what problem type to use, you can leave it as Auto and Autopilot will figure it out.

  8. For Objective metric, choose F1.
  9. Choose Create Experiment.


Autopilot analyzes the input data, processes it, selects the appropriate ML algorithm, and runs several tuning trials to optimize model performance. It then ranks these trials and presents you with the best model.
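
The same experiment can also be launched programmatically with boto3's create_auto_ml_job. Here is a minimal sketch; the job name, bucket paths, and role ARN are placeholders:

```python
import boto3

sm = boto3.client("sagemaker")

sm.create_auto_ml_job(
    AutoMLJobName="pima-diabetes-autopilot",  # placeholder job name
    InputDataConfig=[{
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://your-bucket/pima-transformed.csv",  # placeholder path
        }},
        "TargetAttributeName": "Class",
    }],
    OutputDataConfig={"S3OutputPath": "s3://your-bucket/autopilot-output/"},
    ProblemType="BinaryClassification",
    AutoMLJobObjective={"MetricName": "F1"},
    RoleArn="arn:aws:iam::123456789012:role/your-sagemaker-role",  # placeholder
)
```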


  1. Choose a model from the list and deploy it to an endpoint.
  2. Specify a name for the endpoint, instance type for it, and instance count.
  3. Optionally, have the endpoint return predicted labels and their probabilities.


You can see the creation of this endpoint on the SageMaker console.


  4. Choose the endpoint to see more details about it.

You see an endpoint URL that you can use to make predictions in real time.
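
For example, you can call the endpoint with the AWS SDK. This sketch assumes an endpoint named pima-diabetes-endpoint and a feature row matching the transformed training data; the values are purely illustrative:

```python
import boto3

runtime = boto3.client("sagemaker-runtime")

# One row of features in the same order and format as the training data (no label);
# illustrative values only.
payload = "6,148,72,35,125,33.6,0.627,50,0,0.12"

response = runtime.invoke_endpoint(
    EndpointName="pima-diabetes-endpoint",  # the name you chose at deployment
    ContentType="text/csv",
    Body=payload,
)
print(response["Body"].read().decode())  # predicted label (and probability, if enabled)
```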

  5. Under Production variants, make a note of the model name.


  1. On the SageMaker console, under Inference in the navigation pane, choose Batch transform job.
  2. Choose Create batch transform job.


  3. For Model name, enter the model name you noted earlier.
  4. For Instance type, choose an instance type.


  5. For Content type, enter text/csv.
  6. For S3 location, enter the path to your input bucket.
  7. For S3 output path, enter the path to your output bucket.


When the batch transformation job is complete, you can see your inference job’s output in the S3 bucket.

Conclusion

In this post, you learned an easy way to conduct exploratory data analysis, develop an ML model, deploy it, and run batch transformations to make predictions. Anyone with access to data who wants to quickly build powerful machine learning models can use this approach to increase their productivity. Learn more about Amazon SageMaker Data Wrangler and Amazon SageMaker Autopilot by visiting their product pages.


About the Author

Raju Penmatcha is a Senior AI/ML Specialist Solutions Architect at AWS. He works with education, government, and nonprofit customers on machine learning and artificial intelligence related projects, helping them build solutions using AWS. When not helping customers, he likes traveling to new places.