HawkEye 360 uses Amazon SageMaker Autopilot to streamline machine learning model development for maritime vessel risk assessment

This post is cowritten by Ian Avilez and Tim Pavlick from HawkEye 360.

HawkEye 360 is a commercial radio frequency (RF) satellite constellation data analytics provider. Our signals of interest include very high frequency (VHF) push-to-talk radios, maritime radar systems, AIS beacons, satellite mobile comms, and more. Our Mission Space offering, released in February 2021, lets mission analysts intuitively visualize RF signals and analytics so they can identify activity and understand trends. This capability improves maritime situational awareness, helping analysts identify and characterize nefarious behavior such as illegal fishing or ship-to-ship transfers of illicit goods.

The following screenshot shows HawkEye 360’s Mission Space experience.


RF data can be overwhelming to the naked eye without filtering and advanced algorithms to parse through and characterize the vast amount of raw data. HawkEye 360 partnered with the Amazon ML Solutions Lab to build machine learning (ML) capabilities into our analytics. With the guidance of the Amazon ML Solutions Lab, we used Amazon SageMaker Autopilot to rapidly generate high-quality AI models for maritime vessel risk assessment, maintain full visibility and control of model creation, and provide the ability to easily deploy and monitor a model in a production environment.

Hidden patterns and relationships among vessel features

Seagoing vessels are distinguished by several characteristics relating to the vessel itself, its operation and management, and its historical behavior. Which characteristics indicate a suspicious vessel isn’t immediately clear. One of HawkEye 360’s missions is to discover hidden patterns and automatically alert analysts to anomalous maritime activity. HawkEye 360 accomplishes this alerting by using a diverse set of variables in combination with proprietary RF geo-analytics. A key focus of these efforts is to identify which vessels are more likely to engage in suspicious maritime activity, such as illegal fishing or ship-to-ship transfer of illicit goods. ML algorithms reveal hidden patterns, where they exist, that would otherwise be lost in the vast sea of complexity.

The following image demonstrates some of the pattern-finding behavior already built into Mission Space. Mission Space automatically identifies other instances of a suspicious vessel. Identifying the key features that are most predictive of suspicious behavior allows those features to be displayed directly in Mission Space, so users can understand links between bad actors that would otherwise have never been seen. Mission Space was purposefully designed to point out these connections to mission analysts.


Challenges detecting anomalous behavior with maritime vessels

HawkEye 360’s data lake includes a large volume of vessel information, history, and analytics variables. With such a wide array of RF data and analytics, some natural data handling issues must be addressed. Sporadic reporting by vessels results in missing values across datasets, and variations among data types must be taken into account. Previously, data exploration and baseline modeling would typically take up a large chunk of an analyst’s time. After the data is prepared, a series of automatic experiments is run to narrow down to a set of the most promising AI models and, from there, to select in a stepwise fashion the one most appropriate for the data and the research questions. For HawkEye 360, this automated exploration is key to determining which features, and feature combinations, are critical to predicting how likely a vessel is to engage in suspicious behavior.

We used Autopilot to expedite this process by quickly identifying which features of the data are useful in predicting suspicious behavior. Automating data exploration and analysis enables our data scientists to spend less time wrangling data and manually engineering features, and shortens the time needed to identify the vessel features that are most predictive of suspicious vessel behavior.
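
Autopilot reads tabular training data from Amazon S3 with the target label included as a column. The following is a minimal sketch of that preparation step, not HawkEye 360’s actual pipeline: the vessel_features.csv file, the vessel_df column set, and the bucket and prefix values are hypothetical placeholders, while the ship_sanctioned_ofac target column matches the Autopilot configuration shown later.

import boto3
import pandas as pd

# Hypothetical raw vessel feature table; column names are illustrative only
vessel_df = pd.read_csv('vessel_features.csv')

# Sporadic vessel reporting leaves gaps, so fill missing numeric values
# (Autopilot can also handle missing values on its own)
numeric_cols = vessel_df.select_dtypes(include='number').columns
vessel_df[numeric_cols] = vessel_df[numeric_cols].fillna(vessel_df[numeric_cols].median())

# Write the training table, keeping the target column 'ship_sanctioned_ofac'
vessel_df.to_csv('train.csv', index=False)

# Upload to the S3 prefix the Autopilot job reads from
bucket = 'my-example-bucket'   # assumption: replace with your bucket
prefix = 'darkcount'           # assumption: replace with your prefix
boto3.client('s3').upload_file('train.csv', bucket, '{}/train/train.csv'.format(prefix))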

How we used Autopilot to quickly generate high-quality ML models

As part of an Autopilot job, several candidate models are generated rapidly for evaluation with a single API call. Autopilot inspected the data and evaluated several models to determine the optimal combination of preprocessing methods, ML algorithms, and hyperparameters. This significantly shortened the model exploration time frame and allowed us to quickly test the suitability of ML to our unique hypotheses.

The following code shows our setup and API call:

input_data_config = [
    {
        'DataSource': {
            'S3DataSource': {
                'S3DataType': 'S3Prefix',
                'S3Uri': 's3://{}/{}/train'.format(bucket, prefix)
            }
        },
        'TargetAttributeName': 'ship_sanctioned_ofac'
    }
]

output_data_config = {
    'S3OutputPath': 's3://{}/{}/output'.format(bucket, prefix)
}

from time import gmtime, strftime, sleep

timestamp_suffix = strftime('%d-%H-%M-%S', gmtime())
auto_ml_job_name = 'automl-darkcount-' + timestamp_suffix
print('AutoMLJobName: ' + auto_ml_job_name)

sm.create_auto_ml_job(AutoMLJobName=auto_ml_job_name,
                      InputDataConfig=input_data_config,
                      OutputDataConfig=output_data_config,
                      RoleArn=role)

Autopilot job process

An Autopilot job consists of the following actions:

  • Dividing the data into train and validation sets
  • Analyzing the data to recommend candidate configurations
  • Performing feature engineering to generate optimal transformed features appropriate for the algorithm
  • Tuning hyperparameters to generate a leaderboard of models
  • Surfacing the best candidate model based on the given evaluation metric
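
While these stages run, the job can be monitored with the same boto3 SageMaker client (sm) used above. The following is a minimal polling sketch; the secondary status surfaces which stage (data analysis, feature engineering, model tuning) is currently in progress.

from time import sleep

# Poll the Autopilot job until it finishes
while True:
    job = sm.describe_auto_ml_job(AutoMLJobName=auto_ml_job_name)
    status = job['AutoMLJobStatus']               # e.g. InProgress, Completed, Failed
    secondary = job['AutoMLJobSecondaryStatus']   # e.g. AnalyzingData, FeatureEngineering, ModelTuning
    print(status, '-', secondary)
    if status in ('Completed', 'Failed', 'Stopped'):
        break
    sleep(60)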

After training several models, Autopilot ranks the candidates based on a given metric (see the following code). For this application, we used the F1 score, which gives even weight to both precision and recall. This is an important consideration when classes are imbalanced, as they are in this dataset.

candidates = sm.list_candidates_for_auto_ml_job(AutoMLJobName=auto_ml_job_name,
                                                SortBy='FinalObjectiveMetricValue')['Candidates']
index = 1
print("List of model candidates in descending objective metric:")
for candidate in candidates:
    print(str(index) + " " + candidate['CandidateName'] + " " + str(candidate['FinalAutoMLJobObjectiveMetric']['Value']))
    index += 1

The following code shows our output:

List of model candidates in descending objective metric:
1 tuning-job-1-1be4d5a5fb8e42bc84-238-e264d09f 0.9641900062561035
2 tuning-job-1-1be4d5a5fb8e42bc84-163-336eb2e7 0.9641900062561035
3 tuning-job-1-1be4d5a5fb8e42bc84-143-5007f7dc 0.9641900062561035
4 tuning-job-1-1be4d5a5fb8e42bc84-154-cab67dc4 0.9641900062561035
5 tuning-job-1-1be4d5a5fb8e42bc84-123-f76ad56c 0.9641900062561035
6 tuning-job-1-1be4d5a5fb8e42bc84-117-39eac182 0.9633200168609619
7 tuning-job-1-1be4d5a5fb8e42bc84-108-77addf80 0.9633200168609619
8 tuning-job-1-1be4d5a5fb8e42bc84-179-1f831078 0.9633200168609619
9 tuning-job-1-1be4d5a5fb8e42bc84-133-917ccdf1 0.9633200168609619
10 tuning-job-1-1be4d5a5fb8e42bc84-189-102070d9 0.9633200168609619
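
Because this leaderboard is ranked on the validation F1 score, it can be useful to recompute the metric on a held-out set before promoting a candidate. The following is a small sketch using scikit-learn; y_true and y_pred are hypothetical arrays of held-out labels and model predictions, not HawkEye 360 data.

from sklearn.metrics import f1_score, precision_score, recall_score

# Hypothetical held-out labels and predictions for the chosen candidate
y_true = [0, 0, 1, 0, 1, 0, 0, 1]
y_pred = [0, 0, 1, 0, 0, 0, 1, 1]

# F1 is the harmonic mean of precision and recall, which keeps both in view
# when the suspicious-vessel class is much rarer than the benign class
print('Precision:', precision_score(y_true, y_pred))
print('Recall:   ', recall_score(y_true, y_pred))
print('F1:       ', f1_score(y_true, y_pred))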

We can now create a model from the best candidate, which can be quickly deployed into production:

# Retrieve the best candidate from the completed Autopilot job
best_candidate = sm.describe_auto_ml_job(AutoMLJobName=auto_ml_job_name)['BestCandidate']

model_name = 'automl-darkcount-25-23-07-39'

model = sm.create_model(Containers=best_candidate['InferenceContainers'],
                        ModelName=model_name,
                        ExecutionRoleArn=role)

print('Model ARN corresponding to the best candidate is : {}'.format(model['ModelArn']))

The following code shows our output:

Model ARN corresponding to the best candidate is : arn:aws:sagemaker:us-east-1:278150328949:model/automl-darkcount-25-23-07-39
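
From here, the model can be hosted behind a real-time endpoint. The following is a sketch of one way to do so with the same boto3 client; the endpoint name, variant name, and instance type are assumptions, and a production deployment would typically add model monitoring and autoscaling.

# Create an endpoint configuration and a real-time endpoint for the Autopilot model.
# The names and instance type below are illustrative choices.
endpoint_config_name = model_name + '-config'
endpoint_name = model_name + '-endpoint'

sm.create_endpoint_config(
    EndpointConfigName=endpoint_config_name,
    ProductionVariants=[{
        'VariantName': 'AllTraffic',
        'ModelName': model_name,
        'InstanceType': 'ml.m5.large',
        'InitialInstanceCount': 1
    }]
)

sm.create_endpoint(EndpointName=endpoint_name,
                   EndpointConfigName=endpoint_config_name)

# Block until the endpoint is in service and ready to answer requests
sm.get_waiter('endpoint_in_service').wait(EndpointName=endpoint_name)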

Maintaining full visibility and control

The process to generate a model is completely transparent. Two notebooks are generated for any model Autopilot creates:

  • Data exploration notebook – Describes your dataset and what Autopilot learned about your dataset
  • Model candidate notebook – Lists data transformations used as well as candidate model building pipelines consisting of feature transformers paired with main estimators
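
The S3 locations of both notebooks can be read from the job description with the same boto3 client, as in this short sketch:

# The describe call exposes S3 URIs for both generated notebooks
artifacts = sm.describe_auto_ml_job(AutoMLJobName=auto_ml_job_name)['AutoMLJobArtifacts']

print('Data exploration notebook:    ', artifacts['DataExplorationNotebookLocation'])
print('Candidate definition notebook:', artifacts['CandidateDefinitionNotebookLocation'])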

Conclusion

We used Autopilot to quickly generate many candidate models to determine ML feasibility and baseline ML performance on the vessel data. The automation Autopilot provides allowed our data scientists to spend 50% less time developing ML capabilities by automating the manual tasks of data analysis, feature engineering, model development, and model deployment.

With HawkEye 360’s new RF data analysis application, Mission Space, identifying which vessels have the potential to engage in suspicious activity allows users to easily know where to focus their scarce attention and investigate further. Expediting the data understanding and model creation allows for cutting-edge insights to be quickly assimilated into Mission Space, which accelerates the evolution of Mission Space’s capabilities as shown in the following map. We can see that a Mission Analyst identified a specific rendezvous (highlighted in magenta) and Mission Space automatically identified other related rendezvous (in purple).


For more information about HawkEye 360’s Mission Space offering, see Mission Space.

If you’d like assistance in accelerating the use of ML in your products and services, contact the Amazon ML Solutions Lab.


About the Authors

Tim Pavlick, PhD, is VP of Product at HawkEye 360. He is responsible for the conception, creation, and productization of all HawkEye space innovations. Mission Space is HawkEye 360’s flagship product, incorporating all the data and analytics from the HawkEye portfolio into one intuitive RF experience. Dr. Pavlick’s prior invention contributions include Myca, IBM’s AI Career Coach, Grit PTSD monitor for Veterans, IBM Defense Operations Platform, Smarter Planet Intelligent Operations Center, AI detection of dangerous hate speech on the internet, and the STORES electronic food ordering system for the US military. Dr. Pavlick received his PhD in Cognitive Psychology from the University of Maryland, College Park.

Ian Avilez is a Data Scientist with HawkEye 360. He works with customers to highlight the insights that can be gained by combining different datasets and looking at that data in various ways.

Dan Ford is a Data Scientist at the Amazon ML Solutions Lab, where he helps AWS National Security customers build state-of-the-art ML solutions.

Gaurav Rele is a Data Scientist at the Amazon ML Solutions Lab, where he works with AWS customers across different verticals to accelerate their use of machine learning and AWS Cloud services to solve their business challenges.