AWS Feed
Standardizing Database Migrations with AWS Database Migration Service and AWS Service Catalog

Companies and organizations are moving data and technology infrastructure to AWS to modernize their applications and gain access to cloud services. The move results in lower costs, increased productivity, and reduced downtime. Some customers are migrating data to Amazon Simple Storage Service (Amazon S3) to take advantage of AWS AI and ML services, while others are moving their on-premises databases to MySQL or PostgreSQL to avoid vendor lock-in and restrictive licensing terms.

As customers embark on this journey, they quickly realize that scaling database migrations is challenging. Our customers often ask us these questions:

  • How can we make this process simple and transparent to the teams responsible for database migrations?
  • How do we ensure that our customers do not experience service interruptions during the migration?
  • We have many databases. How do we make sure that the process is consistent and less error-prone across each migration?
  • How can we automate and remove even more setup complexity from the migration process?

In this blog post, we show you how to use the AWS Database Migration Service (AWS DMS) for database migrations and AWS Service Catalog to standardize the migration workflow and improve accessibility for all users. We demonstrate how an organization can create a governed process that is deployable with a few button clicks and leverage a standard pattern for database migrations across teams and geographies.

The solution, deployed as a CloudFormation template, includes the following services:

  • AWS DMS – This database migration service supports both homogeneous migrations (for example, Microsoft SQL Server to Microsoft SQL Server and Oracle to Oracle) and heterogeneous migrations between different database platforms, such as Microsoft SQL Server to Amazon Aurora. You can also continuously replicate the data with high availability using AWS DMS.
  • AWS Service Catalog – An AWS Service Catalog product is an IT service or application you make available for deployment. You can create a product with an AWS CloudFormation template. Portfolios are collections of products. (A minimal sketch of a portfolio and product follows this list.)
  • AWS CloudFormation – AWS CloudFormation gives you an easy way to model a collection of related AWS and third-party resources, provision them quickly and consistently, and manage them throughout their lifecycles, by treating infrastructure as code.
  • Amazon Relational Database Service (Amazon RDS) for Microsoft SQL Server – This managed relational database service sets up, operates, and scales a relational database in the cloud.
  • Amazon Elastic Compute Cloud (Amazon EC2) – This web service provides secure, resizable compute capacity in the cloud.
  • AWS Lambda – This service lets you run code without provisioning or managing servers.
  • Amazon S3 – This object storage service offers industry-leading scalability, data availability, security, and performance.
  • Amazon Virtual Private Cloud (Amazon VPC) – This service lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define.
  • AWS Schema Conversion Tool – This tool automatically converts the source database schema and many of the database code objects, including views, stored procedures, and functions to a format compatible with the target database.
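
To make the AWS Service Catalog pieces concrete, here is a minimal CloudFormation sketch of a portfolio containing one product backed by a template. The portfolio name, product name, owner, and template URL are illustrative placeholders, not the exact values used by the solution.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal sketch - a Service Catalog portfolio with one product backed by a CloudFormation template.

Resources:
  # Portfolio that groups the migration products (illustrative names)
  MigrationsPortfolio:
    Type: AWS::ServiceCatalog::Portfolio
    Properties:
      DisplayName: Migrations
      ProviderName: Cloud Platform Team

  # Product whose provisioning artifact is a CloudFormation template stored in S3
  DmsComponentsProduct:
    Type: AWS::ServiceCatalog::CloudFormationProduct
    Properties:
      Name: Migrations - Database Migration Service Components
      Owner: Cloud Platform Team
      ProvisioningArtifactParameters:
        - Name: v1.0
          Description: Initial version
          Info:
            # Hypothetical template URL; the solution loads templates from your own bucket
            LoadTemplateFromURL: https://my-holding-bucket.s3.amazonaws.com/scdms/dms-components.yml

  # Associate the product with the portfolio so end users can launch it
  ProductToPortfolio:
    Type: AWS::ServiceCatalog::PortfolioProductAssociation
    Properties:
      PortfolioId: !Ref MigrationsPortfolio
      ProductId: !Ref DmsComponentsProduct
```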

Architecture

The following diagram shows an overview of the architecture:

The diagram depicts two VPCs, a source and a target. In the source VPC, there is an EC2-based Microsoft SQL Server instance. In the target VPC, there are three components: an Amazon RDS for SQL Server database, an AWS Database Migration Service replication instance, and a migration utility EC2 instance with tools. The two VPCs are connected with VPC peering.

Figure 1: Architecture overview

Networking highlights

  • A virtual private cloud (VPC) to simulate an on-premises (or any source) environment. In a real-world scenario, this might be a data center or a VPC that contains a database running on an EC2 instance.
  • Another VPC to contain the target Amazon RDS for SQL Server instance and the AWS DMS components, specifically the replication instance and any utility instances.
  • A peering connection between the VPCs to allow for network communication between the source and target VPCs. In a real-world scenario, you should use a reliable, encrypted connection; this might be an AWS Site-to-Site VPN with an AWS Direct Connect connection to your on-premises environment. (A minimal sketch of the peering setup follows this list.)
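
Here is a minimal CloudFormation sketch of the peering piece, assuming the two VPCs already exist; the VPC IDs, route table ID, and CIDR range are placeholder parameters. You would also add a matching return route in the target VPC's route tables, which is not shown.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal sketch - peer a source VPC with the target VPC and route traffic over the peering connection.

Parameters:
  SourceVpcId:
    Type: AWS::EC2::VPC::Id
  TargetVpcId:
    Type: AWS::EC2::VPC::Id
  SourceRouteTableId:
    Type: String                 # route table used by the source subnets
  TargetVpcCidr:
    Type: String
    Default: 10.1.0.0/16         # placeholder CIDR of the target VPC

Resources:
  SourceToTargetPeering:
    Type: AWS::EC2::VPCPeeringConnection
    Properties:
      VpcId: !Ref SourceVpcId
      PeerVpcId: !Ref TargetVpcId

  # Route from the source VPC to the target VPC over the peering connection
  RouteToTarget:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref SourceRouteTableId
      DestinationCidrBlock: !Ref TargetVpcCidr
      VpcPeeringConnectionId: !Ref SourceToTargetPeering
```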

Walkthrough

In this solution, we show you how to use AWS DMS to set up the components required to migrate a SQL Server database to an Amazon RDS for SQL Server instance. We provide AWS CloudFormation scripts to help you get started quickly. Here are the steps you will follow, in order:

Deploy Service Catalog products

  • Provision the solution AWS Service Catalog portfolio and products that contain the solution components.

Deploy solution prerequisites

  • Create the networking and security environment for the database migrations.
  • Launch the Amazon RDS for SQL Server database instance. This is the target database.

Migrate

  • Set up the AWS DMS components (replication instance, replication task).
  • Set up a utility server to host assistive tools (for example, the AWS Schema Conversion Tool).
  • Start the database migration process from source database to target database.

Deploy AWS Service Catalog components

You need an AWS account to deploy the solution code. If you don’t already have an AWS account, create one at https://aws.amazon.com by following the on-screen instructions. You must have IAM permissions in that account to launch AWS CloudFormation templates that create IAM roles.

Download content and use your own bucket

  1. Download this zip file, and extract its contents.
  2. Create an S3 bucket in the desired Region and make a note of the bucket name.
  3. Upload the contents of the unzipped file to a folder in the bucket (for example, ‘scdms’).
  4. Open the scdms/ folder in S3.
  5. Choose Deployer-Helper.yml.
  6. In the Object overview section, copy the Object URL.

Deploy the CloudFormation stack for the solution

  1. Open the AWS CloudFormation console: https://console.aws.amazon.com/cloudformation
  2. Choose Create stack, With new resources (standard).
  3. Choose Amazon S3 URL, paste the link you copied into Amazon S3 URL, and then choose Next.
  4. In Specify stack details, for Stack name, enter ‘ServiceCatalogDMSSolution’.
  5. Under Parameters:
    • For HoldingBucket, enter the name of the S3 bucket you created that holds the content.
    • For HoldingKeyPrefix, enter the folder name that contains the content (in our example, ‘scdms/’).
  6. Choose Next, and proceed to create the stack.

When stack creation is complete, navigate to the Outputs section and choose the URL value for the SolutionCloudFormationTemplateLaunch key. This takes you to the CloudFormation Create stack page for the solution components.

  1. On Specify stack details, add the ARN of the user, group, or role that you want to allow to deploy the created product (for example, arn:aws:iam::123456789012:user/alloweduser or arn:aws:iam::123456789012:root). You do not need to add or edit any of the other properties for the AWS CloudFormation stack. We have also pre-populated your current account number to make this easier. (A sketch of the resulting portfolio access grant follows these steps.)
  2. Deploy the stack.
  3. You can monitor the stack creation progress on the Events tab. The AWS CloudFormation stack launch is complete when the stack status is CREATE_COMPLETE.
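
Behind the scenes, the ARN you supply is granted access to the portfolio through a portfolio principal association. The following is a minimal sketch of that grant; the portfolio ID and principal ARN are placeholder parameters rather than the exact logical names used by the solution template.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal sketch - grant an IAM principal access to an existing portfolio.

Parameters:
  PortfolioId:
    Type: String                 # for example, the ID of the Migrations portfolio
  PrincipalArn:
    Type: String                 # for example, arn:aws:iam::123456789012:user/alloweduser

Resources:
  # Allow the principal to see and launch the portfolio's products
  AllowedPrincipalAssociation:
    Type: AWS::ServiceCatalog::PortfolioPrincipalAssociation
    Properties:
      PortfolioId: !Ref PortfolioId
      PrincipalARN: !Ref PrincipalArn
      PrincipalType: IAM
```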

Now that the AWS Service Catalog products, portfolio, and required associations have been deployed, let’s examine what we built.

  1. Open the AWS Service Catalog console. The menu on the left contains an End User section and an Administrative section. Let’s explore the Administrative section first.
  2. In Portfolios, you should see the portfolio we deployed in the list.
  3. Choose the portfolio and examine its configuration, specifically the products associated to it and the groups, roles, and users. If you drill down to the product, you can see the initial version we deployed and the AWS CloudFormation template that backs it.
  4. In the upper left of the console, choose Products to open the End User section. (If you look at the provisioned product list, you will most likely not have any yet. All we have done thus far is create the product. We have not provisioned an instance of it.)
  5. The product we just deployed should appear in the list of available products. Choose the product and examine its metadata, such as its owner and available versions.

Deploy solution prerequisites

We can now provision the infrastructure. These are components that you likely already have in a real-world environment, but they are required here for demonstration purposes.

  1. In the AWS Service Catalog console, navigate to the Migrations – Demo Infrastructure product, and then choose Launch.
  2. In the ‘Provisioned product name’ field, type in the desired name; for example, use ‘MigrationDemoInfrastructure’.
  3. In Parameters, use the default values for most fields, but make the following additions:
    1. Choose two Availability Zones.
    2. In Workstation CIDR, use your workstation’s public IP address and CIDR Prefix (e.g. /32).
      To get your workstation IP address, see https://checkip.amazonaws.com/.
      The format should be: A.B.C.D/X (e.g. 192.168.1.34/32)
      The CIDR allows RDP and SQL access to the EC2 instance, utility server, and RDS host (see the security group sketch after these steps).
    3. Enter a user name and secure password to use for the source database and server.
    4. Enter a database name (for example, dms_sample).
  4. Launch the product.
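
For reference, the Workstation CIDR parameter typically ends up in security group rules along the lines of the following sketch, which allows RDP (port 3389) and SQL Server (port 1433) only from that CIDR. The parameter names and defaults are illustrative assumptions, not the product’s exact values.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal sketch - allow RDP and SQL Server access from a single workstation CIDR.

Parameters:
  VpcId:
    Type: AWS::EC2::VPC::Id
  WorkstationCidr:
    Type: String
    Default: 192.168.1.34/32     # replace with your own public IP address and prefix

Resources:
  WorkstationAccessSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: RDP and SQL Server access from the workstation only
      VpcId: !Ref VpcId
      SecurityGroupIngress:
        - IpProtocol: tcp        # RDP
          FromPort: 3389
          ToPort: 3389
          CidrIp: !Ref WorkstationCidr
        - IpProtocol: tcp        # SQL Server
          FromPort: 1433
          ToPort: 1433
          CidrIp: !Ref WorkstationCidr
```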

After the deployment is complete, we can provision the target database with Amazon RDS. (A minimal sketch of the kind of resources this product provisions follows the steps below.)

  1. In the AWS Service Catalog console, navigate to the Migrations – RDS MSSQL Database Instance product, and then choose Launch.
  2. In the ‘Provisioned product name’ field, type in the desired name; for example, use ‘TargetRDS’.
  3. In Parameters, use the default values for most fields, but make the following additions:
    1. Choose the two TargetVPC subnets (created by the ‘MigrationDemoInfrastructure’ product, available in the output section).
    2. Choose the RDS security group (created by the ‘MigrationDemoInfrastructure’ product, available in the output section).
    3. Enter a user name and secure password for the target database.
  4. Launch the product.
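
For reference, the target database product provisions an Amazon RDS for SQL Server instance broadly similar to the following sketch. The subnet group, instance class, edition, and storage size shown here are illustrative assumptions, not the product’s exact values.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal sketch - an Amazon RDS for SQL Server target instance in the target VPC.

Parameters:
  SubnetIds:
    Type: List<AWS::EC2::Subnet::Id>     # the two TargetVPC subnets
  RdsSecurityGroupId:
    Type: AWS::EC2::SecurityGroup::Id
  MasterUsername:
    Type: String
  MasterUserPassword:
    Type: String
    NoEcho: true

Resources:
  TargetSubnetGroup:
    Type: AWS::RDS::DBSubnetGroup
    Properties:
      DBSubnetGroupDescription: Subnets for the target RDS instance
      SubnetIds: !Ref SubnetIds

  TargetSqlServer:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: sqlserver-se               # assumed edition; Express or Web also work for demos
      LicenseModel: license-included
      DBInstanceClass: db.m5.large       # illustrative instance class
      AllocatedStorage: '100'
      MasterUsername: !Ref MasterUsername
      MasterUserPassword: !Ref MasterUserPassword
      DBSubnetGroupName: !Ref TargetSubnetGroup
      VPCSecurityGroups:
        - !Ref RdsSecurityGroupId
      PubliclyAccessible: false
      MultiAZ: false
```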

Deploy migration components

After the Amazon RDS deployment is complete, we can deploy the utility server that contains the required scripts and the AWS Schema Conversion Tool. (A minimal sketch of the utility server instance follows the steps below.)

  1. In the AWS Service Catalog console, navigate to the Migrations – Utility Server product, and then choose Launch.
  2. In the ‘Provisioned product name’ field, type in the desired name; for example, use ‘DemoUtil’.
  3. In Parameters, use the default values for most fields, but make the following additions:
    1. Choose the two TargetVPC subnets (created by the infrastructure product).
    2. Choose the utility security group – the full name will be in the output of the ‘MigrationDemoInfrastructure’ provisioned product.
    3. Enter a user name and secure password for the utility server.
    4. Enter an endpoint, database name, user name, and password for the target database (the values will be in the output section of the ‘TargetRDS’ provisioned product).
  4. Launch the product.
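
For reference, the utility server is essentially a Windows EC2 instance in a TargetVPC subnet with the utility security group, as in the following sketch. The AMI lookup, instance type, and key pair parameter are illustrative assumptions; installation of the Schema Conversion Tool and helper scripts is handled by the product’s own bootstrap logic and is not shown here.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal sketch - a Windows utility server for migration tooling in the target VPC.

Parameters:
  SubnetId:
    Type: AWS::EC2::Subnet::Id           # one of the TargetVPC subnets
  UtilitySecurityGroupId:
    Type: AWS::EC2::SecurityGroup::Id
  KeyPairName:
    Type: AWS::EC2::KeyPair::KeyName
  WindowsAmiId:
    # Resolves to the latest Windows Server 2019 AMI through a public SSM parameter
    Type: AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>
    Default: /aws/service/ami-windows-latest/Windows_Server-2019-English-Full-Base

Resources:
  UtilityServer:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref WindowsAmiId
      InstanceType: m5.large             # illustrative size
      KeyName: !Ref KeyPairName
      SubnetId: !Ref SubnetId
      SecurityGroupIds:
        - !Ref UtilitySecurityGroupId
      Tags:
        - Key: Name
          Value: migration-utility-server
```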

We can also deploy the migration components. (A minimal sketch of the DMS resources follows the steps below.)

  1. In the AWS Service Catalog console, navigate to the Migrations – Database Migration Service Components product, and then choose Launch.
  2. In the ‘Provisioned product name’ field, type in the desired name; for example, use ‘DemoDMS’.
  3. In Parameters, use the default values for most fields, but make the following additions:
    1. Choose the two TargetVPC subnets (created by the ‘MigrationDemoInfrastructure’ product, available in the output section).
    2. Choose the DMS security group (created by the ‘MigrationDemoInfrastructure’ product, available in the output section).
    3. Enter an endpoint (private IP address created by the ‘MigrationDemoInfrastructure’ product, available in the output section) as well as the database name, user name, and password for the source database (same as used for the ‘MigrationDemoInfrastructure’ product).
    4. Enter an endpoint (created by the ‘TargetRDS’ product, available in the output section), database name, user name, and password for the target database (same as used for the ‘TargetRDS’ product).
  4. Launch the product.
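
For reference, the DMS product provisions a replication instance, a source endpoint, a target endpoint, and a replication task broadly similar to the following sketch. The parameter names, instance class, and table mappings are illustrative assumptions; the task below performs a full load of the dbo schema only.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal sketch - DMS replication instance, endpoints, and a full-load replication task.

Parameters:
  SubnetIds:
    Type: List<AWS::EC2::Subnet::Id>     # the two TargetVPC subnets
  DmsSecurityGroupId:
    Type: AWS::EC2::SecurityGroup::Id
  SourceServerAddress:
    Type: String                         # private IP of the source SQL Server
  TargetServerAddress:
    Type: String                         # endpoint of the target RDS for SQL Server instance
  DatabaseName:
    Type: String
    Default: dms_sample
  DbUsername:
    Type: String
  DbPassword:
    Type: String
    NoEcho: true

Resources:
  ReplicationSubnetGroup:
    Type: AWS::DMS::ReplicationSubnetGroup
    Properties:
      ReplicationSubnetGroupDescription: Subnets for the DMS replication instance
      SubnetIds: !Ref SubnetIds

  ReplicationInstance:
    Type: AWS::DMS::ReplicationInstance
    Properties:
      ReplicationInstanceClass: dms.t3.medium   # illustrative size
      ReplicationSubnetGroupIdentifier: !Ref ReplicationSubnetGroup
      VpcSecurityGroupIds:
        - !Ref DmsSecurityGroupId
      PubliclyAccessible: false

  SourceEndpoint:
    Type: AWS::DMS::Endpoint
    Properties:
      EndpointType: source
      EngineName: sqlserver
      ServerName: !Ref SourceServerAddress
      Port: 1433
      DatabaseName: !Ref DatabaseName
      Username: !Ref DbUsername
      Password: !Ref DbPassword

  TargetEndpoint:
    Type: AWS::DMS::Endpoint
    Properties:
      EndpointType: target
      EngineName: sqlserver
      ServerName: !Ref TargetServerAddress
      Port: 1433
      DatabaseName: !Ref DatabaseName
      Username: !Ref DbUsername
      Password: !Ref DbPassword

  # Full load of all tables in the dbo schema from source to target
  FullLoadTask:
    Type: AWS::DMS::ReplicationTask
    Properties:
      MigrationType: full-load
      ReplicationInstanceArn: !Ref ReplicationInstance
      SourceEndpointArn: !Ref SourceEndpoint
      TargetEndpointArn: !Ref TargetEndpoint
      TableMappings: '{"rules":[{"rule-type":"selection","rule-id":"1","rule-name":"include-dbo","object-locator":{"schema-name":"dbo","table-name":"%"},"rule-action":"include"}]}'
```

If you want ongoing replication after the initial copy, a full-load-and-cdc migration type keeps the target in sync with changes made on the source after the full load.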

After this step completes, provisioning is done. You can now start the replication task. To see the replication in action, create tables and add data in the source database.

Cleanup

Note: You are responsible for the cost of the AWS services used while running this sample deployment. There is no additional cost for using this solution. For details, see the pricing pages for each AWS service you will be using in this solution. After you are done with the implementation for this migration solution, delete the resources to avoid unnecessary charges.

To terminate the provisioned products

  1. Open the AWS Service Catalog console.
  2. Choose Provisioned products, and then choose a provisioned product.
  3. On the Actions menu, choose Terminate, and follow the instructions to complete termination. Repeat for each provisioned product you created.

To delete the deployed stacks

  1. Open the AWS CloudFormation console.
  2. Choose each stack you created (for example, ServiceCatalogDatabaseMigration and ServiceCatalogDMSSolution), and then choose Delete.
  3. You can track the progress of the deletion on the Events tab.
  4. When the status changes from DELETE_IN_PROGRESS to DELETE_COMPLETE, the stack disappears from the list.

Additional Considerations

In our example, we use AWS DMS with a minimalist approach to migrating data, so it doesn’t copy the entire schema structure from the source to the target. However, you can use the full capabilities of AWS DMS to copy the entire schema structure from source to target with the help of the Schema Conversion Tool provisioned on the utility server. After the AWS DMS migration is complete, you can also perform post-migration activities, such as creating indexes and enabling foreign keys. These activities can be built as AWS Service Catalog service actions.

Resources

The DMS example in this post is built upon the ‘MS SQL Server – AWS Database Migration Service (DMS) Replication Demo’ project on GitHub; for specifics on the CloudFormation scripts used here, please refer to that repository.

Conclusion

We hope you found this post informative and useful for your database migration projects. It showed how you can standardize and simplify database migrations for your teams with AWS DMS and AWS Service Catalog. We walked through how to build a Migrations portfolio in AWS Service Catalog for your end users. Within the portfolio, your end users can pick and launch the approved tools they need to kick off the DMS process. Give it a try by running the provided CloudFormation scripts to set up the example environment automatically.

If you have questions about implementing the solution described in this post, you can start a new thread on the AWS Service Catalog Forum or contact AWS Support.

About the Authors

Author: Yosef Lifshits

Yosef Lifshits is an Enterprise Solutions Architect helping Financial Services customers realize the potential of cloud computing on AWS. Outside of work, Yosef enjoys spending time with his family, traveling, or working on projects like smart home automation.

Author: Nivas Durairaj

Nivas Durairaj is a Specialist for AWS Service Catalog and AWS Control Tower. He is passionate about technology and enjoys collaborating with customers on their journey to the cloud. Outside of work, Nivas likes playing tennis, hiking, doing yoga and traveling around the world.