Standardize with speed using AWS Service Catalog stack import
If you’ve used AWS Service Catalog, you probably know how it helps organizations increase standardization, encourage compliance, and improve speed and agility. This is done by enabling central administrators to publish and manage a standard set of compliant products that users can consume in a self-service manner.
Customers often start by creating an AWS CloudFormation-based product in AWS Service Catalog. These products are then shared through AWS Service Catalog so that end users can provision them in a self-service manner. Sometimes customers who have created AWS CloudFormation stacks outside of AWS Service Catalog still want to take advantage of its benefits:
- Applying launch constraints to allow resource provisioning while still keeping in place strict user AWS Identity and Access Management (IAM) permissions.
- Enabling self-service actions on stacks so end users can perform approved actions, such as starting, stopping, and terminating EC2 instances.
- Standardizing products across environments for your organization.
- Associating tags through the TagOption library to standardize on tagging schemas.
- Simplifying budget and cost visibility for deployed stacks.
To bring these stacks under AWS Service Catalog management, customers previously had to terminate and recreate the existing stacks. For nonproduction and short-lived components, this wasn’t a problem. However, for mission-critical scenarios where maximum uptime is vital, requiring stacks to be recreated was not ideal. Similarly, some customer applications had integrations dependent on unique Amazon S3 bucket names or specific Amazon RDS database endpoints that would have to change when the stack was recreated.
For cases like these, the AWS Service Catalog team has released a stack import feature to import and govern your CloudFormation templates and running stacks. You can now bring these long-running stacks under AWS Service Catalog control while the underlying resources are untouched.
How it works
The stack import feature maps running CloudFormation stacks to an AWS Service Catalog product, artifact, and owner. The CloudFormation templates associated with the stacks are also used to create these reusable AWS Service Catalog products.
The stack import request compares the CloudFormation stack template with the associated AWS Service Catalog product provisioning artifact. If the resource definition matches, the resource is imported into AWS Service Catalog.
Stack import can be run from the AWS Service Catalog console, AWS Command Line Interface (CLI), or Service Catalog stack import API. We provide examples of all three operations.
Getting started
Before you begin, make sure you have the following:
- The appropriate IAM policy assigned to the user or role that will perform the stack import. At a minimum, the user or role must have the cloudformation:GetTemplate and cloudformation:DescribeStacks permissions, as described in the API documentation. For the creation of the product, we created an administrator user with the AWSServiceCatalogAdminFullAccess policy attached. Depending on your security needs, you might want to use your own policies.
- An AWS Service Catalog portfolio. You will add the product corresponding to the CloudFormation stack to your portfolio.
- One or more CloudFormation stacks that were created outside of AWS Service Catalog. This stack should have a status of CREATE_COMPLETE, UPDATE_COMPLETE, UPDATE_ROLLBACK_COMPLETE, IMPORT_COMPLETE, or IMPORT_ROLLBACK_COMPLETE. The administrator user should have access to the CloudFormation template that corresponds to the stack to be imported.
- The AWS CLI installed and configured correctly with permissions that allow the same level of access required to import a stack using the console.
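Before importing, you can verify programmatically that a candidate stack is in one of the importable states listed above. The following is a minimal sketch using the AWS SDK for Python (boto3); the stack name passed to check_stack is an assumption you would replace with your own.

```python
# The set of stack statuses that the stack import feature accepts.
IMPORTABLE_STATUSES = {
    "CREATE_COMPLETE",
    "UPDATE_COMPLETE",
    "UPDATE_ROLLBACK_COMPLETE",
    "IMPORT_COMPLETE",
    "IMPORT_ROLLBACK_COMPLETE",
}


def is_importable(stack_status):
    """Return True if the given CloudFormation stack status allows import."""
    return stack_status in IMPORTABLE_STATUSES


def check_stack(stack_name):
    """Look up a stack's current status and report whether it can be imported."""
    import boto3  # deferred so is_importable works without the SDK installed

    cfn = boto3.client("cloudformation")
    stack = cfn.describe_stacks(StackName=stack_name)["Stacks"][0]
    return is_importable(stack["StackStatus"])
```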
Using the console to import a stack into AWS Service Catalog
In our example, we are using the Amazon RDS sample template with a deletion policy to create several stacks in our account. However, this can also be done with any valid CloudFormation stack. To add a new product, open the AWS Service Catalog console and follow the instructions in the AWS Service Catalog Administrator Guide. Use the template file for your stack, the URL of the CloudFormation template, or the ARN of the CloudFormation stack. You’ll find this stack information in the CloudFormation console or by using the AWS CLI.
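If you prefer scripting over the console for this lookup, the stack ARN and template body can also be retrieved with the AWS SDK for Python (boto3). This is a sketch under the assumption that you know the stack's name; the ARN-parsing helper reflects the standard CloudFormation ARN layout.

```python
def stack_name_from_arn(stack_arn):
    """Extract the stack name from a CloudFormation stack ARN.

    Stack ARNs have the form
    arn:aws:cloudformation:region:account:stack/NAME/uuid.
    """
    return stack_arn.split("/")[1]


def get_stack_details(stack_name):
    """Return the (ARN, template body) pair for a running stack."""
    import boto3  # deferred so stack_name_from_arn is usable without the SDK

    cfn = boto3.client("cloudformation")
    arn = cfn.describe_stacks(StackName=stack_name)["Stacks"][0]["StackId"]
    template = cfn.get_template(StackName=stack_name)["TemplateBody"]
    return arn, template
```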
After you have added a product, in the AWS Service Catalog console, choose Provisioned products, and from the Actions menu, choose Import stack.
Figure 1: Provisioned products page
On the Import as Provisioned product page, enter the ARN of the stack you want to import. Under Service Catalog product details, choose the product whose template corresponds to the stack and the version associated with the product. In Provisioned product name, enter a descriptive name, and then choose Import.
Figure 2: Import as Provisioned products page
That’s it! You’ve just imported your first CloudFormation stack into AWS Service Catalog. You didn’t need to recreate the stack, so endpoints and other important stack features have not changed. You can now tag the resource, apply constraints, and perform other management functions just as you would for any other product.
Use AWS CLI and AWS SDK for Python to import stacks into AWS Service Catalog
If you have multiple stacks, you can import these programmatically using the AWS CLI and the AWS SDKs (in our example, the SDK for Python). When you use the AWS CLI to import a stack, you must provide the provisioned product name, the product ID, the provisioning artifact ID, and the physical ID.
We put a list of these values into a comma-separated values (CSV) file without headers so that each line represents a new stack to import. In our first example, the file is named stacks1.csv.
DBd1,prod-bllvp7oilp6cs,pa-rpjytv6dtou3s,arn:aws:cloudformation:us-east-1:123456789012:stack/DBdB/6e40f320-380e-11eb-98e8-0e7b928af6d7
DBd2,prod-bllvp7oilp6cs,pa-rpjytv6dtou3s,arn:aws:cloudformation:us-east-1:123456789012:stack/DBdC/8379fe80-380e-11eb-aec4-0e09ee5d9c1f
DBd3,prod-bllvp7oilp6cs,pa-rpjytv6dtou3s,arn:aws:cloudformation:us-east-1:123456789012:stack/DbdD/94147b80-380e-11eb-91f6-12f8925a37c4
The first value on each line is your provisioned product name. The second value is the product ID that matches the CloudFormation stack you’d previously uploaded to the console. You can find this ID in the AWS Service Catalog console. Choose Products, and then copy the ID of your stack’s product name. The third value is the provisioning artifact ID. You can find this ID by choosing the product name and copying the ID that corresponds to the correct version. The final value, the physical ID, is the CloudFormation stack ID unique to the stack you are importing. Keep in mind that in our example, these stacks all align with the same product ID and provisioning artifact ID. However, this is by no means required.
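Rather than copying the product ID and provisioning artifact ID out of the console, you can resolve them by name with boto3. This is a hedged sketch: it assumes the caller has admin-level Service Catalog permissions, and the product and version names are placeholders for your own.

```python
def artifact_id_for_version(artifact_summaries, version_name):
    """Pick the provisioning artifact ID whose Name matches version_name."""
    for summary in artifact_summaries:
        if summary["Name"] == version_name:
            return summary["Id"]
    raise ValueError(f"no provisioning artifact named {version_name!r}")


def resolve_ids(product_name, version_name):
    """Return (product_id, provisioning_artifact_id) for a product version."""
    import boto3  # deferred so artifact_id_for_version is testable alone

    sc = boto3.client("servicecatalog")
    detail = sc.describe_product_as_admin(Name=product_name)
    prod_id = detail["ProductViewDetail"]["ProductViewSummary"]["ProductId"]
    art_id = artifact_id_for_version(
        detail["ProvisioningArtifactSummaries"], version_name
    )
    return prod_id, art_id
```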
Create a separate file in the same directory as your stacks1.csv file and copy the following shell script into it:
#!/bin/bash
# Read one stack per line from stacks1.csv and import each into Service Catalog.
while IFS=, read -r provprod_name prod_id artifact_id phys_id; do
  echo "$provprod_name"
  echo "$prod_id"
  echo "$artifact_id"
  echo "$phys_id"
  aws servicecatalog import-as-provisioned-product \
    --product-id "$prod_id" \
    --provisioning-artifact-id "$artifact_id" \
    --provisioned-product-name "$provprod_name" \
    --physical-id "$phys_id"
done < stacks1.csv
When you run the script, the values are displayed on the screen and a JSON response indicates the provisioned products were successfully created.
To query all provisioned products, use this command:
aws servicecatalog search-provisioned-products
This command is eventually consistent, so it may take a moment for newly imported products to appear in the results.
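The same query can be made through boto3 and reduced to just the fields you care about. The following sketch follows pagination tokens so all provisioned products are collected; the helper that flattens the records is an illustration, not part of the Service Catalog API.

```python
def names_and_statuses(records):
    """Reduce provisioned-product records to (Name, Status) pairs."""
    return [(r["Name"], r["Status"]) for r in records]


def list_provisioned_products():
    """Collect every provisioned product, following pagination tokens."""
    import boto3  # deferred so names_and_statuses is testable without AWS

    sc = boto3.client("servicecatalog")
    records, token = [], None
    while True:
        kwargs = {"PageToken": token} if token else {}
        page = sc.search_provisioned_products(**kwargs)
        records.extend(page["ProvisionedProducts"])
        token = page.get("NextPageToken")
        if not token:
            return names_and_statuses(records)
```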
If you prefer to do this in a more OS-agnostic way, you can use an AWS SDK. Because we are using the AWS SDK for Python, make sure you follow the installation instructions for boto3. In the following code, we’re using the import_as_provisioned_product client API. First, create a CSV file with the same format shown earlier but with different provisioned product names and physical IDs. Name this file stacks2.csv. Then create a separate file in the same directory and copy the following code into it.
#!/usr/bin/env python3
import logging

import boto3
from botocore.exceptions import ClientError


def import_stack(provprod_name, prod_id, artifact_id, phys_id):
    """Import a running CloudFormation stack into AWS Service Catalog.

    :param provprod_name: Provisioned product name, which must be unique
    :param prod_id: Product ID that matches the stack to be imported
    :param artifact_id: Provisioning artifact ID of the version
    :param phys_id: Physical ID, specifically the CloudFormation stack ARN
    :return: True if the stack was imported, else False
    """
    try:
        sc = boto3.client('servicecatalog')
        sc.import_as_provisioned_product(
            ProductId=prod_id,
            ProvisioningArtifactId=artifact_id,
            ProvisionedProductName=provprod_name,
            PhysicalId=phys_id,
        )
    except ClientError as e:
        logging.error(e)
        return False
    return True


# Print the values for each stack listed in the local CSV file, then import it.
with open('stacks2.csv', mode='r', encoding='utf-8-sig') as file:
    lines = file.readlines()

for line in lines:
    fields = [i.strip() for i in line.split(',')]
    print(fields[0])
    print(fields[1])
    print(fields[2])
    print(fields[3])
    import_stack(fields[0], fields[1], fields[2], fields[3])
This code performs the same actions as the shell script. It parses the CSV data, prints it, and then imports the stack into AWS Service Catalog while preserving the existing stack resources.
Conclusion
In this post, we showed you how to use the stack import feature to import long-running CloudFormation stacks into AWS Service Catalog without impacting the underlying resources. The console, AWS CLI, and SDK examples in this blog post should help you get started. For more information, see ImportAsProvisionedProduct in the AWS Service Catalog Developer Guide.
About the authors
Chris Spruell is a Senior Solutions Architect with Amazon Web Services (AWS) based out of Atlanta, GA. During his 25+ years in technology, he has worked with customers of all sizes to build resilient, secure, and operationally efficient architectures for their enterprise workloads.
Kiran Lakkireddy is a Senior Solutions Architect based out of Chicago, IL. He helps customers adopt AWS successfully and works with them to ensure that their environments are architected for success and according to AWS best practices.