Central Logging in Multi-Account Environments
To create your central-logging bucket, do the following:
- Save the template file to your local developer machine as “central-log-bucket.json”
- From the CloudFormation console, select “create new stack” and import the file “central-log-bucket.json”
- Fill in the parameters and complete the stack creation steps
- Verify that the bucket has been created successfully and take note of the bucket name
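If you prefer the command line, the same stack can be created with the AWS CLI. The sketch below assumes a stack name and a BucketName parameter key; substitute whatever parameters your bucket template actually defines:

aws cloudformation create-stack \
  --stack-name central-log-bucket \
  --template-body file://central-log-bucket.json \
  --parameters ParameterKey=BucketName,ParameterValue=central-log-do-not-delete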
Step 2: Create data processing Lambda function
Use the template below to create a Lambda function in your logging account that Amazon Kinesis Data Firehose will use for data transformation during the delivery process to S3. This function is based on the AWS Lambda kinesis-firehose-cloudwatch-logs-processor blueprint.
The function can be created manually from the blueprint or by using the CloudFormation template below. To find the blueprint, navigate to Lambda -> Create -> Function -> Blueprints.
This function unzips the event message, parses it, and verifies that it is a valid CloudWatch log event. Additional processing can be added if needed. Because this function is generic, it can be reused by all log-delivery streams.
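For reference, after base64-decoding and gunzipping a record's data field, the function sees a CloudWatch Logs payload shaped roughly like the following (all values here are illustrative):

{
  "messageType": "DATA_MESSAGE",
  "owner": "111111111111",
  "logGroup": "/vpc/flow-logs",
  "logStream": "eni-01234567890abcdef-all",
  "subscriptionFilters": ["central-logging-subscription"],
  "logEvents": [
    { "id": "31953106606966983378809025079804", "timestamp": 1550000000000, "message": "2 111111111111 eni-01234567890abcdef ..." }
  ]
}

Records whose messageType is not DATA_MESSAGE (for example, CONTROL_MESSAGE test events) are returned to Firehose marked as ProcessingFailed.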
{ "AWSTemplateFormatVersion":"2010-09-09", "Description": "Create cloudwatch data processing lambda function", "Resources":{ "LambdaRole": { "Type": "AWS::IAM::Role", "Properties": { "AssumeRolePolicyDocument": { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "lambda.amazonaws.com" }, "Action": "sts:AssumeRole" } ] }, "Path": "/", "Policies": [ { "PolicyName": "firehoseCloudWatchDataProcessing", "PolicyDocument": { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents" ], "Resource": "arn:aws:logs:*:*:*" } ] } } ] } }, "FirehoseDataProcessingFunction": { "Type": "AWS::Lambda::Function", "Properties": { "Handler": "index.handler", "Role": {"Fn::GetAtt": ["LambdaRole","Arn"]}, "Description": "Firehose cloudwatch data processing", "Code": { "ZipFile" : { "Fn::Join" : ["n", [ "'use strict';", "const zlib = require('zlib');", "function transformLogEvent(logEvent) {", " return Promise.resolve(`${logEvent.message}n`);", "}", "exports.handler = (event, context, callback) => {", " Promise.all(event.records.map(r => {", " const buffer = new Buffer(r.data, 'base64');", " const decompressed = zlib.gunzipSync(buffer);", " const data = JSON.parse(decompressed);", " if (data.messageType !== 'DATA_MESSAGE') {", " return Promise.resolve({", " recordId: r.recordId,", " result: 'ProcessingFailed',", " });", " } else {", " const promises = data.logEvents.map(transformLogEvent);", " return Promise.all(promises).then(transformed => {", " const payload = transformed.reduce((a, v) => a + v, '');", " const encoded = new Buffer(payload).toString('base64');", " console.log('---------------payloadv2:'+JSON.stringify(payload, null, 2));", " return {", " recordId: r.recordId,", " result: 'Ok',", " data: encoded,", " };", " });", " }", " })).then(recs => callback(null, { records: recs }));", "};" ]]} }, "Runtime": "nodejs8.10", "Timeout": "60" } } }, "Outputs":{ "Function" : { "Description": "Function ARN", "Value": {"Fn::GetAtt": ["FirehoseDataProcessingFunction","Arn"]}, "Export" : { "Name" : {"Fn::Sub": "${AWS::StackName}-Function" }} } }
}
To create the function, follow the steps below:
- Save the template file as “central-logging-lambda.json”
- Log in to the logging account and, from the CloudFormation console, select “create new stack”
- Import the file “central-logging-lambda.json” and click next
- Follow the steps to create the stack and verify successful creation
- Take note of the Lambda function ARN from the outputs section
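Rather than copying the ARN from the console, you can also read it from the stack outputs with the AWS CLI (a sketch, assuming you named the stack central-logging-lambda):

aws cloudformation describe-stacks \
  --stack-name central-logging-lambda \
  --query "Stacks[0].Outputs"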
Step 3: Create log destination in logging account
A log destination is used as the target of a subscription from application accounts. A log destination can be shared between multiple subscriptions; however, under the architecture suggested in this solution, all logs streamed to the same destination are stored in the same S3 location. If you would like to store log data in a different hierarchy, or in a completely different bucket, you need to create separate destinations.
As noted previously, your destination and subscription have to be in the same region.
Use the template below to create the destination stack in the logging account.
{ "AWSTemplateFormatVersion":"2010-09-09", "Description": "Create log destination and required resources", "Parameters":{ "LogBucketName":{ "Type":"String", "Default":"central-log-do-not-delete", "Description":"Destination logging bucket" }, "LogS3Location":{ "Type":"String", "Default":"<BU>/<ENV>/<SOURCE_ACCOUNT>/<LOG_TYPE>/", "Description":"S3 location for the logs streamed to this destination; example marketing/prod/999999999999/flow-logs/" }, "ProcessingLambdaARN":{ "Type":"String", "Default":"", "Description":"CloudWatch logs data processing function" }, "SourceAccount":{ "Type":"String", "Default":"", "Description":"Source application account number" } }, "Resources":{ "MyStream": { "Type": "AWS::Kinesis::Stream", "Properties": { "Name": {"Fn::Join" : [ "", [{ "Ref" : "AWS::StackName" },"-Stream"] ]}, "RetentionPeriodHours" : 48, "ShardCount": 1, "Tags": [ { "Key": "Solution", "Value": "CentralLogging" } ] } }, "LogRole" : { "Type" : "AWS::IAM::Role", "Properties" : { "AssumeRolePolicyDocument" : { "Statement" : [ { "Effect" : "Allow", "Principal" : { "Service" : [ {"Fn::Join": [ "", [ "logs.", { "Ref": "AWS::Region" }, ".amazonaws.com" ] ]} ] }, "Action" : [ "sts:AssumeRole" ] } ] }, "Path" : "/service-role/" } }, "LogRolePolicy" : { "Type" : "AWS::IAM::Policy", "Properties" : { "PolicyName" : {"Fn::Join" : [ "", [{ "Ref" : "AWS::StackName" },"-LogPolicy"] ]}, "PolicyDocument" : { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": ["kinesis:PutRecord"], "Resource": [{ "Fn::GetAtt" : ["MyStream", "Arn"] }] }, { "Effect": "Allow", "Action": ["iam:PassRole"], "Resource": [{ "Fn::GetAtt" : ["LogRole", "Arn"] }] } ] }, "Roles" : [ { "Ref" : "LogRole" } ] } }, "LogDestination" : { "Type" : "AWS::Logs::Destination", "DependsOn" : ["MyStream","LogRole","LogRolePolicy"], "Properties" : { "DestinationName": {"Fn::Join" : [ "", [{ "Ref" : "AWS::StackName" },"-Destination"] ]}, "RoleArn": { "Fn::GetAtt" : ["LogRole", "Arn"] }, "TargetArn": { "Fn::GetAtt" : ["MyStream", "Arn"] }, "DestinationPolicy": { "Fn::Join" : ["",[ "{"Version" : "2012-10-17","Statement" : [{"Effect" : "Allow",", " "Principal" : {"AWS" : "", {"Ref":"SourceAccount"} ,""},", ""Action" : "logs:PutSubscriptionFilter",", " "Resource" : "", {"Fn::Join": [ "", [ "arn:aws:logs:", { "Ref": "AWS::Region" }, ":" ,{ "Ref": "AWS::AccountId" }, ":destination:",{ "Ref" : "AWS::StackName" },"-Destination" ] ]} ,""}]}" ]]} } }, "S3deliveryStream": { "DependsOn": ["S3deliveryRole", "S3deliveryPolicy"], "Type": "AWS::KinesisFirehose::DeliveryStream", "Properties": { "DeliveryStreamName": {"Fn::Join" : [ "", [{ "Ref" : "AWS::StackName" },"-DeliveryStream"] ]}, "DeliveryStreamType": "KinesisStreamAsSource", "KinesisStreamSourceConfiguration": { "KinesisStreamARN": { "Fn::GetAtt" : ["MyStream", "Arn"] }, "RoleARN": {"Fn::GetAtt" : ["S3deliveryRole", "Arn"] } }, "ExtendedS3DestinationConfiguration": { "BucketARN": {"Fn::Join" : [ "", ["arn:aws:s3:::",{"Ref":"LogBucketName"}] ]}, "BufferingHints": { "IntervalInSeconds": "60", "SizeInMBs": "50" }, "CompressionFormat": "UNCOMPRESSED", "Prefix": {"Ref": "LogS3Location"}, "RoleARN": {"Fn::GetAtt" : ["S3deliveryRole", "Arn"] }, "ProcessingConfiguration" : { "Enabled": "true", "Processors": [ { "Parameters": [ { "ParameterName": "LambdaArn", "ParameterValue": {"Ref":"ProcessingLambdaARN"} }], "Type": "Lambda" }] } } } }, "S3deliveryRole": { "Type": "AWS::IAM::Role", "Properties": { "AssumeRolePolicyDocument": { "Version": "2012-10-17", "Statement": [ { "Sid": "", "Effect": "Allow", 
"Principal": { "Service": "firehose.amazonaws.com" }, "Action": "sts:AssumeRole", "Condition": { "StringEquals": { "sts:ExternalId": {"Ref":"AWS::AccountId"} } } } ] } } }, "S3deliveryPolicy": { "Type": "AWS::IAM::Policy", "Properties": { "PolicyName": {"Fn::Join" : [ "", [{ "Ref" : "AWS::StackName" },"-FirehosePolicy"] ]}, "PolicyDocument": { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:AbortMultipartUpload", "s3:GetBucketLocation", "s3:GetObject", "s3:ListBucket", "s3:ListBucketMultipartUploads", "s3:PutObject" ], "Resource": [ {"Fn::Join": ["", [ {"Fn::Join" : [ "", ["arn:aws:s3:::",{"Ref":"LogBucketName"}] ]}]]}, {"Fn::Join": ["", [ {"Fn::Join" : [ "", ["arn:aws:s3:::",{"Ref":"LogBucketName"}] ]}, "*"]]} ] }, { "Effect": "Allow", "Action": [ "lambda:InvokeFunction", "lambda:GetFunctionConfiguration", "logs:PutLogEvents", "kinesis:DescribeStream", "kinesis:GetShardIterator", "kinesis:GetRecords", "kms:Decrypt" ], "Resource": "*" } ] }, "Roles": [{"Ref": "S3deliveryRole"}] } } }, "Outputs":{ "Destination" : { "Description": "Destination", "Value": {"Fn::Join": [ "", [ "arn:aws:logs:", { "Ref": "AWS::Region" }, ":" ,{ "Ref": "AWS::AccountId" }, ":destination:",{ "Ref" : "AWS::StackName" },"-Destination" ] ]}, "Export" : { "Name" : {"Fn::Sub": "${AWS::StackName}-Destination" }} } }
}
To create your log destination and all required resources, follow these steps:
- Save your template as “central-logging-destination.json”
- Log in to your logging account and, from the CloudFormation console, select “create new stack”
- Import the file “central-logging-destination.json” and click next
- Fill in the parameters to configure the log destination and click Next
a. Bucket name is the same as in the “create central logging bucket” step
b. LogS3Location is the directory hierarchy for saving log data that will be delivered to this destination
c. ProcessingLambdaARN is as created in the “create data processing Lambda function” step
d. SourceAccount is the application account number where the subscription will be created
- Follow the default steps to create the stack and verify successful creation
- Take note of the destination ARN as it appears in the outputs section
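The equivalent AWS CLI call is sketched below; the stack name and parameter values are illustrative, and --capabilities CAPABILITY_IAM is required because the template creates IAM roles:

aws cloudformation create-stack \
  --stack-name central-logging-destination \
  --template-body file://central-logging-destination.json \
  --capabilities CAPABILITY_IAM \
  --parameters \
    ParameterKey=LogBucketName,ParameterValue=central-log-do-not-delete \
    ParameterKey=LogS3Location,ParameterValue=marketing/prod/999999999999/flow-logs/ \
    ParameterKey=ProcessingLambdaARN,ParameterValue=arn:aws:lambda:us-east-1:111111111111:function:firehose-cloudwatch-processor \
    ParameterKey=SourceAccount,ParameterValue=999999999999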
Step 4: Create the log subscription in your application account
In this section, we will create the subscription filter in one of the application accounts to stream logs from a CloudWatch log group to the log destination that was created in your logging account.
Create log subscription filter
The subscription filter is created between the CloudWatch log group and a destination endpoint. A subscription can be filtered to send part (or all) of the logs in the log group. For example, you can create a subscription filter to stream only flow logs with status REJECT.
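For that REJECT example, a space-delimited filter pattern along the following lines works against the default VPC Flow Logs format (the bracketed field names are positional placeholders; only the action term is actually matched):

[version, account, eni, source, destination, srcport, destport, protocol, packets, bytes, windowstart, windowend, action="REJECT", flowlogstatus]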
Use the CloudFormation template below to create the subscription filter. The subscription filter and the log destination must be in the same region.
{ "AWSTemplateFormatVersion":"2010-09-09", "Description": "Create log subscription filter for a specific Log Group", "Parameters":{ "DestinationARN":{ "Type":"String", "Default":"", "Description":"ARN of logs destination" }, "LogGroupName":{ "Type":"String", "Default":"", "Description":"Name of LogGroup to forward logs from" }, "FilterPattern":{ "Type":"String", "Default":"", "Description":"Filter pattern to filter events to be sent to log destination; Leave empty to send all logs" } }, "Resources":{ "SubscriptionFilter" : { "Type" : "AWS::Logs::SubscriptionFilter", "Properties" : { "LogGroupName" : { "Ref" : "LogGroupName" }, "FilterPattern" : { "Ref" : "FilterPattern" }, "DestinationArn" : { "Ref" : "DestinationARN" } } } }
}
To create a subscription filter for one of the CloudWatch log groups in your application account, follow the steps below:
- Save the template as “central-logging-subscription.json”
- Log in to your application account and, from the CloudFormation console, select “create new stack”
- Select the file “central-logging-subscription.json” and click next
- Fill in the parameters as appropriate to your environment
a. DestinationARN is the value obtained in the “create log destination in logging account” step
b. FilterPattern is the filter value for log data to be streamed to your logging account (leave empty to stream all logs in the selected log group)
c. LogGroupName is the log group as it appears under CloudWatch Logs
- Verify successful creation of the subscription
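As with the previous stacks, the subscription stack can also be created from the AWS CLI; the sketch below uses illustrative values for the destination ARN and the log group name:

aws cloudformation create-stack \
  --stack-name central-logging-subscription \
  --template-body file://central-logging-subscription.json \
  --parameters \
    ParameterKey=DestinationARN,ParameterValue=arn:aws:logs:us-east-1:111111111111:destination:central-logging-destination-Destination \
    ParameterKey=LogGroupName,ParameterValue=/vpc/flow-logs \
    ParameterKey=FilterPattern,ParameterValue=""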
This completes the deployment process on both the logging-account and application-account sides. After a few minutes, log data will be streamed to the central-logging destination defined in your logging account.
Step 5: Analyzing log data
Once log data is centralized, it opens the door to running analytics on the consolidated data for business or security purposes. One of the powerful services that AWS offers here is Amazon Athena, which allows you to query data in S3 using standard SQL.
Follow the steps below to create a simple table and run queries on the flow log data that has been collected from your application accounts:
- Log in to your logging account and, from the Amazon Athena console, use the DDL below in your query editor to create a new table
CREATE EXTERNAL TABLE IF NOT EXISTS prod_vpc_flow_logs (
  Version INT,
  Account STRING,
  InterfaceId STRING,
  SourceAddress STRING,
  DestinationAddress STRING,
  SourcePort INT,
  DestinationPort INT,
  Protocol INT,
  Packets INT,
  Bytes INT,
  StartTime INT,
  EndTime INT,
  Action STRING,
  LogStatus STRING
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
  "input.regex" = "^([^ ]+)\\s+([0-9]+)\\s+([^ ]+)\\s+([^ ]+)\\s+([^ ]+)\\s+([^ ]+)\\s+([^ ]+)\\s+([^ ]+)\\s+([^ ]+)\\s+([^ ]+)\\s+([0-9]+)\\s+([0-9]+)\\s+([^ ]+)\\s+([^ ]+)$"
)
LOCATION 's3://central-logging-company-do-not-delete/';
- Click “Run query” and verify a successful run. This creates the table “prod_vpc_flow_logs”
- You can then run queries against the table data, as shown below:
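For instance, the following query (a sketch against the table defined above) surfaces the source/destination pairs with the most rejected connections:

SELECT SourceAddress, DestinationAddress, DestinationPort, count(*) AS RejectedCount
FROM prod_vpc_flow_logs
WHERE Action = 'REJECT'
GROUP BY SourceAddress, DestinationAddress, DestinationPort
ORDER BY RejectedCount DESC
LIMIT 20;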
Conclusion
By following the steps I’ve outlined, you will build a central logging solution to stream CloudWatch logs from one application account to a central logging account. This solution is repeatable and can be deployed multiple times for multiple accounts and logging requirements.