Merge pull request #24 from awslabs/1.3.1
1.3.1
hackersifu authored Jul 22, 2021
2 parents 6927cf6 + 351e1af commit 8b2733d
Showing 4 changed files with 106 additions and 54 deletions.
12 changes: 11 additions & 1 deletion CHANGELOG.md
@@ -114,4 +114,14 @@
* Updated Cleanup section to reflect new cleanup capabilities.
* Updated IAM Permissions examples within the README.
* AWS CloudFormation template for deploying IAM Permissions to run cleanup code.
* Header in files to reflect "Assisted Log Enabler for AWS", instead of "Assisted Log Enabler (ALE)".

## [1.3.1] - 2021-07-22

### Added
* Randomization to the end of the Amazon S3 bucket name in both single and multi account modes.
* Instructions for deploying the AWS CloudFormation Stack individually, within the AWS Organizations root account for multi-account deployment.
* Link for the AWS Security Analytics Bootstrap within the README.

### Changed
* Feedback section within README to contain link to Issues section.
31 changes: 23 additions & 8 deletions README.md
@@ -205,9 +205,9 @@ python3 assisted_log_enabler.py --mode single_account --cloudtrail
* Ensure that the AWS Account you're in is the account where you want to store the logs. Additionally, ensure that the AWS account you're in has access to the AWS Organizations information within your AWS environment.
* You may have to register your AWS account as a delegated administrator within AWS CloudFormation, in order to run this code in an AWS account of your choosing. Please see the following link for more details: [Register a delegated administrator](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-orgs-delegated-admin.html)
2. Within the AWS Console, go to AWS CloudFormation.
3. Within AWS CloudFormation, go to StackSets.
3. To deploy the IAM Permissions within all child accounts: Within AWS CloudFormation, go to StackSets.
4. Within the StackSets screen, select Create StackSet.
5. In Step 1, under Specify Template, selecte Upload a template file, and use the AWS CloudFormation template provided in the permissions folder. [Link to the file](https://github.com/awslabs/assisted-log-enabler-for-aws/blob/main/permissions/ALE_child_account_role.yaml)
5. In Step 1, under Specify Template, select Upload a template file, and use the AWS CloudFormation template provided in the permissions folder. [Link to the file](https://github.com/awslabs/assisted-log-enabler-for-aws/blob/main/permissions/ALE_child_account_role.yaml)
6. In Step 2, under StackSet Name, add a descriptive name.
7. In Step 2, under Parameters, add the parameters required:
* AssistedLogEnablerPolicyName: You can leave this default, but you can also change it if desired.
@@ -218,23 +218,35 @@ python3 assisted_log_enabler.py --mode single_account --cloudtrail
9. In Step 4, under Deployment targets, select the option that fits for your AWS Organization.
* If you Deploy to Organization, it will deploy to all AWS accounts except the root AWS account. If you want to include that one, you can either deploy the template to the root AWS account directly, or use the other option (details below).
* If you Deploy to organizational units (OUs), you can deploy directly to OUs that you define, including the root OU.
10. In Step 4, under Specify Regions, select US East (N.Virginia)
10. In Step 4, under Specify Regions, select US East (N.Virginia).
* There's no need to select multiple regions here. This template only deploys AWS IAM resources, which are Global.
11. In Step 4, under Deployment options, leave the default settings.
12. In Step 5, review the settings you've set in the previous steps. If all is correct, check the box that states "I acknowledge that AWS CloudFormation might create IAM resources with custom names."
* Once this is submitted, you'll need to wait until the StackSet is fully deployed. If there are errors, please examine the error and ensure that all the information from the above steps is correct.
13. Once the StackSet is successfully deployed, click on the icon for AWS Cloudshell next to the search bar.
13. To deploy the IAM Permissions within the AWS Account where Assisted Log Enabler for AWS is being run: Within AWS CloudFormation, go to Stacks.
14. Within the Stacks screen, go to the Create Stack dropdown, and select With new resources.
15. In Step 1, select Upload a template file, select Choose File, and use the AWS CloudFormation template provided in the permissions folder. [Link to the file](https://github.com/awslabs/assisted-log-enabler-for-aws/blob/main/permissions/ALE_child_account_role.yaml)
16. In Step 2, under Stack Name, add a descriptive name.
17. In Step 2, under Parameters, add the parameters required:
* AssistedLogEnablerPolicyName: You can leave this default, but you can also change it if desired.
* OrgId: Provide the AWS Organization ID.
* SourceAccountNumber: Provide the source AWS account number in which Assisted Log Enabler for AWS will be running.
18. In Step 3, add any tags that you desire, as well as any permissions options that you want to select.
* The service-managed permissions work just fine for Assisted Log Enabler for AWS, but you can use self-service permissions if desired.
19. In Step 5, review the settings you've set in the previous steps. If all is correct, check the box that states "I acknowledge that AWS CloudFormation might create IAM resources with custom names."
* Once this is submitted, you'll need to wait until the Stack is fully deployed. If there are errors, please examine the error and ensure that all the information from the above steps is correct.
20. Once both the StackSet and Stack are successfully deployed, click on the icon for AWS Cloudshell next to the search bar.
* Ensure that you're in a region where AWS CloudShell is currently available.
14. Once the session begins, download the Assisted Log Enabler within the AWS CloudShell session.
21. Once the session begins, download the Assisted Log Enabler within the AWS CloudShell session.
```
git clone https://github.com/awslabs/assisted-log-enabler-for-aws.git
```
15. Unzip the file, and change the directory to the unzipped folder:
22. Unzip the file, and change the directory to the unzipped folder:
```
unzip assisted-log-enabler-for-aws-main.zip
cd assisted-log-enabler-for-aws-main
```
16. Run the following command to run the Assisted Log Enabler in multi account mode, for the AWS service or services you want to check for:
23. Run the following command to run the Assisted Log Enabler in multi account mode, for the AWS service or services you want to check for:
```
# For all services:
python3 assisted_log_enabler.py --mode multi_account --all
@@ -281,6 +293,9 @@ NEW! A cleanup mode is available within the Assisted Log Enabler for AWS (curren
python3 assisted_log_enabler.py --mode cleanup --single_r53querylogs
```
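
Returning to the IAM prerequisite from the multi-account steps above: the Stack portion (the template deployed into the account where Assisted Log Enabler for AWS runs) can also be created programmatically instead of through the console. The following is a hedged boto3 sketch, not an official part of the tool; the stack name and parameter values are placeholders, it assumes the command is run from the root of the cloned repository so that the `permissions/ALE_child_account_role.yaml` path resolves, and the child-account StackSet still needs to be deployed as described above.
```
import boto3

cloudformation = boto3.client("cloudformation")

# Read the template from the permissions folder of the cloned repository.
with open("permissions/ALE_child_account_role.yaml") as template_file:
    template_body = template_file.read()

# Placeholder values: substitute your AWS Organization ID and the account number
# from which Assisted Log Enabler for AWS will be run.
cloudformation.create_stack(
    StackName="assisted-log-enabler-permissions",
    TemplateBody=template_body,
    Parameters=[
        {"ParameterKey": "OrgId", "ParameterValue": "o-exampleorgid"},
        {"ParameterKey": "SourceAccountNumber", "ParameterValue": "111122223333"},
    ],
    # Equivalent to checking the "I acknowledge that AWS CloudFormation might
    # create IAM resources with custom names" box in the console.
    Capabilities=["CAPABILITY_NAMED_IAM"],
)
```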

## Additional Tools
For analyzing logs created by Assisted Log Enabler for AWS, consider taking a look at the AWS Security Analytics Bootstrap, a tool that provides an Amazon Athena analysis environment that's quick to deploy, ready to use, and easy to maintain. [Link](https://github.com/awslabs/aws-security-analytics-bootstrap)
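
Before standing up a full Athena environment, a quick sanity check is to confirm that objects are landing under the key prefixes the tool creates (`cloudtrail/`, `vpcflowlogs/`, and `r53querylogs/`). The following is a hedged boto3 sketch, not part of the tool itself; substitute the randomized bucket name reported by Assisted Log Enabler for AWS for the placeholder value:
```
import boto3

# Placeholder: substitute the bucket created by Assisted Log Enabler for AWS,
# e.g. "aws-log-collection-<account-number>-<region>-<random-suffix>".
BUCKET = "aws-log-collection-111122223333-us-east-1-k3x9p2"

s3 = boto3.client("s3")

# The tool writes logs under these three prefixes (see subfunctions/ALE_multi_account.py).
for prefix in ["cloudtrail/", "vpcflowlogs/", "r53querylogs/"]:
    response = s3.list_objects_v2(Bucket=BUCKET, Prefix=prefix, MaxKeys=5)
    print(prefix, "-", response.get("KeyCount", 0), "object(s) found (showing up to 5)")
    for obj in response.get("Contents", []):
        print("  ", obj["Key"])
```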


## Costs
For answers to cost-related questions involved with this solution, refer to the following links:
@@ -292,7 +307,7 @@ For answers to cost-related questions involved with this solution, refer to the


## Feedback
Please use the Issues section to submit any feedback, such as features or recommendations, as well as any bugs that are encountered.
Please use the [Issues](https://github.com/awslabs/assisted-log-enabler-for-aws/issues) section to submit any feedback, such as features or recommendations, as well as any bugs that are encountered.


## Security
55 changes: 34 additions & 21 deletions subfunctions/ALE_multi_account.py
@@ -12,6 +12,8 @@
import datetime
import argparse
import csv
import string
import random
from botocore.exceptions import ClientError
from datetime import timezone

@@ -32,6 +34,14 @@
region_list = ['af-south-1', 'ap-east-1', 'ap-south-1', 'ap-northeast-1', 'ap-northeast-2', 'ap-northeast-3', 'ap-southeast-1', 'ap-southeast-2', 'ca-central-1', 'eu-central-1', 'eu-west-1', 'eu-west-2', 'eu-west-3', 'eu-north-1', 'eu-south-1', 'me-south-1', 'sa-east-1', 'us-east-1', 'us-east-2', 'us-west-1', 'us-west-2']
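
# Editor's illustrative sketch (not part of this commit): the hard-coded region list
# above has to be updated by hand as new AWS regions launch. A hedged alternative is
# to build the list at runtime from the regions that are actually enabled for the
# account running the script:
def sketch_dynamic_region_list():
    """Illustrative only: return the regions enabled for the current account."""
    import boto3  # boto3 is already used throughout this module
    ec2_region_client = boto3.client('ec2', region_name='us-east-1')
    described = ec2_region_client.describe_regions(AllRegions=False)
    return [aws_region['RegionName'] for aws_region in described['Regions']]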


# 0. Define random string for S3 Bucket Name
def random_string_generator():
lower_letters = string.ascii_lowercase
numbers = string.digits
unique_end = (''.join(random.choice(lower_letters + numbers) for char in range(6)))
return unique_end
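
# Editor's note (illustrative, not part of this commit): the six-character suffix
# produced above is drawn from 26 lowercase letters plus 10 digits, so there are
# 36**6 (about 2.18 billion) possible values, which makes bucket-name collisions
# unlikely while keeping the overall name within the 63-character S3 limit.
# Example of how the suffix feeds the bucket names built later in this file
# (account number, region, and suffix shown are placeholders):
#
#   "aws-log-collection-" + "111122223333" + "-" + "us-east-1" + "-" + "k3x9p2"
#   # -> "aws-log-collection-111122223333-us-east-1-k3x9p2"  (48 characters)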


# 1. Obtain the AWS Accounts inside of AWS Organizations
def org_account_grab():
"""Function to list accounts inside of AWS Organizations"""
@@ -59,26 +69,26 @@ def get_account_number():


# 3. Create a Bucket and Lifecycle Policy
def create_bucket(organization_id, account_number):
def create_bucket(organization_id, account_number, unique_end):
"""Function to create the bucket for storing logs"""
try:
logging.info("Creating bucket in %s" % account_number)
logging.info("CreateBucket API Call")
if region == 'us-east-1':
logging_bucket_dict = s3.create_bucket(
Bucket="aws-log-collection-" + account_number + "-" + region
Bucket="aws-log-collection-" + account_number + "-" + region + "-" + unique_end
)
else:
logging_bucket_dict = s3.create_bucket(
Bucket="aws-log-collection-" + account_number + "-" + region,
Bucket="aws-log-collection-" + account_number + "-" + region + "-" + unique_end,
CreateBucketConfiguration={
'LocationConstraint': region
}
)
logging.info("Bucket Created.")
logging.info("Setting lifecycle policy.")
lifecycle_policy = s3.put_bucket_lifecycle_configuration(
Bucket="aws-log-collection-" + account_number + "-" + region,
Bucket="aws-log-collection-" + account_number + "-" + region + "-" + unique_end,
LifecycleConfiguration={
'Rules': [
{
@@ -100,22 +110,22 @@ def create_bucket(organization_id, account_number):
)
logging.info("Lifecycle Policy successfully set.")
create_ct_path = s3.put_object(
Bucket="aws-log-collection-" + account_number + "-" + region,
Bucket="aws-log-collection-" + account_number + "-" + region + "-" + unique_end,
Key='cloudtrail/AWSLogs/' + account_number + '/')
create_ct_path_vpc = s3.put_object(
Bucket="aws-log-collection-" + account_number + "-" + region,
Bucket="aws-log-collection-" + account_number + "-" + region + "-" + unique_end,
Key='vpcflowlogs/')
create_ct_path_r53 = s3.put_object(
Bucket="aws-log-collection-" + account_number + "-" + region,
Bucket="aws-log-collection-" + account_number + "-" + region + "-" + unique_end,
Key='r53querylogs/')
bucket_policy = s3.put_bucket_policy(
Bucket="aws-log-collection-" + account_number + "-" + region,
Policy='{"Version": "2012-10-17", "Statement": [{"Sid": "AWSCloudTrailAclCheck20150319","Effect": "Allow","Principal": {"Service": "cloudtrail.amazonaws.com"},"Action": "s3:GetBucketAcl","Resource": "arn:aws:s3:::aws-log-collection-' + account_number + '-' + region + '"},{"Sid": "AWSCloudTrailWrite20150319","Effect": "Allow","Principal": {"Service": "cloudtrail.amazonaws.com"},"Action": "s3:PutObject","Resource": "arn:aws:s3:::aws-log-collection-' + account_number + '-' + region + '/cloudtrail/AWSLogs/' + account_number + '/*","Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}}},{"Sid": "AWSLogDeliveryAclCheck","Effect": "Allow","Principal": {"Service": "delivery.logs.amazonaws.com"},"Action": "s3:GetBucketAcl","Resource": "arn:aws:s3:::aws-log-collection-' + account_number + '-' + region + '"},{"Sid": "AWSLogDeliveryWriteVPC","Effect": "Allow","Principal": {"Service": "delivery.logs.amazonaws.com"},"Action": "s3:PutObject","Resource": "arn:aws:s3:::aws-log-collection-' + account_number + '-' + region + '/vpcflowlogs/*","Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}}},{"Sid": "AWSLogDeliveryWriteR53","Effect": "Allow","Principal": {"Service": "delivery.logs.amazonaws.com"},"Action": "s3:PutObject","Resource": "arn:aws:s3:::aws-log-collection-' + account_number + '-' + region + '/r53querylogs/*","Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}}}]}'
Bucket="aws-log-collection-" + account_number + "-" + region + "-" + unique_end,
Policy='{"Version": "2012-10-17", "Statement": [{"Sid": "AWSCloudTrailAclCheck20150319","Effect": "Allow","Principal": {"Service": "cloudtrail.amazonaws.com"},"Action": "s3:GetBucketAcl","Resource": "arn:aws:s3:::aws-log-collection-' + account_number + '-' + region + '-' + unique_end + '"},{"Sid": "AWSCloudTrailWrite20150319","Effect": "Allow","Principal": {"Service": "cloudtrail.amazonaws.com"},"Action": "s3:PutObject","Resource": "arn:aws:s3:::aws-log-collection-' + account_number + '-' + region + '-' + unique_end + '/cloudtrail/AWSLogs/' + account_number + '/*","Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}}},{"Sid": "AWSLogDeliveryAclCheck","Effect": "Allow","Principal": {"Service": "delivery.logs.amazonaws.com"},"Action": "s3:GetBucketAcl","Resource": "arn:aws:s3:::aws-log-collection-' + account_number + '-' + region + '-' + unique_end + '"},{"Sid": "AWSLogDeliveryWriteVPC","Effect": "Allow","Principal": {"Service": "delivery.logs.amazonaws.com"},"Action": "s3:PutObject","Resource": "arn:aws:s3:::aws-log-collection-' + account_number + '-' + region + '-' + unique_end + '/vpcflowlogs/*","Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}}},{"Sid": "AWSLogDeliveryWriteR53","Effect": "Allow","Principal": {"Service": "delivery.logs.amazonaws.com"},"Action": "s3:PutObject","Resource": "arn:aws:s3:::aws-log-collection-' + account_number + '-' + region + '-' + unique_end + '/r53querylogs/*","Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}}}]}'
)
logging.info("Setting the S3 bucket Public Access to Blocked")
logging.info("PutPublicAccessBlock API Call")
bucket_private = s3.put_public_access_block(
Bucket="aws-log-collection-" + account_number + "-" + region,
Bucket="aws-log-collection-" + account_number + "-" + region + "-" + unique_end,
PublicAccessBlockConfiguration={
'BlockPublicAcls': True,
'IgnorePublicAcls': True,
Expand All @@ -129,7 +139,7 @@ def create_bucket(organization_id, account_number):
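# Editor's illustrative sketch (not part of this commit): the single-line JSON string
# passed to put_bucket_policy above is easier to review when assembled from a dict.
# The helper below is a hedged sketch of the same structure; only the two CloudTrail
# statements are spelled out, and the three delivery.logs.amazonaws.com statements
# (AWSLogDeliveryAclCheck, AWSLogDeliveryWriteVPC, AWSLogDeliveryWriteR53) follow the
# same pattern for the vpcflowlogs/ and r53querylogs/ prefixes.
def sketch_bucket_policy(account_number, region, unique_end):
    """Illustrative only: rebuild part of the bucket policy string used above."""
    import json  # standard library; not among the imports shown in this diff
    bucket_arn = "arn:aws:s3:::aws-log-collection-" + account_number + "-" + region + "-" + unique_end
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                # CloudTrail must be able to read the bucket ACL...
                "Sid": "AWSCloudTrailAclCheck20150319",
                "Effect": "Allow",
                "Principal": {"Service": "cloudtrail.amazonaws.com"},
                "Action": "s3:GetBucketAcl",
                "Resource": bucket_arn
            },
            {
                # ...and write log objects under the cloudtrail/AWSLogs/<account>/ prefix.
                "Sid": "AWSCloudTrailWrite20150319",
                "Effect": "Allow",
                "Principal": {"Service": "cloudtrail.amazonaws.com"},
                "Action": "s3:PutObject",
                "Resource": bucket_arn + "/cloudtrail/AWSLogs/" + account_number + "/*",
                "Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}}
            }
            # Remaining three statements omitted for brevity (see the Policy string above).
        ]
    }
    return json.dumps(policy)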


# 4. Find VPCs and turn flow logs on if not on already.
def flow_log_activator(account_number, OrgAccountIdList, region_list):
def flow_log_activator(account_number, OrgAccountIdList, region_list, unique_end):
"""Function to define the list of VPCs without logging turned on"""
logging.info("Creating a list of VPCs without Flow Logs on.")
for org_account in OrgAccountIdList:
@@ -176,7 +186,7 @@ def flow_log_activator(account_number, OrgAccountIdList, region_list):
ResourceType='VPC',
TrafficType='ALL',
LogDestinationType='s3',
LogDestination='arn:aws:s3:::aws-log-collection-' + account_number + '-' + region + '/vpcflowlogs',
LogDestination='arn:aws:s3:::aws-log-collection-' + account_number + '-' + region + '-' + unique_end + '/vpcflowlogs',
LogFormat='${version} ${account-id} ${interface-id} ${srcaddr} ${dstaddr} ${srcport} ${dstport} ${protocol} ${packets} ${bytes} ${start} ${end} ${action} ${log-status} ${vpc-id} ${type} ${tcp-flags} ${subnet-id} ${sublocation-type} ${sublocation-id} ${region} ${pkt-srcaddr} ${pkt-dstaddr} ${instance-id} ${az-id} ${pkt-src-aws-service} ${pkt-dst-aws-service} ${flow-direction} ${traffic-path}'
)
logging.info("VPC Flow Logs are turned on for account " + org_account + ".")
Expand Down Expand Up @@ -245,7 +255,7 @@ def eks_logging(region_list, OrgAccountIdList):


# 6. Turn on Route 53 Query Logging.
def route_53_query_logs(region_list, account_number, OrgAccountIdList):
def route_53_query_logs(region_list, account_number, OrgAccountIdList, unique_end):
"""Function to turn on Route 53 Query Logs for VPCs"""
for org_account in OrgAccountIdList:
for aws_region in region_list:
@@ -294,7 +304,7 @@ def route_53_query_logs(region_list, account_number, OrgAccountIdList):
logging.info("CreateResolverQueryLogConfig API Call")
create_query_log = route53resolver_ma.create_resolver_query_log_config(
Name='Assisted_Log_Enabler_Query_Logs_' + aws_region,
DestinationArn='arn:aws:s3:::aws-log-collection-' + account_number + '-' + region + '/r53querylogs',
DestinationArn='arn:aws:s3:::aws-log-collection-' + account_number + '-' + region + '-' + unique_end + '/r53querylogs',
CreatorRequestId=timestamp_date_string,
Tags=[
{
@@ -325,30 +335,33 @@ def run_eks():

def run_vpc_flow_logs():
"""Function that runs the defined VPC Flow Log logging code"""
unique_end = random_string_generator()
account_number = get_account_number()
OrgAccountIdList, organization_id = org_account_grab()
create_bucket(organization_id, account_number)
flow_log_activator(account_number, OrgAccountIdList, region_list)
create_bucket(organization_id, account_number, unique_end)
flow_log_activator(account_number, OrgAccountIdList, region_list, unique_end)
logging.info("This is the end of the script. Please feel free to validate that logs have been turned on.")


def run_r53_query_logs():
"""Function that runs the defined R53 Query Logging code"""
unique_end = random_string_generator()
account_number = get_account_number()
OrgAccountIdList, organization_id = org_account_grab()
create_bucket(organization_id, account_number)
route_53_query_logs(region_list, account_number, OrgAccountIdList)
create_bucket(organization_id, account_number, unique_end)
route_53_query_logs(region_list, account_number, OrgAccountIdList, unique_end)
logging.info("This is the end of the script. Please feel free to validate that logs have been turned on.")


def lambda_handler(event, context):
"""Function that runs all of the previously defined functions"""
unique_end = random_string_generator()
account_number = get_account_number()
OrgAccountIdList, organization_id = org_account_grab()
create_bucket(organization_id, account_number)
flow_log_activator(account_number, OrgAccountIdList, region_list)
create_bucket(organization_id, account_number, unique_end)
flow_log_activator(account_number, OrgAccountIdList, region_list, unique_end)
eks_logging(region_list, OrgAccountIdList)
route_53_query_logs(region_list, account_number, OrgAccountIdList)
route_53_query_logs(region_list, account_number, OrgAccountIdList, unique_end)
logging.info("This is the end of the script. Please feel free to validate that logs have been turned on.")

