
Cloudformation stacks End to End tests - test suite structure #13

Open · wants to merge 9 commits into develop
Conversation

@ahegdeNR (Contributor) commented Dec 19, 2024

This PR adds the basic requirements and test suite structure to run e2e tests on CloudFormation stack template deployments. Refer to the test documentation here

  1. Build and package all the template files
  2. Deploy lambda-template.yaml with s3 trigger
  3. Deploy lambda-template.yaml with cloudwatch trigger
  4. Validate stack creation status
  5. Validate all resources created by stack are correct
  6. Delete stack
  7. Add all the above tests as part of CI/CD
  8. Create the IAM role required to run these tests in the logging integrations AWS account (AWSUnifiedLambda_E2ETest_Role)
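The deploy/validate/delete sequence in steps 2-6 can be sketched as below. This is a minimal sketch, not the PR's actual scripts: the function names mirror the test suite, but the bodies are stubs standing in for the real AWS CLI calls.

```shell
# Stubs standing in for the real deploy/validate/delete implementations,
# which call the AWS CLI against the test account.
deploy_stack() { echo "deploying stack for case: $1"; }
validate_stack_deployment_status() { echo "validating status for: $1"; }
validate_stack_resources() { echo "validating resources for: $1"; }
delete_stack() { echo "deleting stack for case: $1"; }

# Run one trigger test case end to end, stopping at the first failure.
run_trigger_case() {
  local case_name="$1"
  deploy_stack "$case_name" || return 1
  validate_stack_deployment_status "$case_name" || return 1
  validate_stack_resources "$case_name" || return 1
  delete_stack "$case_name"
}

run_trigger_case "s3-trigger-stack-1"
```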

@ahegdeNR changed the title from "End to End tests - PART 1" to "Cloudformation stacks End to End tests - test suite structure" Dec 31, 2024
@ahegdeNR requested review from maya-jha and hrai-nr December 31, 2024 11:26
@ahegdeNR marked this pull request as ready for review December 31, 2024 11:26
role-to-assume: ${{ secrets.AWS_E2E_ROLE }}
aws-region: us-east-1

- name: Run e2e tests
Contributor

Why are we running a separate workflow? We can include this in the PR workflow itself, just the "Run e2e tests" step.

Contributor Author

This will run as part of the merge workflow. Right now there are 2 separate workflows, one for lambda code deployment and one for pushing templates; this is a new workflow.

NEW_RELIC_LICENSE_KEY: ${{ secrets.NEW_RELIC_LICENSE_KEY }}
run: |
cd e2e-tests/
./build-templates.sh
@voorepreethi (Contributor) commented Jan 2, 2025

Since this involves a couple of script files, add a step or job to validate them. You can use something like shellcheck.
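The suggested lint step could look like the fragment below. This is a sketch, not part of this PR; it assumes `shellcheck` is available on the runner (it is preinstalled on GitHub-hosted Ubuntu runners).

```yaml
- name: Lint e2e scripts
  run: shellcheck e2e-tests/*.sh
```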

Contributor Author

If any of the scripts fail, the workflow step itself fails. All the required checks need to be added as part of the test files themselves; there's no additional check required here.

deploy_stack "$LAMBDA_TEMPLATE_BUILD_DIR/$LAMBDA_TEMPLATE" "$S3_TRIGGER_CASE" "$NEW_RELIC_LICENSE_KEY" "$NEW_RELIC_REGION" "$NEW_RELIC_ACCOUNT_ID" "false" "$S3_BUCKET_NAMES" "''" "''"
validate_stack_deployment_status "$S3_TRIGGER_CASE"
validate_stack_resources "$S3_TRIGGER_CASE" "$S3_BUCKET_NAME" "$S3_BUCKET_PREFIX"
delete_stack "$S3_TRIGGER_CASE"
Contributor

Should we delete the resources only on success of the stack, so that if there is any issue in the deployment the resources can be used to trace it back?

Contributor Author

Delete stack happens only if the stack is in the create-success state. In any other state the deletion fails, and the step in the workflow also fails.
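That gate can be sketched as a small predicate (the function name is an assumption; `CREATE_COMPLETE` is the CloudFormation status string for a successful create):

```shell
# Decide whether the test may delete the stack: only a stack that reached
# CREATE_COMPLETE is cleaned up; anything else is left for debugging.
should_delete_stack() {
  local status="$1"
  [ "$status" = "CREATE_COMPLETE" ]
}

should_delete_stack "CREATE_COMPLETE" && echo "deleting"
should_delete_stack "ROLLBACK_COMPLETE" || echo "keeping for traceback"
```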

BUILD_DIR="$BUILD_DIR_BASE/$BASE_NAME"

sam build -u --template-file "../$TEMPLATE_FILE" --build-dir "$BUILD_DIR"
sam package --s3-bucket "$S3_BUCKET" --template-file "$BUILD_DIR/template.yaml" --output-template-file "$BUILD_DIR/$TEMPLATE_FILE"
Contributor

As part of cleaning up instances, also delete the build file from the S3 bucket, which gets uploaded as part of this command.
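The suggested cleanup could be sketched like this; bucket and prefix names are assumptions, and the `echo` is a dry run standing in for the real `aws s3 rm` call.

```shell
# Remove the packaged template artifacts from the S3 bucket after the run.
cleanup_packaged_artifacts() {
  local bucket="$1" prefix="$2"
  # Real cleanup would run: aws s3 rm "s3://$bucket/$prefix" --recursive
  echo "aws s3 rm s3://$bucket/$prefix --recursive"
}

cleanup_packaged_artifacts "unified-lambda-serverless-1" "builds"
```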

@ahegdeNR changed the base branch from main to development-2025 January 3, 2025 06:22
@ahegdeNR changed the base branch from development-2025 to main January 3, 2025 06:30
@ahegdeNR changed the base branch from main to develop January 3, 2025 06:31
on:
pull_request:
branches:
- main
Contributor

We would like to run it before merging to main, not with every pull request. Do we have to do something like what we do for Fluent Bit, e.g. having a prerelease stage to run e2e tests: https://github.com/newrelic/fluent-bit-package/blob/main/.github/workflows/prerelease.yml
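One option matching this suggestion (an assumption, not part of this PR) is to trigger the e2e workflow on pushes to a release branch instead of on every pull request:

```yaml
on:
  push:
    branches:
      - main   # or a dedicated prerelease branch, as in fluent-bit-package
```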

Comment on lines +15 to +25
--template-file "$template_file" \
--stack-name "$stack_name" \
--parameter-overrides \
LicenseKey="$license_key" \
NewRelicRegion="$new_relic_region" \
NewRelicAccountId="$new_relic_account_id" \
StoreNRLicenseKeyInSecretManager="$secret_license_key" \
S3BucketNames="$s3_bucket_names" \
LogGroupConfig="$log_group_config" \
CommonAttributes="$common_attributes" \
--capabilities CAPABILITY_IAM
Contributor

Each template expects a different set of params. How are you going to use a generic deploy method for all templates?
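One possible answer (an assumption, not necessarily the PR's design): keep the wrapper generic by passing template-specific parameters as `Key=Value` pairs and forwarding them verbatim.

```shell
# Generic deploy wrapper: fixed flags live here; template-specific
# parameters are forwarded verbatim as Key=Value overrides.
deploy_stack_generic() {
  local template_file="$1" stack_name="$2"
  shift 2
  # echo is a dry run standing in for the real `aws cloudformation deploy`.
  echo aws cloudformation deploy \
    --template-file "$template_file" \
    --stack-name "$stack_name" \
    --parameter-overrides "$@" \
    --capabilities CAPABILITY_IAM
}

deploy_stack_generic lambda-template.yaml s3-trigger-stack-1 \
  LicenseKey=abc S3BucketNames=my-bucket
```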

# test case constants
CLOUDWATCH_TRIGGER_CASE=cloudwatch-trigger-stack-1

validate_stack_resources() {
Contributor

Use a more specific name, maybe validate_lambda_subscription_created.

Comment on lines +14 to +23
lambda_physical_id=$(aws cloudformation describe-stack-resources \
--stack-name "$stack_name" \
--logical-resource-id "$LAMBDA_LOGICAL_RESOURCE_ID" \
--query "StackResources[0].PhysicalResourceId" \
--output text
)
lambda_function_arn=$(aws lambda get-function --function-name "$lambda_physical_id" \
--query "Configuration.FunctionArn" \
--output text
)
Contributor

You can refactor this and put it in a common script file.
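The shared helper could be sketched like this (the file name `common.sh`, the function name, and the logical id are assumptions); both trigger scripts would then source it instead of duplicating the lookup:

```shell
# common.sh (hypothetical): resolve a Lambda function ARN from a stack's
# logical resource id, so both trigger test scripts can share the lookup.
get_lambda_function_arn() {
  local stack_name="$1" logical_id="$2"
  local physical_id
  physical_id=$(aws cloudformation describe-stack-resources \
    --stack-name "$stack_name" \
    --logical-resource-id "$logical_id" \
    --query "StackResources[0].PhysicalResourceId" \
    --output text)
  aws lambda get-function --function-name "$physical_id" \
    --query "Configuration.FunctionArn" \
    --output text
}
```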

@@ -0,0 +1,19 @@
# test resources
S3_BUCKET=unified-lambda-serverless-1
Contributor

We should explicitly use a name which makes it obvious that it's a test bucket, maybe: unified-lambda-e2e-test-templates

source config-file.cfg

# test case constants
S3_TRIGGER_CASE=s3-trigger-stack-1
Contributor

We can use e2e in the stack name to make it clear that it's related to e2e.

Comment on lines +41 to +42
./lambda-cloudwatch-trigger.sh
./lambda-s3-trigger.sh
Contributor

Can we run these tests in parallel?
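They could be, e.g. with background jobs; a sketch (the helper name is an assumption) that runs each command in parallel and fails if any of them fails:

```shell
# Run each command in the background and wait; return non-zero if any fails.
run_in_parallel() {
  local pids=() rc=0 cmd pid
  for cmd in "$@"; do
    bash -c "$cmd" &
    pids+=("$!")
  done
  for pid in "${pids[@]}"; do
    wait "$pid" || rc=1
  done
  return $rc
}

# In the workflow this would be:
# run_in_parallel ./lambda-cloudwatch-trigger.sh ./lambda-s3-trigger.sh
run_in_parallel "true" "true" && echo "all passed"
```

Note that parallel runs require the two stacks to use distinct stack names and resources, which the per-case constants (`s3-trigger-stack-1`, `cloudwatch-trigger-stack-1`) already provide.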
