Merged
@@ -1,12 +1,12 @@
-name: integration-tests
+name: snapshot-tests
 
 on:
   push:
     branches: [main]
   pull_request:
 
 jobs:
-  integration-tests:
+  snapshot-tests:
     runs-on: ubuntu-latest
 
     steps:
@@ -35,10 +35,5 @@ jobs:
       env:
         AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
         AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
-      run: ./scripts/run_integration_tests.sh
+      run: ./scripts/run_snapshot_tests.sh
 
-    - name: Test getting recommended monitors
-      env:
-        DD_API_KEY: ${{ secrets.DD_API_KEY }}
-        DD_APP_KEY: ${{ secrets.DD_APP_KEY }}
-      run: npm run test:integration
I'll be honest, I never looked that closely into what these tests were doing, but I assumed they deployed to AWS, then pulled the deployed template down and asserted that it is both deployable and matches the snapshot. In my opinion, a snapshot test alone isn't all that useful, and adding a deploy step here is worth doing if we have the credentials available.

It tests nothing in this repo -- those monitors are configured server-side by Datadog's backend team, so no PR here can affect whether this test passes or fails.
...
The test already has a dedicated home in monitor_api_integration_test.yml, which runs on a daily cron with a Slack alert on failure -- the right place for a canary that monitors external API availability.

I think I've added a monitor to this list before, but if we don't really own it, I'd go even further and say that we shouldn't be the ones trying to test it.

Since the plugin does get the recommended monitors from an API, then creates / updates / deletes them based on the response, I think (actual) integration tests that run as part of a PR are still the appropriate place to exercise that behavior: if a PR breaks it, we should not merge it, and right now we're missing that validation. Maybe not by asserting specific names like the test does now, but by asserting that something was created.
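To make that concrete, a name-agnostic assertion could key off a tag rather than hard-coded monitor names. This is only a sketch: the `Monitor` shape, the `managed_by:serverless_plugin` tag value, and the `pluginCreatedMonitors` helper are all hypothetical, not the plugin's actual types or tags.

```typescript
// Hypothetical minimal shape of a monitor returned by the Datadog API.
interface Monitor {
  id: number;
  name: string;
  tags: string[];
}

// Instead of matching hard-coded names (which the backend team can change at
// any time), assert that at least one monitor carrying an identifying tag
// exists after the plugin run.
function pluginCreatedMonitors(monitors: Monitor[], managedTag: string): boolean {
  return monitors.some((m) => m.tags.includes(managedTag));
}

// Example: a fabricated API response checked after the plugin has run.
const afterRun: Monitor[] = [
  { id: 101, name: "High Error Rate", tags: ["managed_by:serverless_plugin"] },
  { id: 102, name: "Unrelated team monitor", tags: [] },
];

console.log(pluginCreatedMonitors(afterRun, "managed_by:serverless_plugin")); // true
```

The specific monitor names stay out of the assertion entirely, so a server-side rename no longer breaks CI.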

We also need a Datadog API / app key for the remote instrumentation tests; those are stored in Secrets Manager in the sandbox account and retrieved during the integration tests. I would recommend doing something similar if there is a problem storing the keys in GitHub.
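A workflow step along these lines could pull the key at runtime instead of reading a GitHub secret. This is a sketch, not the actual setup: the secret id `sandbox/dd-api-key` is a placeholder, and it assumes AWS credentials were already configured in an earlier step.

```yaml
- name: Fetch Datadog API key from Secrets Manager
  run: |
    DD_API_KEY=$(aws secretsmanager get-secret-value \
      --secret-id sandbox/dd-api-key \
      --query SecretString --output text)
    echo "::add-mask::$DD_API_KEY"                 # keep the key out of the job log
    echo "DD_API_KEY=$DD_API_KEY" >> "$GITHUB_ENV" # expose to later steps
```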

2 changes: 1 addition & 1 deletion jest.config.js
@@ -10,5 +10,5 @@ module.exports = {
   clearMocks: true,
   collectCoverageFrom: ["src/**/*.ts"],
   testRegex: "(src\\/).*(\\.spec\\.ts)$",
-  testPathIgnorePatterns: ["\\.snap$", "<rootDir>/node_modules/", "integration_tests/"],
+  testPathIgnorePatterns: ["\\.snap$", "<rootDir>/node_modules/", "snapshot_tests/"],
 };
2 changes: 1 addition & 1 deletion jest.integration.config.js
@@ -9,5 +9,5 @@ module.exports = {
   testEnvironment: "<rootDir>/testEnvironment.js",
   clearMocks: true,
   collectCoverageFrom: ["src/**/*.ts"],
-  testRegex: ["(integration_tests\\/).*(\\.spec\\.ts)$"],
+  testRegex: ["(snapshot_tests\\/).*(\\.spec\\.ts)$"],
 };
2 changes: 1 addition & 1 deletion package.json
@@ -10,7 +10,7 @@
     "prebuild": "./scripts/check_layers_json.sh",
     "build": "tsc",
     "test": "jest --config jest.config.js",
-    "test:integration": "jest ./integration_tests --config jest.integration.config.js",
+    "test:integration": "jest ./snapshot_tests --config jest.integration.config.js",
     "test:watch": "jest --watch",
     "coverage": "jest --coverage",
     "lint": "tslint --project tsconfig.json",
scripts/run_integration_tests.sh → scripts/run_snapshot_tests.sh
@@ -2,13 +2,13 @@
 
 # Usage - run commands from repo root:
 # To check if new changes to the plugin cause changes to any snapshots:
-# ./scripts/run_integration_tests.sh
+# ./scripts/run_snapshot_tests.sh
 # To regenerate snapshots:
-# UPDATE_SNAPSHOTS=true ./scripts/run_integration_tests.sh
+# UPDATE_SNAPSHOTS=true ./scripts/run_snapshot_tests.sh
 
 set -e
 
-# To add new tests create a new yml file in the 'integration_tests' directory, append it to the SERVERLESS_CONFIGS array as well as creating a name for the
+# To add new tests create a new yml file in the 'snapshot_tests' directory, append it to the SERVERLESS_CONFIGS array as well as creating a name for the
 # snapshots that will be compared in your test. Add those snapshot names to the TEST_SNAPSHOTS and CORRECT_SNAPSHOTS arrays.
 # Note: Each yml config, test, and correct snapshot file should be at the same index in their own array. e.g. All the files for the forwarder test are at index 0.
 # In order for this script to work correctly these arrays should have the same amount of elements.
@@ -25,7 +25,7 @@ if [[ "$root_dir" =~ .*"serverless-plugin-datadog/scripts".* ]]; then
     exit 1
 fi
 
-integration_tests_dir="$root_dir/integration_tests"
+snapshot_tests_dir="$root_dir/snapshot_tests"
 if [ "$UPDATE_SNAPSHOTS" = "true" ]; then
     echo "Overwriting snapshots in this execution"
 fi
@@ -36,7 +36,7 @@ yarn build
 echo "Serverless Framework version:"
 serverless --version
 
-cd $integration_tests_dir
+cd $snapshot_tests_dir
 RAW_CFN_TEMPLATE=".serverless/cloudformation-template-update-stack.json"
 for ((i = 0; i < ${#SERVERLESS_CONFIGS[@]}; i++)); do
     echo "Running 'sls package' with ${SERVERLESS_CONFIGS[i]}"
@@ -68,17 +68,17 @@ for ((i = 0; i < ${#SERVERLESS_CONFIGS[@]}; i++)); do
     set +e # Dont exit right away if there is a diff in snapshots
     cd ..
 
-    python $scripts_dir/compare_snapshots.py $integration_tests_dir/${TEST_SNAPSHOTS[i]} $integration_tests_dir/${CORRECT_SNAPSHOTS[i]}
+    python $scripts_dir/compare_snapshots.py $snapshot_tests_dir/${TEST_SNAPSHOTS[i]} $snapshot_tests_dir/${CORRECT_SNAPSHOTS[i]}
     return_code=$?
     set -e
     if [ $return_code -eq 0 ]; then
         echo "SUCCESS: There were no differences between the ${TEST_SNAPSHOTS[i]} and ${CORRECT_SNAPSHOTS[i]}"
     else
         echo "FAILURE: There were differences between the ${TEST_SNAPSHOTS[i]} and ${CORRECT_SNAPSHOTS[i]}. Review the diff output above."
-        echo "If you expected the ${TEST_SNAPSHOTS[i]} to be different generate new snapshots by running this command from a development branch on your local repository: 'UPDATE_SNAPSHOTS=true ./scripts/run_integration_tests.sh'"
+        echo "If you expected the ${TEST_SNAPSHOTS[i]} to be different generate new snapshots by running this command from a development branch on your local repository: 'UPDATE_SNAPSHOTS=true ./scripts/run_snapshot_tests.sh'"
         exit 1
     fi
-    cd $integration_tests_dir
+    cd $snapshot_tests_dir
     echo "===================================="
 done
 exit 0
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.