chore: remove monitors test from pr workflow (#695)
Conversation
Codecov Report

✅ All modified and coverable lines are covered by tests.

```
@@           Coverage Diff           @@
##             main     #695   +/-   ##
=======================================
  Coverage   77.69%   77.69%
=======================================
  Files          12       12
  Lines        1112     1112
  Branches      350      350
=======================================
  Hits          864      864
  Misses        118      118
  Partials      130      130
```
```yaml
env:
  DD_API_KEY: ${{ secrets.DD_API_KEY }}
  DD_APP_KEY: ${{ secrets.DD_APP_KEY }}
run: npm run test:integration
```
I'll be honest, I never really looked that closely into what these tests were doing, but I assumed they were deploying to AWS, then pulling the deployed template down and asserting that it is both deployable and matches the snapshot. In my opinion, just doing a snapshot test like this isn't all that useful; adding a real deploy step here would be worth doing if we have the credentials available.
It tests nothing in this repo -- those monitors are configured server-side by Datadog's backend team, so no PR here can affect whether this test passes or fails.
...
The test already has a dedicated home in monitor_api_integration_test.yml, which runs on a daily cron with a Slack alert on failure -- the right place for a canary that monitors external API availability.
I think I've added a monitor to this list before, but if we don't really own it, I'd go even further and say that we shouldn't be the ones trying to test it.
Since the plugin gets the recommended monitors from an API and then creates / updates / deletes them based on the response, I think including that in (actual) integration tests that run as part of a PR is still appropriate: if a PR breaks that behavior, we should not merge it, and right now we're missing that validation. Maybe not by asserting specific names the way the test works now, but by asserting that something was created.
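A hedged sketch of that kind of assertion in TypeScript: it checks that the plugin created at least one monitor, without depending on server-owned monitor names. The response shape and function name here are illustrative, not the plugin's actual API:

```typescript
// Hypothetical shape of a monitor record returned by the API;
// the field names are assumptions, not Datadog's real schema.
interface Monitor {
  id: number;
  name: string;
}

// Assert that *something* was created between two listings of the
// account's monitors, rather than pinning specific monitor names
// that are controlled server-side.
function assertMonitorsCreated(before: Monitor[], after: Monitor[]): void {
  const beforeIds = new Set(before.map((m) => m.id));
  const created = after.filter((m) => !beforeIds.has(m.id));
  if (created.length === 0) {
    throw new Error("expected the plugin to create at least one monitor");
  }
}
```

The "before" and "after" listings would come from the same monitors endpoint, captured around the plugin run, so the assertion survives any renaming of the recommended monitors.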
We also need a Datadog API / app key for the remote instrumentation tests; those are stored in Secrets Manager in the sandbox account and retrieved during the integration tests. I would recommend doing something similar if storing the keys in GitHub is a problem.
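A sketch of what that could look like as a workflow step, pulling the key from Secrets Manager at job time instead of from GitHub secrets. The secret name and JSON field are placeholders, and the step assumes the runner already has AWS credentials for the sandbox account:

```yaml
- name: Fetch Datadog keys from Secrets Manager
  run: |
    SECRET=$(aws secretsmanager get-secret-value \
      --secret-id sandbox/datadog-integration-test-keys \
      --query SecretString --output text)
    DD_API_KEY=$(echo "$SECRET" | jq -r '.DD_API_KEY')
    echo "::add-mask::$DD_API_KEY"
    echo "DD_API_KEY=$DD_API_KEY" >> "$GITHUB_ENV"
```

Because the key never lives in GitHub secrets, Dependabot's restricted secret scope stops being a factor for these tests.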

What does this PR do?
- Removes the monitors test from the PR workflow.
- Renames `integration_tests/` → `snapshot_tests/` (directory, script, workflow, and config references).

Motivation

Monitors test removal: This step hits `api.datadoghq.com/api/v2/monitor/recommended` and asserts that certain named recommended monitors exist. It tests nothing in this repo -- those monitors are configured server-side by Datadog's backend team, so no PR here can affect whether this test passes or fails.

The test was causing every Dependabot PR to fail on its first CI run because Dependabot PRs don't have access to regular repo secrets (`DD_API_KEY` / `DD_APP_KEY`), only the restricted Dependabot secret scope. The fix in #692 addressed a separate CloudFormation credentials issue but missed this one.

The test already has a dedicated home in `monitor_api_integration_test.yml`, which runs on a daily cron with a Slack alert on failure -- the right place for a canary that monitors external API availability.

Rename: The tests in `integration_tests/` run `sls package`, generate a CloudFormation template, and diff it against checked-in snapshots. They never deploy anything and require no real AWS connectivity. Calling them "integration tests" implies live AWS interaction is expected -- which is exactly why the credential failure looked like a bug rather than an obvious non-issue. Renaming to `snapshot_tests/` makes the nature of the tests self-evident.

Note: The `integrationTesting` / `testingMode` config flags in `src/env.ts`, `src/forwarder.ts`, and `src/index.ts` are a different concept (they bypass forwarder ARN validation) and are left untouched.

Testing Guidelines
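For reference, the core of such a snapshot test can be sketched in a few lines of TypeScript. This is a minimal illustration, assuming both the `sls package` output and the checked-in snapshot have been parsed from JSON:

```typescript
// Pure comparison of two parsed CloudFormation templates: no deploy,
// no AWS connectivity, which is what makes these snapshot tests.
// Serializing both sides gives a structural equality check
// (note: JSON.stringify is key-order sensitive).
function templatesMatch(generated: unknown, snapshot: unknown): boolean {
  return JSON.stringify(generated) === JSON.stringify(snapshot);
}
```

In the real tests, the generated side would be read from the `sls package` output on disk before comparing, but no step in the process needs credentials.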
CI should pass on Dependabot PRs without any manual intervention.
Additional Notes
Fixes the remaining flakiness on Dependabot PRs not addressed by #692.
Types of changes
Check all that apply