Update #4
Open
hanalytics-data-service wants to merge 803 commits into hanalytics-data-service:master from
Conversation
…lert-filters use sets for alert filters
…and_schema
- Replace double quotes with single quotes for proper SQL compatibility
- Fixes SQL errors in Slack alert messages when database/schema names contain quotes
- Added conditional check to handle None values safely
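The quoting fix described above can be sketched as follows. This is a hypothetical helper (the function name and its call sites in the project may differ): it renders a database/schema name as a single-quoted SQL string literal, escaping embedded single quotes by doubling them, and handles None explicitly instead of letting it crash or render as the string "None".

```python
def quote_sql_identifier(name):
    """Render a database/schema name for embedding in generated SQL.

    Hypothetical sketch of the fix described above, not the project's
    actual function:
    - None becomes the SQL keyword NULL (the conditional None check)
    - embedded single quotes are escaped by doubling, per standard SQL
    """
    if name is None:
        return "NULL"
    escaped = str(name).replace("'", "''")  # '' is an escaped ' in SQL
    return f"'{escaped}'"
```

With single quotes and doubling, a name like `o'brien_schema` renders as `'o''brien_schema'` instead of producing a broken statement.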
* no table view for dimension test
* lint
* update dbt package version
* release v0.19.5
* update package version
* new bundle

Co-authored-by: GitHub Actions <noreply@github.com>
Co-authored-by: Noy Arie <noy@elementary-data.com>
…e-quotes-with-single-quotes
…d-models-usage removed usage of deprecated `-m` flag in dbt
…e-quotes-with-single-quotes
…group-when-tracking-is-disabled disable group registration when tracking is disabled
update dbt package revision
…clickhouse Handle empty result in clickhouse
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com> Co-authored-by: Itamar Hartstein <haritamar@gmail.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…rsion security: pin lxml >=6.1.0 to fix CVE-2026-41066
…pp-1006-test-warehouseall-warehouses
…all-warehouses Harden warehouse testing
* ci: switch test-warehouse AWS auth to OIDC role (CORE-687)

  Replace the static AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY pair (used for the S3 report upload) and the per-profile athena access keys with short-lived credentials assumed via GitHub OIDC.
  - Add job-level permissions: id-token: write
  - Add aws-actions/configure-aws-credentials@v4 step that assumes the role from secrets.AWS_OIDC_ROLE_ARN in eu-west-1
  - Drop the AWS_*_KEY env vars and the --aws-access-key-id / --aws-secret-access-key flags from the send-report step (boto3 picks up role creds from the env vars exported by configure-aws-credentials)
  - Drop aws_access_key_id / aws_secret_access_key from the dbt-athena profile (boto3 default credential chain) and pin work_group: oss_tests

  Requires the matching elementary-internal change (Terraform OIDC provider + role + oss_tests Athena workgroup) to be applied first, plus adding AWS_OIDC_ROLE_ARN as a repo secret.

* ci: add OIDC role to cleanup-stale-schemas for athena (CORE-687)

  cleanup-stale-schemas.yml runs the drop_stale_ci_schemas macro against athena (among other warehouses) on a daily cron. It currently relies on ATHENA_AWS_*_KEY values inside CI_WAREHOUSE_SECRETS; switch the athena matrix entry to assume the same OIDC role used by test-warehouse.yml. The aws-actions/configure-aws-credentials step is gated on the athena matrix entry so the other warehouses (snowflake, bigquery, etc.) skip the role assumption. Pairs with the dbt-data-reliability profile change that drops the static aws_access_key_id / aws_secret_access_key from the athena profile rendered by integration_tests/profiles/profiles.yml.j2.

* ci: grant id-token: write in reusable-workflow callers (CORE-687)

  test-all-warehouses.yml and test-release.yml call test-warehouse.yml as a reusable workflow. Per GitHub, id-token: write must be granted by the calling workflow; declaring it only on the called workflow's job is not sufficient. Add the permission to the relevant caller jobs.

* ci: stop hardcoding 'snapshots' target_schema in failed_snapshot fixture (CORE-687)

  failed_snapshot.sql is a parse-time fixture (intentionally invalid SQL) that's never actually run by CI. The hardcoded target_schema='snapshots' was forcing a fixed Glue database name and breaking the per-run schema isolation pattern. Use target.schema instead so it lands in the same ephemeral schema as the rest of the run.

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
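The OIDC pattern in the commits above can be sketched as a workflow fragment. This is an illustrative sketch, not the repository's actual workflow file: the secret name (AWS_OIDC_ROLE_ARN), region, and job layout follow the commit messages, but details may differ.

```yaml
# Hypothetical sketch of the OIDC auth described above.
permissions:
  id-token: write   # required for GitHub to mint an OIDC token for the job
  contents: read

jobs:
  test-warehouse:
    runs-on: ubuntu-latest
    steps:
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_OIDC_ROLE_ARN }}
          aws-region: eu-west-1
      # Later steps that use boto3 pick up the short-lived credentials from
      # the AWS_* env vars exported by configure-aws-credentials, so no
      # static access keys are passed as flags or profile fields.
```

Note that when a workflow like this is called as a reusable workflow, the caller must also grant `id-token: write`, which is exactly what the third commit above adds.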
…pp-1009-remind-docs
App 1009 remind docs
actions followups
…tions-to-version-instead-of-hash change to use pinned version for trusted actions
* CORE-742: switch Snowflake CI auth from password to private key

  The 'snowflake' target in profiles.yml.j2 now uses key-pair authentication (private_key, with optional private_key_passphrase) instead of password. generate_profiles.py's _yaml_inline filter is updated to safely render multi-line strings (e.g. PEM private keys) as a double-quoted YAML scalar with escaped newlines, so the rendered profiles.yml stays parseable inline.

  The CI_WAREHOUSE_SECRETS GitHub Actions secret needs a coordinated update:
  - remove key 'snowflake_password'
  - add key 'snowflake_private_key' (PEM or base64-encoded DER)
  - optional 'snowflake_private_key_passphrase' for encrypted keys

  The CI service user's RSA_PUBLIC_KEY must also be configured in Snowflake.

* CORE-742: always quote string values in _yaml_inline (CodeRabbit feedback)

  Quoting only multi-line strings still allowed single-line string secrets like 'true', 'null', or '123' to be emitted unquoted and mis-parsed as booleans/None/integers when the rendered profiles.yml is loaded by dbt. Always quote string values for safety; non-string values (Undefined, dict, etc.) keep their previous behavior.

* CORE-742: only quote multi-line strings in _yaml_inline

  The previous change to always quote string values broke YAML type coercion for fields like redshift_port that come in as JSON strings but need to be rendered as bare YAML scalars so the dbt adapter parses them as integers. Revert to the original behavior: only multi-line strings (e.g. PEM private keys) are double-quoted with escaped newlines; single-line strings pass through unchanged so port/host/etc. continue to type-coerce correctly.

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Itamar Hartstein <haritamar@gmail.com>
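The final state of the filter after the three commits above (quote only multi-line strings, with escaped newlines) can be sketched like this. The real _yaml_inline in generate_profiles.py may have a different signature and handle more cases; this sketch only illustrates the quoting rule the commits converge on.

```python
def yaml_inline(value):
    """Render a value for inline YAML (sketch of the behavior above).

    - multi-line strings (e.g. PEM private keys) become one double-quoted
      scalar with backslashes, quotes, and newlines escaped, so the
      rendered profiles.yml stays parseable on a single line
    - single-line strings pass through unchanged, so fields like
      redshift_port stay bare scalars and type-coerce to integers in dbt
    - non-string values are returned as-is
    """
    if isinstance(value, str) and "\n" in value:
        escaped = (
            value.replace("\\", "\\\\")
            .replace('"', '\\"')
            .replace("\n", "\\n")
        )
        return f'"{escaped}"'
    return value
```

The second commit's "always quote" variant would have wrapped `"5439"` in quotes too, which is why it was reverted: dbt would then see a string where the adapter expects an integer.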
Replaces the tabulate-based ASCII table in code blocks with Slack's native Table Block (type: "table"), giving column-aligned, scannable tables directly in alert messages.
- Bold rich_text header row, raw_text data cells
- Falls back to JSON code block for >20 columns (Slack limit) or if a second table would appear in the same message
- Removes unused tabulate import and cell-truncation helpers from BlockKitBuilder
- Regenerates all block_kit test fixtures to match the new format

Closes #2225
Slack rejects raw_text cells with empty text. Convert None to "NULL" (represents database NULL values) and guard against any other value that stringifies to an empty string with a single-space fallback. Adds a dedicated test to assert no raw_text cell ever has empty text.
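The two commits above (native table rendering plus the empty-cell guard) can be sketched together. This is a hypothetical helper, not the project's BlockKitBuilder API: it builds a Slack Table Block with a bold rich_text header row and raw_text data cells, converting None to "NULL" and falling back to a single space for anything that stringifies to an empty string, since Slack rejects raw_text cells with empty text.

```python
def build_table_block(headers, rows):
    """Build a Slack Table Block (sketch of the rendering described above)."""

    def cell_text(value):
        if value is None:
            return "NULL"  # represents a database NULL value
        text = str(value)
        return text if text else " "  # single-space fallback for empty text

    # Header row: bold rich_text cells.
    header_row = [
        {
            "type": "rich_text",
            "elements": [
                {
                    "type": "rich_text_section",
                    "elements": [
                        {"type": "text", "text": name, "style": {"bold": True}}
                    ],
                }
            ],
        }
        for name in headers
    ]
    # Data rows: raw_text cells, sanitized so no cell has empty text.
    data_rows = [
        [{"type": "raw_text", "text": cell_text(value)} for value in row]
        for row in rows
    ]
    return {"type": "table", "rows": [header_row] + data_rows}


block = build_table_block(["id", "name"], [[1, None], [2, ""]])
```

A dedicated test like the one described in the commit would then assert that no raw_text cell in the built block ever carries empty text.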
Use native Slack Table Block for test results sample rendering
No description provided.