
feat(iac): Enable flag for teeing otel-collector metrics to otel-router#2600

Merged
wj-e2b merged 1 commit into
mainfrom
wj-metrics-routing
May 12, 2026

Conversation

@wj-e2b
Contributor

@wj-e2b wj-e2b commented May 9, 2026

No description provided.

@codecov

codecov Bot commented May 9, 2026

❌ 7 Tests Failed:

Tests completed Failed Passed Skipped
2611 7 2604 5
View the full list of 13 ❄️ flaky test(s)
github.com/e2b-dev/infra/tests/integration/internal/tests/api/metrics::TestSandboxMetrics

Flake rate in main: 57.81% (Passed 127 times, Failed 174 times)

Stack Traces | 11.4s run time
=== RUN   TestSandboxMetrics
=== PAUSE TestSandboxMetrics
=== CONT  TestSandboxMetrics
    sandbox_metrics_test.go:44: 
        	Error Trace:	.../api/metrics/sandbox_metrics_test.go:44
        	Error:      	Should NOT be empty, but was 0
        	Test:       	TestSandboxMetrics
--- FAIL: TestSandboxMetrics (11.40s)
github.com/e2b-dev/infra/tests/integration/internal/tests/api/metrics::TestTeamMetrics

Flake rate in main: 69.74% (Passed 128 times, Failed 295 times)

Stack Traces | 0.36s run time
=== RUN   TestTeamMetrics
=== PAUSE TestTeamMetrics
=== CONT  TestTeamMetrics
    team_metrics_test.go:61: 
        	Error Trace:	.../api/metrics/team_metrics_test.go:61
        	Error:      	Should be true
        	Test:       	TestTeamMetrics
        	Messages:   	MaxConcurrentSandboxes should be >= 0
--- FAIL: TestTeamMetrics (0.36s)
github.com/e2b-dev/infra/tests/integration/internal/tests/api/sandboxes::TestGPGKeyserverWorks

Flake rate in main: 55.59% (Passed 127 times, Failed 159 times)

Stack Traces | 173s run time
=== RUN   TestGPGKeyserverWorks
=== PAUSE TestGPGKeyserverWorks
=== CONT  TestGPGKeyserverWorks
--- FAIL: TestGPGKeyserverWorks (173.43s)
github.com/e2b-dev/infra/tests/integration/internal/tests/api/sandboxes::TestGPGKeyserverWorks/with_network_config

Flake rate in main: 56.52% (Passed 120 times, Failed 156 times)

Stack Traces | 7.11s run time
=== RUN   TestGPGKeyserverWorks/with_network_config
=== PAUSE TestGPGKeyserverWorks/with_network_config
=== CONT  TestGPGKeyserverWorks/with_network_config
    sandbox_network_out_test.go:1029: Testing GPG keyserver connection to hkp://keyserver.ubuntu.com with network config defined...
Executing command gpg in sandbox i5amh1e38wv986fffupbh
    sandbox_network_out_test.go:1030: Command [gpg] output: event:{start:{pid:1309}}
    sandbox_network_out_test.go:1030: Command [gpg] output: event:{data:{stderr:"gpg: directory '............/home/user/.gnupg' created\n"}}
    sandbox_network_out_test.go:1030: Command [gpg] output: event:{data:{stderr:"gpg: keybox '............/home/user/.gnupg/pubring.kbx' created\n"}}
    sandbox_network_out_test.go:1030: Command [gpg] output: event:{data:{stderr:"gpg: keyserver receive failed: No data\n"}}
    sandbox_network_out_test.go:1030: Command [gpg] output: event:{end:{exit_code:2 exited:true status:"exit status 2" error:"exit status 2"}}
    sandbox_network_out_test.go:1033: 
        	Error Trace:	.../api/sandboxes/sandbox_network_out_test.go:1033
        	Error:      	Received unexpected error:
        	            	command gpg in sandbox i5amh1e38wv986fffupbh failed with exit code 2
        	Test:       	TestGPGKeyserverWorks/with_network_config
        	Messages:   	Expected GPG keyserver command to succeed, got error: command gpg in sandbox i5amh1e38wv986fffupbh failed with exit code 2, output: gpg: directory '............/home/user/.gnupg' created
        	            	gpg: keybox '............/home/user/.gnupg/pubring.kbx' created
        	            	gpg: keyserver receive failed: No data
--- FAIL: TestGPGKeyserverWorks/with_network_config (7.11s)
github.com/e2b-dev/infra/tests/integration/internal/tests/api/sandboxes::TestUpdateNetworkConfig

Flake rate in main: 76.80% (Passed 135 times, Failed 447 times)

Stack Traces | 214s run time
=== RUN   TestUpdateNetworkConfig
=== PAUSE TestUpdateNetworkConfig
=== CONT  TestUpdateNetworkConfig
--- FAIL: TestUpdateNetworkConfig (214.11s)
github.com/e2b-dev/infra/tests/integration/internal/tests/api/sandboxes::TestUpdateNetworkConfig/pause_resume_preserves_allow_internet_access_false

Flake rate in main: 77.23% (Passed 130 times, Failed 441 times)

Stack Traces | 7.83s run time
=== RUN   TestUpdateNetworkConfig/pause_resume_preserves_allow_internet_access_false
Executing command curl in sandbox ira102hj2qyy1rnp3rois
    sandbox_network_update_test.go:372: Command [curl] output: event:{start:{pid:1352}}
    sandbox_network_update_test.go:372: Command [curl] output: event:{end:{exit_code:35 exited:true status:"exit status 35" error:"exit status 35"}}
Executing command curl in sandbox ira102hj2qyy1rnp3rois
    sandbox_network_update_test.go:372: Command [curl] output: event:{start:{pid:1353}}
    sandbox_network_update_test.go:372: Command [curl] output: event:{end:{exit_code:35 exited:true status:"exit status 35" error:"exit status 35"}}
Executing command curl in sandbox ira102hj2qyy1rnp3rois
    sandbox_network_update_test.go:391: Command [curl] output: event:{start:{pid:1354}}
    sandbox_network_update_test.go:391: Command [curl] output: event:{data:{stdout:"HTTP/2 302 \r\nx-content-type-options: nosniff\r\nlocation: https://dns.google/\r\ndate: Mon, 11 May 2026 23:58:44 GMT\r\ncontent-type: text/html; charset=UTF-8\r\nserver: HTTP server (unknown)\r\ncontent-length: 216\r\nx-xss-protection: 0\r\nx-frame-options: SAMEORIGIN\r\nalt-svc: h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000\r\n\r\n"}}
    sandbox_network_update_test.go:391: Command [curl] output: event:{end:{exited:true status:"exit status 0"}}
    sandbox_network_update_test.go:391: Command [curl] completed successfully in sandbox ira102hj2qyy1rnp3rois
    sandbox_network_update_test.go:391: 
        	Error Trace:	.../api/sandboxes/sandbox_network_out_test.go:74
        	            				.../api/sandboxes/sandbox_network_update_test.go:60
        	            				.../api/sandboxes/sandbox_network_update_test.go:391
        	Error:      	An error is expected but got nil.
        	Test:       	TestUpdateNetworkConfig/pause_resume_preserves_allow_internet_access_false
        	Messages:   	https://8.8.8.8 should be blocked
--- FAIL: TestUpdateNetworkConfig/pause_resume_preserves_allow_internet_access_false (7.83s)
github.com/e2b-dev/infra/tests/integration/internal/tests/envd::TestBindLocalhost

Flake rate in main: 57.75% (Passed 229 times, Failed 313 times)

Stack Traces | 0s run time
=== RUN   TestBindLocalhost
=== PAUSE TestBindLocalhost
=== CONT  TestBindLocalhost
--- FAIL: TestBindLocalhost (0.00s)
github.com/e2b-dev/infra/tests/integration/internal/tests/envd::TestBindLocalhost/bind_0_0_0_0

Flake rate in main: 63.69% (Passed 126 times, Failed 221 times)

Stack Traces | 7.11s run time
=== RUN   TestBindLocalhost/bind_0_0_0_0
=== PAUSE TestBindLocalhost/bind_0_0_0_0
=== CONT  TestBindLocalhost/bind_0_0_0_0
    localhost_bind_test.go:69: Command [python] output: event:{start:{pid:1257}}
Executing command python in sandbox iw545wyty1og7yikftt4f
    localhost_bind_test.go:90: 
        	Error Trace:	.../tests/envd/localhost_bind_test.go:90
        	Error:      	Not equal: 
        	            	expected: 200
        	            	actual  : 502
        	Test:       	TestBindLocalhost/bind_0_0_0_0
        	Messages:   	Unexpected status code 502 for bind address 0.0.0.0
--- FAIL: TestBindLocalhost/bind_0_0_0_0 (7.11s)
github.com/e2b-dev/infra/tests/integration/internal/tests/envd::TestBindLocalhost/bind_::1

Flake rate in main: 65.48% (Passed 126 times, Failed 239 times)

Stack Traces | 7.41s run time
=== RUN   TestBindLocalhost/bind_::1
=== PAUSE TestBindLocalhost/bind_::1
=== CONT  TestBindLocalhost/bind_::1
    localhost_bind_test.go:69: Command [python] output: event:{start:{pid:1257}}
Executing command python in sandbox i62pvgp9upwx4ocik1owf
    localhost_bind_test.go:90: 
        	Error Trace:	.../tests/envd/localhost_bind_test.go:90
        	Error:      	Not equal: 
        	            	expected: 200
        	            	actual  : 502
        	Test:       	TestBindLocalhost/bind_::1
        	Messages:   	Unexpected status code 502 for bind address ::1
--- FAIL: TestBindLocalhost/bind_::1 (7.41s)
github.com/e2b-dev/infra/tests/integration/internal/tests/envd::TestBindLocalhost/bind_localhost

Flake rate in main: 65.57% (Passed 126 times, Failed 240 times)

Stack Traces | 7.7s run time
=== RUN   TestBindLocalhost/bind_localhost
=== PAUSE TestBindLocalhost/bind_localhost
=== CONT  TestBindLocalhost/bind_localhost
Executing command python in sandbox izlz65z1b30ay0j6p76lt
    localhost_bind_test.go:69: Command [python] output: event:{start:{pid:1257}}
    localhost_bind_test.go:90: 
        	Error Trace:	.../tests/envd/localhost_bind_test.go:90
        	Error:      	Not equal: 
        	            	expected: 200
        	            	actual  : 502
        	Test:       	TestBindLocalhost/bind_localhost
        	Messages:   	Unexpected status code 502 for bind address localhost
--- FAIL: TestBindLocalhost/bind_localhost (7.70s)
github.com/e2b-dev/infra/tests/integration/internal/tests/orchestrator::TestSandboxMemoryIntegrity

Flake rate in main: 66.58% (Passed 136 times, Failed 271 times)

Stack Traces | 85.4s run time
=== RUN   TestSandboxMemoryIntegrity
=== PAUSE TestSandboxMemoryIntegrity
=== CONT  TestSandboxMemoryIntegrity
    sandbox_memory_integrity_test.go:26: Build completed successfully
--- FAIL: TestSandboxMemoryIntegrity (85.43s)
github.com/e2b-dev/infra/tests/integration/internal/tests/orchestrator::TestSandboxMemoryIntegrity/tmpfs_hash

Flake rate in main: 67.77% (Passed 126 times, Failed 265 times)

Stack Traces | 25.7s run time
=== RUN   TestSandboxMemoryIntegrity/tmpfs_hash
=== PAUSE TestSandboxMemoryIntegrity/tmpfs_hash
=== CONT  TestSandboxMemoryIntegrity/tmpfs_hash
    sandbox_memory_integrity_test.go:70: Command [bash] output: event:{start:{pid:1269}}
Executing command bash in sandbox ia1feg1jz13kuryyvmeby (user: root)
    sandbox_memory_integrity_test.go:70: Command [bash] output: event:{data:{stdout:"Total memory: 985 MB\n"}}
    sandbox_memory_integrity_test.go:70: Command [bash] output: event:{data:{stdout:"Used memory before tmpfs mount: 186 MB\nFree memory before tmpfs mount: 798 MB\nMemory to use in integrity test (80% of free, min 64MB): 638 MB\n"}}
    sandbox_memory_integrity_test.go:70: Command [bash] output: event:{data:{stderr:"638+0 records in\n638+0 records out\n668991488 bytes (669 MB, 638 MiB) copied, 3.44304 s, 194 MB/s\n"}}
    sandbox_memory_integrity_test.go:70: Command [bash] output: event:{data:{stderr:"\tCommand being timed: \"dd if=/dev/urandom of=/mnt/testfile bs=1M count=638\"\n\tUser time (seconds): 0.00\n\tSystem time (seconds): 3.41\n\tPercent of CPU this job got: 99%\n\tElapsed (wall clock) time (h:mm:ss or m:ss): 0:03.44\n\tAverage shared text size (kbytes): 0\n\tAverage unshared data size (kbytes): 0\n\tAverage stack size (kbytes): 0\n\tAverage total size (kbytes): 0\n\tMaximum resident set size (kbytes): 2704\n\tAverage resident set size (kbytes): 0\n\tMajor (requiring I/O) page faults: 3\n\tMinor (reclaiming a frame) page faults: 343\n\tVoluntary context switches: 4\n\tInvoluntary context switches: 21\n\tSwaps: 0\n\tFile system inputs: 176\n\tFile system outputs: 0\n\tSocket messages sent: 0\n\tSocket messages received: 0\n\tSignals delivered: 0\n\tPage size (bytes): 4096\n\tExit status: 0\n"}}
    sandbox_memory_integrity_test.go:70: Command [bash] output: event:{data:{stdout:"Used memory after tmpfs mount and file fill: 833 MB\n"}}
    sandbox_memory_integrity_test.go:70: Command [bash] output: event:{end:{exited:true  status:"exit status 0"}}
    sandbox_memory_integrity_test.go:70: Command [bash] completed successfully in sandbox ignw2kun389ibwrd521pd
Executing command bash in sandbox ignw2kun389ibwrd521pd (user: root)
    sandbox_memory_integrity_test.go:74: Command [bash] output: event:{start:{pid:1285}}
    sandbox_memory_integrity_test.go:74: Command [bash] output: event:{data:{stdout:"a98ef8226c8c8eade06c1df57334987cb002399df66c2526a72b11570b3ffb0d\n"}}
    sandbox_memory_integrity_test.go:74: Command [bash] output: event:{end:{exited:true  status:"exit status 0"}}
    sandbox_memory_integrity_test.go:74: Command [bash] completed successfully in sandbox ignw2kun389ibwrd521pd
Executing command bash in sandbox ignw2kun389ibwrd521pd (user: root)
    sandbox_memory_integrity_test.go:99: Command [bash] output: event:{start:{pid:1288}}
    sandbox_memory_integrity_test.go:100: 
        	Error Trace:	.../tests/orchestrator/sandbox_memory_integrity_test.go:100
        	Error:      	Received unexpected error:
        	            	failed to execute command bash in sandbox ignw2kun389ibwrd521pd: invalid_argument: protocol error: incomplete envelope: unexpected EOF
        	Test:       	TestSandboxMemoryIntegrity/tmpfs_hash
--- FAIL: TestSandboxMemoryIntegrity/tmpfs_hash (25.69s)
github.com/e2b-dev/infra/tests/integration/internal/tests/proxies::TestSandboxWithTrafficAccessTokenAutoResumeViaProxy

Flake rate in main: 55.56% (Passed 128 times, Failed 160 times)

Stack Traces | 19.7s run time
=== RUN   TestSandboxWithTrafficAccessTokenAutoResumeViaProxy
=== PAUSE TestSandboxWithTrafficAccessTokenAutoResumeViaProxy
=== CONT  TestSandboxWithTrafficAccessTokenAutoResumeViaProxy
    traffic_access_token_test.go:263: [Status code: 502] Response body: {"sandboxId":"ivze8w2dbj939xs3rnkal","message":"The sandbox is running but port is not open","port":8080,"code":502}
Executing command ls in sandbox i9crfnfwf6qv7qza06qis
    traffic_access_token_test.go:263: [Status code: 502] Response body: {"sandboxId":"ivze8w2dbj939xs3rnkal","message":"The sandbox is running but port is not open","port":8080,"code":502}
    traffic_access_token_test.go:263: [Status code: 502] Response body: {"sandboxId":"ivze8w2dbj939xs3rnkal","message":"The sandbox is running but port is not open","port":8080,"code":502}
    traffic_access_token_test.go:292: 
        	Error Trace:	.../tests/proxies/traffic_access_token_test.go:292
        	Error:      	Received unexpected error:
        	            	Get "http://localhost:3002": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
        	Test:       	TestSandboxWithTrafficAccessTokenAutoResumeViaProxy
--- FAIL: TestSandboxWithTrafficAccessTokenAutoResumeViaProxy (19.73s)


Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

The otel_router_http_port variable is missing from the nomad module call in the AWS provider, creating an inconsistency with the GCP implementation and preventing the log router port from being correctly propagated. The otel_router_http_port variable is defined in the GCP nomad module but is not passed to the logs_collector module, leaving the variable unused and the log teeing port unconfigured.

Comment thread iac/provider-aws/main.tf
Comment thread iac/provider-gcp/nomad/variables.tf
Comment thread iac/provider-aws/variables.tf
@wj-e2b wj-e2b force-pushed the wj-metrics-routing branch 2 times, most recently from be089b5 to ed716fe Compare May 9, 2026 04:13
@wj-e2b
Contributor Author

wj-e2b commented May 9, 2026

@claude review


@claude claude Bot left a comment


Looks good — prior feedback was addressed by removing the unused HTTP port variable, and the metrics flag/port are now correctly wired end-to-end and symmetrically across both providers.

Extended reasoning...

Overview

This PR adds an enable_otel_router_metrics feature flag (plus otel_router_grpc_port, default 4320) that conditionally adds a second metrics/external pipeline to the otel-collector config, exporting external customer metrics to a local otel-router via OTLP/gRPC. Touches 12 files across iac/modules/job-otel-collector, both provider Makefiles, and provider-aws / provider-gcp Terraform variables and module wiring.

Security risks

None identified. This is pure observability/config plumbing: a Boolean toggle (default false) and a port number, both gated behind %{ if enable_otel_router_metrics } template blocks. The new OTLP exporter targets 127.0.0.1 with tls.insecure: true, which is appropriate for a co-located router on the same node.
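The conditional exporter described above might look roughly like this in the templated collector config. This is an illustrative sketch, not the PR's exact diff: the flag name enable_otel_router_metrics, the port variable otel_router_grpc_port (default 4320), the 127.0.0.1 endpoint, and the tls.insecure setting come from this review, while the exporter key and pipeline names are assumptions.

```yaml
# Sketch of the templated otel-collector config (illustrative; gated by the
# same %{ if } template directive style the existing flags use).
exporters:
%{ if enable_otel_router_metrics }
  # Tee external customer metrics to the co-located otel-router over OTLP/gRPC.
  otlp/otel-router:
    endpoint: "127.0.0.1:${otel_router_grpc_port}"  # default 4320 per the PR
    tls:
      insecure: true  # acceptable: traffic never leaves the node
%{ endif }

service:
  pipelines:
%{ if enable_otel_router_metrics }
    metrics/external:
      receivers: [otlp]
      exporters: [otlp/otel-router]
%{ endif }
```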

Level of scrutiny

Low. This is mechanical IaC config wiring — no logic, no auth/crypto, no data path changes by default. The chain is symmetric across AWS and GCP and follows the same pattern as the existing enable_otel_router_logs flag.

Other factors

The author addressed my prior review by removing the never-consumed otel_router_http_port variable that I flagged as misleading config surface. The current diff is tight and the new variables flow correctly: env → Makefile tfvar → root variables.tf → module.nomad → nomad/variables.tf → module.otel_collector → otel-collector.yaml template, with matching wiring on both providers.
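The propagation chain above can be sketched in Terraform. All file paths, module blocks, and defaults here are assumptions inferred from the chain described in this review, not the PR's actual contents.

```hcl
# Sketch of the flag's propagation path (illustrative; layout assumed).

# Root variables.tf — fed by TF_VAR_* environment variables from the Makefile.
variable "enable_otel_router_metrics" {
  type    = bool
  default = false
}

variable "otel_router_grpc_port" {
  type    = number
  default = 4320
}

# Root main.tf — pass the flag and port through to the nomad module.
module "nomad" {
  source                     = "./nomad"
  enable_otel_router_metrics = var.enable_otel_router_metrics
  otel_router_grpc_port      = var.otel_router_grpc_port
}

# nomad/main.tf — pass through again to the otel-collector job module,
# which renders otel-collector.yaml via templatefile().
module "otel_collector" {
  source                     = "../modules/job-otel-collector"
  enable_otel_router_metrics = var.enable_otel_router_metrics
  otel_router_grpc_port      = var.otel_router_grpc_port
}
```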

The bug-hunter inline comment about ENABLE_OTEL_ROUTER_LOGS being absent from the Makefiles is correctly tagged as pre-existing (introduced in PR #2577). It is a one-line cleanup that an operator can work around via .terraform.<env>.tfvars, so I do not consider it a blocker for this PR.

Comment thread iac/provider-aws/Makefile
Contributor

@dobrac dobrac left a comment


lgtm, one nit

Comment thread iac/modules/job-otel-collector/configs/otel-collector.yaml
@cursor

cursor Bot commented May 11, 2026

PR Summary

Medium Risk
Medium risk because it changes observability routing and introduces a new conditional exporter/path that can silently drop or duplicate metrics if the local otel-router endpoint/port is misconfigured or unavailable.

Overview
The new enable_otel_router_metrics path assumes otel-router is reachable on 127.0.0.1:${otel_router_grpc_port}; if it is not co-located or the port is wrong, the added exporter can cause export failures or backpressure in the metrics/external pipeline. Enabling this may duplicate external metrics (ClickHouse plus otel-router) and increase load/egress unexpectedly, and the new Makefile TF_VAR wiring relies on env var naming/casing being set correctly or the flag will be ignored.
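The Makefile TF_VAR wiring the summary warns about typically exports environment variables whose names must match `TF_VAR_<terraform_variable_name>` exactly, or Terraform silently falls back to the default. A hypothetical sketch (the env var names are assumptions based on the PR description):

```make
# Hypothetical Makefile fragment: Terraform only picks these up when the
# exported name matches TF_VAR_<variable> exactly (case-sensitive); a typo
# means the flag is silently ignored and the default (false) applies.
export TF_VAR_enable_otel_router_metrics := $(ENABLE_OTEL_ROUTER_METRICS)
export TF_VAR_otel_router_grpc_port      := $(OTEL_ROUTER_GRPC_PORT)
```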

Reviewed by Cursor Bugbot for commit 374ea2f. Bugbot is set up for automated code reviews on this repo. Configure here.


@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 575bc513ef

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

Comment thread iac/modules/job-otel-collector/configs/otel-collector.yaml
@wj-e2b wj-e2b force-pushed the wj-metrics-routing branch from 575bc51 to b261c67 Compare May 11, 2026 23:30
@wj-e2b wj-e2b enabled auto-merge (squash) May 11, 2026 23:31

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: b261c67276


Comment thread iac/modules/job-otel-collector/configs/otel-collector.yaml

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 374ea2fc0e


Comment thread iac/modules/job-otel-collector/configs/otel-collector.yaml
@wj-e2b wj-e2b force-pushed the wj-metrics-routing branch from 374ea2f to b261c67 Compare May 11, 2026 23:46
@wj-e2b wj-e2b merged commit df123b0 into main May 12, 2026
90 of 94 checks passed
@wj-e2b wj-e2b deleted the wj-metrics-routing branch May 12, 2026 00:09