
Commit bc23ca6

theakshaypant and claude committed

docs(profiling): update guide for OTel migration

Replace the obsolete `profiling.enable` ConfigMap key with `runtime-profiling`
(`enabled`/`disabled`). Remove the `K_METRICS_CONFIG` controller section, since
the controller now uses ConfigMap-based observability via the eventing adapter.
Document that controller profiling requires a pod restart, as the adapter reads
the config once at startup. Add the `CONFIG_OBSERVABILITY_NAME` prerequisite for
the webhook.

Fixes #2633

Signed-off-by: Akshay Pant <akpant@redhat.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

1 parent 60d8e0c

1 file changed: 31 additions & 73 deletions

docs/content/docs/operations/profiling.md
@@ -31,98 +31,69 @@ When profiling is disabled the server still listens but returns `404` for every
 
 ## Enabling Profiling
 
-### Watcher
-
-The **watcher** (`pipelines-as-code-watcher`) uses Knative's `sharedmain` framework,
-which watches the `config-observability` ConfigMap and toggles profiling **without a
-restart**.
-
-**`PAC_DISABLE_HEALTH_PROBE=true` must be set on the watcher, otherwise a port conflict
-on 8080 will cause the profiling server to shut down:**
-
-```bash
-kubectl set env deployment/pipelines-as-code-watcher \
-  -n pipelines-as-code \
-  PAC_DISABLE_HEALTH_PROBE=true
-```
-
-Then enable profiling via the ConfigMap:
+All components read profiling configuration from the same ConfigMap:
 
 ```bash
+# Enable
 kubectl patch configmap pipelines-as-code-config-observability \
   -n pipelines-as-code \
   --type merge \
-  -p '{"data":{"profiling.enable":"true"}}'
-```
+  -p '{"data":{"runtime-profiling":"enabled"}}'
 
-To disable profiling:
-
-```bash
+# Disable
 kubectl patch configmap pipelines-as-code-config-observability \
   -n pipelines-as-code \
   --type merge \
-  -p '{"data":{"profiling.enable":"false"}}'
+  -p '{"data":{"runtime-profiling":"disabled"}}'
 ```
 
-The watcher picks up the ConfigMap change immediately without a restart.
-
-### Webhook
+### Component-specific prerequisites
 
-The **webhook** (`pipelines-as-code-webhook`) also uses `sharedmain` and supports
-dynamic toggling via the same ConfigMap. Unlike the watcher, the webhook does not run
-its own health probe server, so `PAC_DISABLE_HEALTH_PROBE` is not required.
+| Component | Extra step required |
+| --- | --- |
+| **watcher** | Set `PAC_DISABLE_HEALTH_PROBE=true` — otherwise a port conflict on 8080 causes the profiling server to shut down (see below). Picks up ConfigMap changes without a restart. |
+| **controller** | Profiling must be enabled in the ConfigMap **before** the pod starts — a pod restart is required after any change. The eventing adapter framework reads the profiling config once at startup and does not watch for ConfigMap updates. |
+| **webhook** | Set `CONFIG_OBSERVABILITY_NAME=pipelines-as-code-config-observability` — the webhook Deployment does not set this by default and falls back to `config-observability`, which does not exist in the PAC namespace. Picks up ConfigMap changes without a restart. |
 
-The webhook deployment does not set `CONFIG_OBSERVABILITY_NAME` by default, so it
-falls back to looking for a ConfigMap named `config-observability`, which does not
-exist in the PAC namespace. Set the environment variable first:
+For the watcher:
 
 ```bash
-kubectl set env deployment/pipelines-as-code-webhook \
+kubectl set env deployment/pipelines-as-code-watcher \
   -n pipelines-as-code \
-  CONFIG_OBSERVABILITY_NAME=pipelines-as-code-config-observability
+  PAC_DISABLE_HEALTH_PROBE=true
 ```
 
-Then use the same `kubectl patch` on the ConfigMap above to enable or disable profiling.
-
-### Controller
-
-The **controller** (`pipelines-as-code-controller`) uses the Knative eventing adapter
-framework. Profiling is configured at startup from the `K_METRICS_CONFIG` environment
-variable and is **not** dynamically reloaded; a pod restart is required after any change.
-
-The `K_METRICS_CONFIG` variable contains a JSON object whose `ConfigMap` field holds
-inline key/value configuration data. To enable profiling, add `"profiling.enable":"true"`
-inside that `ConfigMap` object:
+For the webhook:
 
 ```bash
-# Read the current value first
-kubectl get deployment pipelines-as-code-controller \
+kubectl set env deployment/pipelines-as-code-webhook \
   -n pipelines-as-code \
-  -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="K_METRICS_CONFIG")].value}'
+  CONFIG_OBSERVABILITY_NAME=pipelines-as-code-config-observability
 ```
 
-Then patch the Deployment with `profiling.enable` added to the `ConfigMap` field, for example:
+## Accessing Profiles
+
+The profiling server listens on port **8008** by default. If that conflicts with another
+service, set `PROFILING_PORT` on the relevant Deployment(s) before proceeding:
 
 ```bash
-kubectl set env deployment/pipelines-as-code-controller \
+kubectl set env deployment/pipelines-as-code-watcher \
+  deployment/pipelines-as-code-controller \
+  deployment/pipelines-as-code-webhook \
   -n pipelines-as-code \
-  'K_METRICS_CONFIG={"Domain":"pipelinesascode.tekton.dev/controller","Component":"pac_controller","PrometheusPort":9090,"ConfigMap":{"name":"pipelines-as-code-config-observability","profiling.enable":"true"}}'
+  PROFILING_PORT=8090
 ```
 
-This triggers a rolling restart of the controller pod. Remove `"profiling.enable":"true"`
-(or set it to `"false"`) and re-apply to disable.
-
-## Accessing Profiles
-
-Port 8008 is not declared in the container spec by default. To make it reachable, patch
-the target Deployment(s) to add the port:
+Port 8008 (or your chosen port) is not declared in the container spec by default. Patch
+the target Deployment(s) to expose it — substituting the port number if you changed it:
 
 ```bash
+PROFILING_PORT=8008  # change if you set a custom port above
 for deploy in pipelines-as-code-watcher pipelines-as-code-controller pipelines-as-code-webhook; do
   kubectl patch deployment "$deploy" \
    -n pipelines-as-code \
    --type json \
-    -p '[{"op":"add","path":"/spec/template/spec/containers/0/ports/-","value":{"name":"profiling","containerPort":8008,"protocol":"TCP"}}]'
+    -p "[{\"op\":\"add\",\"path\":\"/spec/template/spec/containers/0/ports/-\",\"value\":{\"name\":\"profiling\",\"containerPort\":${PROFILING_PORT},\"protocol\":\"TCP\"}}]"
 done
 ```
 
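The merge-patch payload introduced in this hunk is easy to mangle once shell quoting gets involved. Below is a minimal local sanity check (a sketch, not part of the commit; it assumes only `python3` on the PATH, and the cluster-side `kubectl` steps are left commented out since they need cluster access):

```shell
# Build the patch that toggles the new runtime-profiling key and verify it
# parses as JSON with the expected value before touching the cluster.
STATE=enabled   # or: disabled
PATCH=$(printf '{"data":{"runtime-profiling":"%s"}}' "$STATE")

# Prints the value back (here: enabled) if the payload is well-formed.
echo "$PATCH" | python3 -c 'import json, sys; print(json.load(sys.stdin)["data"]["runtime-profiling"])'

# Apply (requires cluster access; the controller reads this only at startup,
# so restart it afterwards):
# kubectl patch configmap pipelines-as-code-config-observability \
#   -n pipelines-as-code --type merge -p "$PATCH"
# kubectl rollout restart deployment/pipelines-as-code-controller -n pipelines-as-code
```

Validating the payload locally catches quoting mistakes before they show up as opaque API-server errors.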
@@ -155,27 +126,14 @@ export POD_NAME=$(kubectl get pods -n pipelines-as-code \
   -o jsonpath='{.items[0].metadata.name}')
 ```
 
-Then, forward a local port to the pod's profiling port:
+Then, forward a local port to the pod's profiling port (adjust if you changed `PROFILING_PORT`):
 
 ```bash
 kubectl port-forward -n pipelines-as-code $POD_NAME 8008:8008
 ```
 
 The pprof index is now available at `http://localhost:8008/debug/pprof/`.
 
-### Changing the profiling port
-
-If port 8008 conflicts with another service, set the `PROFILING_PORT` environment
-variable on the Deployment to use a different port:
-
-```bash
-kubectl set env deployment/pipelines-as-code-watcher \
-  -n pipelines-as-code \
-  PROFILING_PORT=8090
-```
-
-Update the `containerPort` in the patch above and your port-forward command to match.
-
 ### Capturing profiles with `go tool pprof`
 
 With `kubectl port-forward` running, use `go tool pprof` to analyze profiles directly:
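The forwarded port in this hunk also serves the stock `net/http/pprof` endpoints over plain HTTP, so raw profiles can be archived with `curl` as an alternative to `go tool pprof`. A sketch assuming the port-forward above is still active on localhost:8008 (these are the standard pprof paths, not PAC-specific ones):

```shell
# Capture profiles over the forwarded port; requires the port-forward to be
# running. Each file is a standalone artifact that can be analyzed later.
curl -sS -o heap.pb.gz      'http://localhost:8008/debug/pprof/heap'
curl -sS -o cpu.pb.gz       'http://localhost:8008/debug/pprof/profile?seconds=30'
curl -sS -o goroutines.txt  'http://localhost:8008/debug/pprof/goroutine?debug=2'

# Inspect the saved artifacts offline, for example:
# go tool pprof -top heap.pb.gz
```

Archiving raw profiles this way is useful when the cluster is only reachable during an incident window.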
@@ -214,4 +172,4 @@ in the container spec by default, access requires an explicit Deployment patch, 
 it to users with `deployments/patch` permission in the `pipelines-as-code` namespace.
 
 Do not expose port 8008 via a Service or Ingress in production environments. Disable
-profiling (`profiling.enable: "false"`) when not actively investigating an issue.
+profiling (`runtime-profiling: "disabled"`) when not actively investigating an issue.
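Since this last hunk renames the disable key, auditing the current state means reading `runtime-profiling` back from the ConfigMap. A sketch assuming cluster access and the ConfigMap name used throughout this diff:

```shell
# Prints "enabled", "disabled", or nothing if the key has never been set.
kubectl get configmap pipelines-as-code-config-observability \
  -n pipelines-as-code \
  -o jsonpath='{.data.runtime-profiling}'
```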
