Commit 54bce76

theakshaypant and claude committed
docs(profiling): update guide for OTel migration
Replace the obsolete profiling.enable ConfigMap key with runtime-profiling
(enabled/disabled). Remove the K_METRICS_CONFIG controller section since the
controller now uses ConfigMap-based observability via the eventing adapter.
Document that controller profiling requires a pod restart as the adapter
reads config once at startup. Add CONFIG_OBSERVABILITY_NAME prerequisite
for the webhook.

Fixes #2633

Signed-off-by: Akshay Pant <akpant@redhat.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
1 parent 60d8e0c commit 54bce76

1 file changed: docs/content/docs/operations/profiling.md (60 additions, 94 deletions)
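Before applying the steps in this diff, you can check which key your ConfigMap currently carries. A minimal sketch, assuming the ConfigMap name used throughout the updated doc (`pipelines-as-code-config-observability`):

```bash
# Print the new-style key; empty output means it has not been set yet.
kubectl get configmap pipelines-as-code-config-observability \
  -n pipelines-as-code \
  -o jsonpath='{.data.runtime-profiling}'

# Print the obsolete key this commit replaces, to spot stale configuration.
kubectl get configmap pipelines-as-code-config-observability \
  -n pipelines-as-code \
  -o jsonpath='{.data.profiling\.enable}'
```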
@@ -8,7 +8,7 @@ BookToC: true
 
 Pipelines-as-Code components embed the [Knative profiling server](https://pkg.go.dev/knative.dev/pkg/profiling),
 which exposes Go runtime profiling data via the standard `net/http/pprof` endpoints.
-Profiling is useful for diagnosing CPU hot-spots, memory growth, goroutine leaks, and
+This is useful for diagnosing CPU hot-spots, memory growth, goroutine leaks, and
 other performance issues.
 
 ## How It Works
@@ -31,112 +31,88 @@ When profiling is disabled the server still listens but returns `404` for every
 
 ## Enabling Profiling
 
-### Watcher
-
-The **watcher** (`pipelines-as-code-watcher`) uses Knative's `sharedmain` framework,
-which watches the `config-observability` ConfigMap and toggles profiling **without a
-restart**.
-
-**`PAC_DISABLE_HEALTH_PROBE=true` must be set on the watcher, otherwise a port conflict
-on 8080 will cause the profiling server to shut down:**
-
-```bash
-kubectl set env deployment/pipelines-as-code-watcher \
-  -n pipelines-as-code \
-  PAC_DISABLE_HEALTH_PROBE=true
-```
-
-Then enable profiling via the ConfigMap:
+All components read profiling configuration from the same ConfigMap:
 
 ```bash
+# Enable
 kubectl patch configmap pipelines-as-code-config-observability \
   -n pipelines-as-code \
   --type merge \
-  -p '{"data":{"profiling.enable":"true"}}'
-```
+  -p '{"data":{"runtime-profiling":"enabled"}}'
 
-To disable profiling:
-
-```bash
+# Disable
 kubectl patch configmap pipelines-as-code-config-observability \
   -n pipelines-as-code \
   --type merge \
-  -p '{"data":{"profiling.enable":"false"}}'
+  -p '{"data":{"runtime-profiling":"disabled"}}'
 ```
 
-The watcher picks up the ConfigMap change immediately without a restart.
+### Component-specific prerequisites
 
-### Webhook
+The **controller** does not pick up ConfigMap changes at runtime. Its eventing adapter
+reads profiling config once at startup, so you must enable profiling in the ConfigMap
+**before** the pod starts and restart after any change:
 
-The **webhook** (`pipelines-as-code-webhook`) also uses `sharedmain` and supports
-dynamic toggling via the same ConfigMap. Unlike the watcher, the webhook does not run
-its own health probe server, so `PAC_DISABLE_HEALTH_PROBE` is not required.
+```bash
+kubectl rollout restart deployment/pipelines-as-code-controller \
+  -n pipelines-as-code
+```
 
-The webhook deployment does not set `CONFIG_OBSERVABILITY_NAME` by default, so it
-falls back to looking for a ConfigMap named `config-observability`, which does not
-exist in the PAC namespace. Set the environment variable first:
+The **watcher** needs `PAC_DISABLE_HEALTH_PROBE=true`, otherwise a port conflict on
+8080 causes the profiling server to shut down. The watcher picks up ConfigMap changes
+without a restart.
 
 ```bash
-kubectl set env deployment/pipelines-as-code-webhook \
+kubectl set env deployment/pipelines-as-code-watcher \
   -n pipelines-as-code \
-  CONFIG_OBSERVABILITY_NAME=pipelines-as-code-config-observability
+  PAC_DISABLE_HEALTH_PROBE=true
 ```
 
-Then use the same `kubectl patch` on the ConfigMap above to enable or disable profiling.
-
-### Controller
-
-The **controller** (`pipelines-as-code-controller`) uses the Knative eventing adapter
-framework. Profiling is configured at startup from the `K_METRICS_CONFIG` environment
-variable and is **not** dynamically reloaded; a pod restart is required after any change.
-
-The `K_METRICS_CONFIG` variable contains a JSON object whose `ConfigMap` field holds
-inline key/value configuration data. To enable profiling, add `"profiling.enable":"true"`
-inside that `ConfigMap` object:
+The **webhook** needs `CONFIG_OBSERVABILITY_NAME` set explicitly. Without it, the webhook
+looks for a ConfigMap called `config-observability`, which does not exist in the PAC
+namespace. The webhook picks up ConfigMap changes without a restart.
 
 ```bash
-# Read the current value first
-kubectl get deployment pipelines-as-code-controller \
+kubectl set env deployment/pipelines-as-code-webhook \
   -n pipelines-as-code \
-  -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="K_METRICS_CONFIG")].value}'
+  CONFIG_OBSERVABILITY_NAME=pipelines-as-code-config-observability
 ```
 
-Then patch the Deployment with `profiling.enable` added to the `ConfigMap` field, for example:
+## Accessing Profiles
+
+The profiling server listens on port **8008** by default. If that conflicts with another
+service, set `PROFILING_PORT` on the relevant Deployment(s):
 
 ```bash
-kubectl set env deployment/pipelines-as-code-controller \
+kubectl set env deployment/pipelines-as-code-watcher \
+  deployment/pipelines-as-code-controller \
+  deployment/pipelines-as-code-webhook \
   -n pipelines-as-code \
-  'K_METRICS_CONFIG={"Domain":"pipelinesascode.tekton.dev/controller","Component":"pac_controller","PrometheusPort":9090,"ConfigMap":{"name":"pipelines-as-code-config-observability","profiling.enable":"true"}}'
+  PROFILING_PORT=8090
 ```
 
-This triggers a rolling restart of the controller pod. Remove `"profiling.enable":"true"`
-(or set it to `"false"`) and re-apply to disable.
-
-## Accessing Profiles
-
-Port 8008 is not declared in the container spec by default. To make it reachable, patch
-the target Deployment(s) to add the port:
+Port 8008 (or your chosen port) is not declared in the container spec by default, so
+you need to patch the target Deployment(s) to expose it:
 
 ```bash
+PROFILING_PORT="${PROFILING_PORT:-8008}"
 for deploy in pipelines-as-code-watcher pipelines-as-code-controller pipelines-as-code-webhook; do
   kubectl patch deployment "$deploy" \
     -n pipelines-as-code \
     --type json \
-    -p '[{"op":"add","path":"/spec/template/spec/containers/0/ports/-","value":{"name":"profiling","containerPort":8008,"protocol":"TCP"}}]'
+    -p "[{\"op\":\"add\",\"path\":\"/spec/template/spec/containers/0/ports/-\",\"value\":{\"name\":\"profiling\",\"containerPort\":${PROFILING_PORT},\"protocol\":\"TCP\"}}]"
 done
 ```
 
-This triggers a rolling restart of the pod. Once the pod is running, you can access
-the pprof endpoints.
+This triggers a rolling restart. Once the pod is running you can access the pprof
+endpoints.
 
 ### Using `kubectl port-forward`
 
-The recommended way to access the profiling server is with `kubectl port-forward`. This
-forwards a local port on your machine to the port on the pod, without exposing it to the
-cluster network.
+The simplest way to reach the profiling server is `kubectl port-forward`. This forwards
+a local port to the pod without exposing it to the cluster network.
 
-First, get the name of the pod you want to profile. Choose the label that matches the
-component:
+First, grab the pod name for the component you want to profile:
 
 ```bash
 # Watcher
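The hunk above also makes `CONFIG_OBSERVABILITY_NAME` a prerequisite for the webhook. A quick way to verify it is set, sketched with the same jsonpath pattern the old doc used for `K_METRICS_CONFIG` (empty output means the variable is unset and the webhook falls back to `config-observability`):

```bash
kubectl get deployment pipelines-as-code-webhook \
  -n pipelines-as-code \
  -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="CONFIG_OBSERVABILITY_NAME")].value}'
```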
@@ -155,63 +131,53 @@ export POD_NAME=$(kubectl get pods -n pipelines-as-code \
   -o jsonpath='{.items[0].metadata.name}')
 ```
 
-Then, forward a local port to the pod's profiling port:
+Then forward the port:
 
 ```bash
-kubectl port-forward -n pipelines-as-code $POD_NAME 8008:8008
-```
-
-The pprof index is now available at `http://localhost:8008/debug/pprof/`.
-
-### Changing the profiling port
-
-If port 8008 conflicts with another service, set the `PROFILING_PORT` environment
-variable on the Deployment to use a different port:
-
-```bash
-kubectl set env deployment/pipelines-as-code-watcher \
-  -n pipelines-as-code \
-  PROFILING_PORT=8090
+PROFILING_PORT="${PROFILING_PORT:-8008}"
+kubectl port-forward -n pipelines-as-code $POD_NAME ${PROFILING_PORT}:${PROFILING_PORT}
 ```
 
-Update the `containerPort` in the patch above and your port-forward command to match.
+The pprof index is now at `http://localhost:${PROFILING_PORT}/debug/pprof/`.
 
 ### Capturing profiles with `go tool pprof`
 
-With `kubectl port-forward` running, use `go tool pprof` to analyze profiles directly:
+With `kubectl port-forward` running, you can analyze profiles directly:
 
 ```bash
 # Heap profile
-go tool pprof http://localhost:8008/debug/pprof/heap
+go tool pprof http://localhost:${PROFILING_PORT}/debug/pprof/heap
 
 # 30-second CPU profile
-go tool pprof http://localhost:8008/debug/pprof/profile
+go tool pprof http://localhost:${PROFILING_PORT}/debug/pprof/profile
 
 # Goroutine dump
-go tool pprof http://localhost:8008/debug/pprof/goroutine
+go tool pprof http://localhost:${PROFILING_PORT}/debug/pprof/goroutine
 ```
 
 ### Saving profiles to disk
 
-You can also save profiles to disk for later analysis using `curl`:
+You can save profiles for later analysis with `curl`:
 
 ```bash
 # Save a heap profile
 curl -o heap-$(date +%Y%m%d-%H%M%S).pb.gz \
-  http://localhost:8008/debug/pprof/heap
+  http://localhost:${PROFILING_PORT}/debug/pprof/heap
 
-# Analyze later - CLI
+# Analyze later
 go tool pprof heap-<timestamp>.pb.gz
 
-# Analyze later - interactive web UI (opens browser at http://localhost:8009)
+# Or open the interactive web UI (starts a browser at http://localhost:8009)
 go tool pprof -http=:8009 heap-<timestamp>.pb.gz
 ```
 
 ## Security Considerations
 
-The profiling server exposes internal runtime data. Because port 8008 is not declared
-in the container spec by default, access requires an explicit Deployment patch, limiting
-it to users with `deployments/patch` permission in the `pipelines-as-code` namespace.
+The profiling server exposes internal runtime data. Because the profiling port is not
+declared in the container spec by default, access requires an explicit Deployment patch,
+limiting it to users with `deployments/patch` permission in the `pipelines-as-code`
+namespace.
 
-Do not expose port 8008 via a Service or Ingress in production environments. Disable
-profiling (`profiling.enable: "false"`) when not actively investigating an issue.
+Do not expose the profiling port via a Service or Ingress in production. Disable
+profiling (`runtime-profiling: "disabled"`) when you are not actively investigating
+an issue.
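As an end-to-end check of the documented behavior, a sketch assuming you have enabled profiling, patched the port, and started a port-forward as described above:

```bash
# The profiling server always listens; the profiling state decides the response.
PROFILING_PORT="${PROFILING_PORT:-8008}"
curl -s -o /dev/null -w '%{http_code}\n' \
  "http://localhost:${PROFILING_PORT}/debug/pprof/"
# 200 -> profiling enabled; 404 -> server up but profiling disabled,
# matching the behavior described in "How It Works".
```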
