| title | Webhooks |
|---|---|
This document describes the webhook functionality in the Dynamo Operator, including validation webhooks, certificate management, and troubleshooting.
- Overview
- Architecture
- Configuration
- Certificate Management
- Multi-Operator Deployments
- Troubleshooting
The Dynamo Operator uses Kubernetes admission webhooks to provide real-time validation and mutation of custom resources. Currently, the operator implements validation webhooks that ensure invalid configurations are rejected immediately at the API server level, providing faster feedback to users compared to controller-based validation.
All webhook types (validating, mutating, conversion, etc.) share the same webhook server and TLS certificate infrastructure, making certificate management consistent across all webhook operations.
- ✅ Always enabled - Webhooks are a required component of the operator
- ✅ Shared certificate infrastructure - All webhook types use the same TLS certificates
- ✅ Automatic certificate generation - No manual certificate management required
- ✅ cert-manager integration - Optional integration for automated certificate lifecycle
- ✅ Multi-operator support - Lease-based coordination for cluster-wide and namespace-restricted deployments
- ✅ Immutability enforcement - Critical fields protected via CEL validation rules
- Validating Webhooks: Validate custom resource specifications before persistence
  - `DynamoComponentDeployment` validation
  - `DynamoGraphDeployment` validation
  - `DynamoModel` validation
  - `DynamoGraphDeploymentRequest` validation
- Mutating Webhooks: Apply default values to resources on creation
  - `DynamoGraphDeployment` defaulting
Note: All webhook types use the same certificate infrastructure described in this document.
┌─────────────────────────────────────────────────────────────────┐
│ API Server │
│ 1. User submits CR (kubectl apply) │
│ 2. API server calls MutatingWebhookConfiguration │
└────────────────────────┬────────────────────────────────────────┘
│ HTTPS (TLS required)
▼
┌─────────────────────────────────────────────────────────────────┐
│ Webhook Server (in Operator Pod) │
│ 3. Applies defaults (e.g., operator version annotation) │
│ 4. Returns mutated CR │
└────────────────────────┬────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ API Server │
│ 5. API server calls ValidatingWebhookConfiguration │
└────────────────────────┬────────────────────────────────────────┘
│ HTTPS (TLS required)
▼
┌─────────────────────────────────────────────────────────────────┐
│ Webhook Server (in Operator Pod) │
│ 6. Validates CR against business rules │
│ 7. Returns admit/deny decision + warnings │
└────────────────────────┬────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ API Server │
│ 8. If admitted: Persist CR to etcd │
│ 9. If denied: Return error to user │
└─────────────────────────────────────────────────────────────────┘
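At step 7, the admit/deny decision travels back to the API server as an `AdmissionReview` response. The sketch below shows that response shape; the `uid` and validation message are illustrative placeholders, not the operator's actual output.

```shell
# Build and sanity-check a denied AdmissionReview response (step 7).
# "uid" must echo the request's uid; status.message is what kubectl
# surfaces to the user. Values below are illustrative only.
cat > /tmp/admission-response.json <<'EOF'
{
  "apiVersion": "admission.k8s.io/v1",
  "kind": "AdmissionReview",
  "response": {
    "uid": "705ab4f5-6393-11e8-b7cc-42010a800002",
    "allowed": false,
    "status": {"code": 403, "message": "spec.replicas: must be >= 1"},
    "warnings": ["spec.foo is deprecated"]
  }
}
EOF

# Confirm it parses and print the decision the API server would act on
python3 -c "
import json
r = json.load(open('/tmp/admission-response.json'))['response']
print('allowed:', r['allowed'], '-', r['status']['message'])
"
```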
- Mutating webhooks: Apply defaults and transformations before validation
- Validating webhooks: Validate the (possibly mutated) CR against business rules
- CEL validation: Kubernetes-native immutability checks (always active)
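The CEL immutability checks mentioned above live in the CRD schema itself, so they are enforced by the API server even if the webhook is unreachable. A generic sketch of such a rule follows; the field name is an example, not the operator's actual schema:

```yaml
# Hypothetical CRD schema snippet — illustrates a CEL immutability rule.
# The field name "backendFramework" is an example only.
properties:
  spec:
    properties:
      backendFramework:
        type: string
        x-kubernetes-validations:
          - rule: "self == oldSelf"
            message: "backendFramework is immutable"
```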
The `webhook.enabled` Helm value has been removed. Webhooks are now a required component of the operator and are always active. If you previously ran with `webhook.enabled: false`, take the following steps before upgrading:
- Remove `webhook.enabled` from any custom values files. Helm will ignore the unknown key, but it should be cleaned up to avoid confusion.
- Ensure port 9443 is reachable from the Kubernetes API server to the operator pod. If you have `NetworkPolicy` rules or firewall configurations restricting traffic, add an ingress rule allowing the API server to reach the webhook server on port 9443.
- Ensure webhook TLS certificates are available. By default, Helm hooks generate self-signed certificates automatically during `helm upgrade`; no action is needed. If you use cert-manager or externally managed certificates, verify your configuration is in place before upgrading.
The operator supports three certificate management modes:
| Mode | Description | Use Case |
|---|---|---|
| Automatic (Default) | Helm hooks generate self-signed certificates | Testing and development environments |
| cert-manager | Integrate with cert-manager for automated lifecycle | Production deployments with cert-manager |
| External | Bring your own certificates | Production deployments with custom PKI |
```yaml
dynamo-operator:
  webhook:
    # Certificate management
    certManager:
      enabled: false
      issuerRef:
        kind: Issuer
        name: selfsigned-issuer
    # Certificate secret configuration
    certificateSecret:
      name: webhook-server-cert
      external: false
    # Certificate validity period (automatic generation only)
    certificateValidity: 3650 # 10 years
    # Certificate generator image (automatic generation only)
    certGenerator:
      image:
        repository: bitnami/kubectl
        tag: latest
    # Webhook behavior configuration
    failurePolicy: Fail # Fail (reject on error) or Ignore (allow on error)
    timeoutSeconds: 10 # Webhook timeout
    # Namespace filtering (advanced)
    namespaceSelector: {} # Kubernetes label selector for namespaces
```

```yaml
# Fail: Reject resources if webhook is unavailable (recommended for production)
webhook:
  failurePolicy: Fail

# Ignore: Allow resources if webhook is unavailable (use with caution)
webhook:
  failurePolicy: Ignore
```

Recommendation: Use `Fail` in production to ensure validation is always enforced. Only use `Ignore` if you need high availability and can tolerate occasional invalid resources.
Control which namespaces are validated (applies to the cluster-wide operator only):

```yaml
# Only validate resources in namespaces with specific labels
webhook:
  namespaceSelector:
    matchLabels:
      dynamo-validation: enabled

# Or exclude specific namespaces
webhook:
  namespaceSelector:
    matchExpressions:
      - key: dynamo-validation
        operator: NotIn
        values: ["disabled"]
```

Note: For namespace-restricted operators, the namespace selector is automatically set to validate only the operator's namespace. This configuration is ignored in namespace-restricted mode.
Zero configuration required! Certificates are automatically generated during helm install and helm upgrade.
- Pre-install/pre-upgrade hook: Generates self-signed TLS certificates
  - Root CA (valid 10 years)
  - Server certificate (valid 10 years)
  - Stores them in the Secret `<release>-webhook-server-cert`
- Post-install/post-upgrade hook: Injects the CA bundle into the `ValidatingWebhookConfiguration`
  - Reads `ca.crt` from the Secret
  - Patches the `ValidatingWebhookConfiguration` with the base64-encoded CA bundle
- Operator pod: Mounts the certificate secret and serves the webhook on port 9443
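The CA-injection step can be pictured as follows. This is a local sketch of the patch the post-upgrade hook applies; the certificate body is a fake placeholder, and the commented `kubectl patch` invocation is an assumption about the hook's shape, not its exact command line.

```shell
# Sketch of the CA-injection step: base64-encode ca.crt and build the
# JSON patch applied to the ValidatingWebhookConfiguration.
# The certificate body below is a fake placeholder.
printf '%s\n' '-----BEGIN CERTIFICATE-----' 'MIIBfakebase64payload==' '-----END CERTIFICATE-----' > /tmp/ca.crt
CA_BUNDLE=$(base64 -w0 < /tmp/ca.crt)

cat > /tmp/ca-patch.json <<EOF
[{"op": "replace", "path": "/webhooks/0/clientConfig/caBundle", "value": "${CA_BUNDLE}"}]
EOF

# In-cluster, the hook would run something like:
#   kubectl patch validatingwebhookconfiguration <name> --type='json' -p "$(cat /tmp/ca-patch.json)"
python3 -m json.tool /tmp/ca-patch.json > /dev/null && echo "patch is well-formed JSON"
```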
- Root CA: 10 years
- Server Certificate: 10 years (same as Root CA)
- Automatic rotation: Certificates are checked on every `helm upgrade` and regenerated only when needed
The certificate generation hook is intelligent:
- ✅ Checks existing certificates before generating new ones
- ✅ Skips generation if valid certificates exist (valid for 30+ days with correct SANs)
- ✅ Regenerates only when needed (missing, expiring soon, or incorrect SANs)
This means:
- Fast `helm upgrade` operations (no unnecessary cert generation)
- Safe to run `helm upgrade` frequently
- Certificates persist across reinstalls (stored in the Secret)
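The "valid for 30+ days with correct SANs" check can be reproduced locally with openssl. This is a sketch of that kind of check, assuming OpenSSL 1.1.1+ (for `-addext`); the service DNS name is an example, not necessarily your release's name.

```shell
# Generate a throwaway self-signed cert with the webhook service SAN,
# then run the same kind of checks the generation hook performs.
# The service DNS name below is an example.
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -keyout /tmp/tls.key -out /tmp/tls.crt \
  -subj "/CN=webhook" \
  -addext "subjectAltName=DNS:dynamo-operator-webhook-service.dynamo-system.svc" \
  2>/dev/null

# Still valid for 30+ days? (exit code 0 = yes)
openssl x509 -in /tmp/tls.crt -noout -checkend $((30*24*3600)) \
  && echo "cert valid for 30+ days"

# Does the SAN cover the webhook service?
openssl x509 -in /tmp/tls.crt -noout -text \
  | grep -q "dynamo-operator-webhook-service.dynamo-system.svc" \
  && echo "SAN matches webhook service"
```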
If you need to rotate certificates manually:

```shell
# Delete the certificate secret
kubectl delete secret <release>-webhook-server-cert -n <namespace>

# Upgrade the release to regenerate certificates
helm upgrade <release> dynamo-platform -n <namespace>
```

For clusters with cert-manager installed, you can enable automated certificate lifecycle management.

Prerequisites:
- cert-manager installed (v1.0+)
- CA issuer configured (e.g., `selfsigned-issuer`)
```yaml
dynamo-operator:
  webhook:
    certManager:
      enabled: true
      issuerRef:
        kind: Issuer # Or ClusterIssuer
        name: selfsigned-issuer # Your issuer name
```

- Helm creates a Certificate resource: Requests a TLS certificate from cert-manager
- cert-manager generates the certificate: Based on the configured issuer
- cert-manager stores it in the Secret: `<release>-webhook-server-cert`
- cert-manager ca-injector: Automatically injects the CA bundle into the `ValidatingWebhookConfiguration`
- Operator pod: Mounts the certificate secret and serves the webhook
- ✅ Automated rotation: cert-manager renews certificates before expiration
- ✅ Custom validity periods: Configure certificate lifetime
- ✅ CA rotation support: ca-injector handles CA updates automatically
- ✅ Integration with existing PKI: Use your organization's certificate infrastructure
With cert-manager, certificate rotation is fully automated:

- Leaf certificate rotation (default: every year)
  - cert-manager auto-renews before expiration
  - controller-runtime auto-reloads the new certificate
  - No pod restart required
  - No caBundle update required (same Root CA)
- Root CA rotation (every 10 years)
  - cert-manager rotates the Root CA
  - ca-injector auto-updates the caBundle in the `ValidatingWebhookConfiguration`
  - No manual intervention required
```yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-issuer
  namespace: dynamo-system
spec:
  selfSigned: {}
---
# Enable in platform values.yaml
dynamo-operator:
  webhook:
    certManager:
      enabled: true
      issuerRef:
        kind: Issuer
        name: selfsigned-issuer
```

Bring your own certificates for custom PKI requirements.
- Create the certificate secret manually:

```shell
kubectl create secret tls <release>-webhook-server-cert \
  --cert=tls.crt \
  --key=tls.key \
  -n <namespace>

# Also add ca.crt to the secret
kubectl patch secret <release>-webhook-server-cert -n <namespace> \
  --type='json' \
  -p='[{"op": "add", "path": "/data/ca.crt", "value": "'$(base64 -w0 < ca.crt)'"}]'
```

- Configure the operator to use the external secret:

```yaml
dynamo-operator:
  webhook:
    certificateSecret:
      external: true
      caBundle: <base64-encoded-ca-cert> # Must be specified manually
```

- Deploy the operator:

```shell
helm install dynamo-platform . -n <namespace> -f values.yaml
```

Requirements:
- Secret name: Must match `webhook.certificateSecret.name` (default: `webhook-server-cert`)
- Secret keys: `tls.crt`, `tls.key`, `ca.crt`
- Certificate SAN: Must include `<service-name>.<namespace>.svc`
  - Example: `dynamo-platform-dynamo-operator-webhook-service.dynamo-system.svc`
The operator supports running both cluster-wide and namespace-restricted instances simultaneously using a lease-based coordination mechanism.
Cluster:
├─ Operator A (cluster-wide, namespace: platform-system)
│ └─ Validates all namespaces EXCEPT team-a
└─ Operator B (namespace-restricted, namespace: team-a)
└─ Validates only team-a namespace
- Namespace-restricted operator creates a Lease in its namespace
- Cluster-wide operator watches for Leases named `dynamo-operator-ns-lock`
- Cluster-wide operator skips validation for namespaces with active Leases
- Namespace-restricted operator validates resources in its namespace
The lease mechanism is automatically configured based on the deployment mode:

```yaml
# Cluster-wide operator (default)
namespaceRestriction:
  enabled: false
# → Watches for leases in all namespaces
# → Skips validation for namespaces with active leases

# Namespace-restricted operator
namespaceRestriction:
  enabled: true
  namespace: team-a
# → Creates a lease in the team-a namespace
# → Does NOT check for leases (no cluster permissions)
```

```shell
# 1. Deploy cluster-wide operator
helm install platform-operator dynamo-platform \
  -n platform-system \
  --set namespaceRestriction.enabled=false

# 2. Deploy namespace-restricted operator for team-a
helm install team-a-operator dynamo-platform \
  -n team-a \
  --set namespaceRestriction.enabled=true \
  --set namespaceRestriction.namespace=team-a
```

The webhook configuration name reflects the deployment mode:
- Cluster-wide: `<release>-validating`
- Namespace-restricted: `<release>-validating-<namespace>`

Example:

```
# Cluster-wide
platform-operator-validating

# Namespace-restricted (team-a)
team-a-operator-validating-team-a
```

This allows multiple webhook configurations to coexist without conflicts.
If the namespace-restricted operator is deleted or becomes unhealthy:

- The Lease expires after `leaseDuration + gracePeriod` (default: ~30 seconds)
- The cluster-wide operator automatically resumes validation for that namespace
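The failover condition is simple timestamp arithmetic. The sketch below uses assumed durations (the chart's actual `leaseDuration` and `gracePeriod` values may differ; the document only states the total is roughly 30 seconds):

```shell
# Decide whether a lease has expired. Durations are assumptions for
# illustration; renew_time pretends the last renewal was 45s ago.
lease_duration=15   # seconds (assumed)
grace_period=15     # seconds (assumed)
renew_time=$(( $(date -u +%s) - 45 ))
now=$(date -u +%s)

if [ $(( now - renew_time )) -gt $(( lease_duration + grace_period )) ]; then
  echo "lease expired -> cluster-wide operator resumes validation"
else
  echo "lease active -> cluster-wide operator skips this namespace"
fi
```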
Symptoms:
- Invalid resources are accepted
- No validation errors in logs

Checks:

- Verify the webhook configuration exists:

```shell
kubectl get validatingwebhookconfiguration | grep dynamo
```

- Inspect the webhook configuration:

```shell
kubectl get validatingwebhookconfiguration <name> -o yaml
# Verify:
# - caBundle is present and non-empty
# - clientConfig.service points to the correct service
# - webhooks[].namespaceSelector matches your namespace
```

- Verify the webhook service exists:

```shell
kubectl get service -n <namespace> | grep webhook
```

- Check operator logs for webhook startup:

```shell
kubectl logs -n <namespace> deployment/<release>-dynamo-operator | grep webhook
# Should see: "Registering validation webhooks"
# Should see: "Starting webhook server"
```

Symptoms:
```
Error from server (InternalError): Internal error occurred: failed calling webhook:
Post "https://...webhook-service...:443/validate-...": dial tcp ...:443: connect: connection refused
```

Checks:

- Verify the operator pod is running:

```shell
kubectl get pods -n <namespace> -l app.kubernetes.io/name=dynamo-operator
```

- Check that the webhook server is listening:

```shell
# Port-forward to the pod
kubectl port-forward -n <namespace> pod/<operator-pod> 9443:9443

# In another terminal, test the connection
curl -k https://localhost:9443/validate-nvidia-com-v1alpha1-dynamocomponentdeployment
# Should NOT get "connection refused"
```

- Verify the webhook port in the deployment:

```shell
kubectl get deployment -n <namespace> <release>-dynamo-operator -o yaml | grep -A5 "containerPort: 9443"
```

- Check for webhook initialization errors:

```shell
kubectl logs -n <namespace> deployment/<release>-dynamo-operator | grep -i error
```

Symptoms:
```
Error from server (InternalError): Internal error occurred: failed calling webhook:
x509: certificate signed by unknown authority
```

Checks:

- Verify the caBundle is present:

```shell
kubectl get validatingwebhookconfiguration <name> -o jsonpath='{.webhooks[0].clientConfig.caBundle}' | base64 -d
# Should output a valid PEM certificate
```

- Verify the certificate secret exists:

```shell
kubectl get secret -n <namespace> <release>-webhook-server-cert
```

- Check certificate validity:

```shell
kubectl get secret -n <namespace> <release>-webhook-server-cert -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -text
# Check:
# - Not expired
# - SAN includes: <service-name>.<namespace>.svc
```

- Check the CA injection job logs:

```shell
kubectl logs -n <namespace> job/<release>-webhook-ca-inject-<revision>
```

Symptoms:
- `helm install` or `helm upgrade` hangs or fails
- Certificate generation errors

Checks:

- List hook jobs:

```shell
kubectl get jobs -n <namespace> | grep webhook
```

- Check job logs:

```shell
# Certificate generation
kubectl logs -n <namespace> job/<release>-webhook-cert-gen-<revision>

# CA injection
kubectl logs -n <namespace> job/<release>-webhook-ca-inject-<revision>
```

- Check RBAC permissions:

```shell
# Verify the ServiceAccount exists
kubectl get sa -n <namespace> <release>-webhook-ca-inject

# Verify the ClusterRole and ClusterRoleBinding exist
kubectl get clusterrole <release>-webhook-ca-inject
kubectl get clusterrolebinding <release>-webhook-ca-inject
```

- Manual cleanup:

```shell
# Delete failed jobs
kubectl delete job -n <namespace> <release>-webhook-cert-gen-<revision>
kubectl delete job -n <namespace> <release>-webhook-ca-inject-<revision>

# Retry helm upgrade
helm upgrade <release> dynamo-platform -n <namespace>
```

Symptoms:
- Webhook rejects a resource but the error message is unclear

Solution: Check the operator logs for detailed validation errors:

```shell
kubectl logs -n <namespace> deployment/<release>-dynamo-operator | grep "validate create\|validate update"
```

Webhook logs include:
- Resource name and namespace
- Validation errors with context
- Warnings for immutable field changes
Symptoms:
- Resource stuck in "Terminating" state
- Webhook blocks finalizer removal
Solution:
The webhook automatically skips validation for resources being deleted. If stuck:
- Check if the webhook is blocking:

```shell
kubectl describe <resource-type> <name> -n <namespace>
# Look for events mentioning webhook errors
```

- Temporarily work around the webhook:

```shell
# Option 1: Set failurePolicy to Ignore
kubectl patch validatingwebhookconfiguration <name> \
  --type='json' \
  -p='[{"op": "replace", "path": "/webhooks/0/failurePolicy", "value": "Ignore"}]'

# Option 2 (last resort): Delete the ValidatingWebhookConfiguration
kubectl delete validatingwebhookconfiguration <name>
```

- Delete the resource again:

```shell
kubectl delete <resource-type> <name> -n <namespace>
```

- Restore the webhook configuration:

```shell
helm upgrade <release> dynamo-platform -n <namespace>
```

- ✅ Use `failurePolicy: Fail` (default) to ensure validation is enforced
- ✅ Use cert-manager for automated certificate lifecycle in large deployments
- ✅ Test webhook configuration in staging before production
- ✅ Use `failurePolicy: Ignore` if webhook availability is problematic during development
- ✅ Keep automatic certificates (simpler than cert-manager for dev)
- ✅ Deploy one cluster-wide operator for platform-wide validation
- ✅ Deploy namespace-restricted operators for tenant-specific namespaces
- ✅ Monitor lease health to ensure coordination works correctly
- ✅ Use unique release names per namespace to avoid naming conflicts
- Kubernetes Admission Webhooks
- cert-manager Documentation
- Kubebuilder Webhook Tutorial
- CEL Validation Rules
For issues or questions:
- Check Troubleshooting section
- Review operator logs: `kubectl logs -n <namespace> deployment/<release>-dynamo-operator`
- Open an issue on GitHub