Add Pipeline CRD for Redpanda Connect pipeline management#1337
Conversation
This PR is stale because it has been open 5 days with no activity. Remove stale label or comment or this will be closed in 5 days.
andrewstucki left a comment:
Not sure if this is just a movement of the connect pipeline reconcilers over from another repo, but I would definitely want to change a chunk of the design around how this reconciliation works to be more in line with this repo's patterns before merging anything like this. Could we just add this as a roadmap item rather than trying to generate it? It shouldn't take more than a day or two to implement properly once we actually pull it in. As is, there are a number of issues I see immediately with this PR that need changing:
- we try to use SSA semantics whenever possible, so the `CreateOrPatch` and `Update` calls are out-of-place
- not a huge fan of swallowing the status `Update` errors on the reconcile calls, and it appears inconsistent -- sometimes it looks like we're returning the update error, sometimes swallowing it
- we generally try to externalize our sub-resource definitions to some sort of "render" package to avoid having to inline everything
- this should likely use the `kube.Ctl` synchronization primitives
- I'm assuming we'd probably want to run some of the secret stuff through cloud-secret materialization?
- would we want any of the configuration around Redpanda sources to somehow be pluggable with our clusterRef-style specification?
- this appears to not have created the RBAC policies in the proper place, as they need to be copied over to the helm chart itself
- the tests should actually test the reconciler; here they just do license validation
- I'd prefer to use some sort of enum/typed status information for the pipeline conditions, because what they are/do is basically undocumented right now
- at least one rendering test in the helm chart should test the enabling flag
- the CRD itself also needs to be added to the CRD installation process subcommand in order for this to ever work
- for a new CRD type we should have at least one acceptance test that exercises the feature
Moving back to draft mode. Thanks for taking a look.
Introduces the Connect custom resource (shortName: rpcn) for managing Redpanda Connect pipelines via the Redpanda Operator. Each Connect CR declaratively specifies a pipeline configuration in YAML, and the controller reconciles the desired state by managing a Deployment and ConfigMap.

Enterprise license gating: the controller validates a Redpanda enterprise license (v1 format from common-go/license) on every reconciliation. The license must include the CONNECT product and be unexpired. The license is read from a Kubernetes Secret referenced by spec.licenseSecretRef.

Key components:
- CRD types: Connect, ConnectSpec, ConnectStatus in v1alpha2
- Controller: creates/patches ConfigMap + Deployment, updates status
- RBAC: ClusterRole permissions for connects, deployments, configmaps, secrets
- CRD manifest: cluster.redpanda.com_connects.yaml
- Gated behind --enable-connect flag (default: false)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Update generated files to match what CI's controller-gen v0.20.1 and code generators produce:
- Move Connect deepcopy functions to the correct alphabetical position (after Configurator, before ConnectorMonitoring)
- Regenerate CRD YAML with the full OpenAPI schema from controller-gen
- Update crd-docs.adoc with Connect type documentation
- Add a Connect deprecation test case
- Update RBAC role.yaml to match controller-gen output
- Add missing common-go/license go.sum entries in acceptance/ and gen/
- Fix whitespace in run.go

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Fix TestCRDS by adding connects.cluster.redpanda.com to the expected CRD list and adding a Connect() helper function.

Add Cloud-compatible fields to ConnectSpec for smooth migration to Redpanda Cloud managed Connect:
- displayName: human-readable pipeline name
- description: pipeline description
- tags: key-value pairs for filtering/organization
- configFiles: additional config files mounted at /config

The controller now includes configFiles entries in the ConfigMap alongside connect.yaml, with a guard against key collisions.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add displayName, description, tags, and configFiles documentation to the ConnectSpec section of the generated CRD docs. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add scheduling fields to ConnectSpec for spreading pipeline pods
across availability zones:
- zones: list of AZs to constrain and spread pods across. When set,
the controller auto-generates a node affinity (restrict to listed
zones) and a topology spread constraint (even distribution with
maxSkew=1, ScheduleAnyway) using topology.kubernetes.io/zone.
- tolerations: standard k8s tolerations for tainted nodes
- nodeSelector: label-based node selection
- topologySpreadConstraints: explicit spread constraints that
override the auto-generated zone constraint when provided
Example usage:
spec:
zones: ["us-east-1a", "us-east-1b", "us-east-1c"]
replicas: 3
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Update connects CRD YAML with full TopologySpreadConstraint schema instead of x-kubernetes-preserve-unknown-fields, expand toleration descriptions, fix field ordering (nodeSelector before paused), and update crd-docs.adoc descriptions to match Go struct comments. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The Connect controller is now enabled by default (--enable-connect=true). Users can disable it via the operator helm chart value:

helm install redpanda-operator ... --set connectController.enabled=false

Individual Connect pipeline CRs still require an enterprise license with the CONNECT product — enabling the controller alone does not grant enterprise functionality.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Update README, template, schema, partial types, and golden files to include the new connectController chart value. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Make spec.licenseSecretRef optional on Connect CRs. When not set, the controller falls back to the operator-level enterprise license configured via enterprise.licenseSecretRef in the operator Helm chart values. This avoids requiring users to specify the license on every Connect pipeline CR. The operator-level license is passed via --license-file-path and mounted from the chart's enterprise.licenseSecretRef secret. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Remove spec.licenseSecretRef from the Connect CRD entirely. The license is now only configured at the operator level via enterprise.licenseSecretRef in the operator Helm chart values.
- Set connectController.enabled to false by default (opt-in).
- Simplify controller license validation to only read from the operator-level license file path.
- Add unit tests for license validation covering: no license configured, invalid file, expired license, open source license, V0 enterprise license with all products, V1 enterprise with/without CONNECT product, V1 trial license, and V1 expired enterprise license.
- Fix values.schema.json alphabetical ordering (connectController before crds).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The connect container image has the binary at /redpanda-connect (root), not in $PATH. Use the absolute path in the pod command to match the image layout. Also bump the default image tag to 4.87.0. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
# Conflicts:
#	acceptance/go.sum
#	gen/go.sum
The merge from main introduced new RBAC rules (endpoints, endpointslices, serviceexports, serviceimports) that were not reflected in the golden test file. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds spec.annotations to the Pipeline CRD, applied only to the pod template (not ConfigMaps or Deployments). Per-pipeline annotations are merged with commonAnnotations, with per-pipeline values taking precedence. This enables Datadog autodiscovery and similar pod-level integrations without polluting other resources. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add a lint init container to Pipeline Deployments that runs `redpanda-connect lint` before the main container starts. If the pipeline config is invalid, the init container fails and the pod never runs.

The controller now detects init container failures by listing pods and checking their init container statuses, surfacing the result as a new ConfigValid condition on the Pipeline status. This gives users immediate feedback when their pipeline config has syntax errors.

New acceptance test scenarios:
- Delete a Pipeline (create → running → delete → verify gone)
- Update a Pipeline config (change configYaml → verify still running)
- Stop a Pipeline (set paused:true → verify stopped)
- Resume a stopped Pipeline (pause → unpause → verify running)
- Invalid config detection (bad configYaml → verify ConfigValid=False)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add detailed instructions for adding RBAC permissions: manually updating itemized RBAC files, ensuring they're in the k8s.yml file list, and regenerating golden files after RBAC changes. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Align the Pipeline controller with the conventions established by the Console controller (PR #1113): 1. Finalizer key: Use shared `operator.redpanda.com/finalizer` instead of unique `pipeline.redpanda.com/finalizer` 2. Namespace filtering: Store namespace param and filter in Reconcile to respect operator namespace scoping 3. Owns() all types: Iterate Types() and register Owns() for each managed resource type (Deployment, ConfigMap, PodMonitor) 4. Unexported finalizerKey: Match Console's private constant style 5. Golden file tests: Add txtar-based golden file snapshot tests and deletion GC verification following Console's test pattern 6. PodMonitor CRD check: Skip PodMonitor watch if the CRD is not installed, matching Console's ServiceMonitor pattern Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Run task generate (which includes gci lint-fix) to ensure generated files have the correct import ordering that CI expects. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Document the correct order for regeneration and linting: use `task generate` (not `task k8s:generate`) as the final step before committing, since it includes `lint-fix`/`gci` import ordering. Explains the common mistake of running k8s:generate which reverts gci fixes on generated files. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Changes since last comment: bug fixes, new features, documentation, maintenance.
… detection

- Change finalizer addition from Apply/SSA to Update to avoid taking ownership of spec fields, which caused SSA conflicts when users updated .spec.configYaml via kubectl apply --server-side
- Use a shorter requeue interval (15s) during provisioning/pending phases so init-container lint failures are detected within seconds instead of waiting the full 5-minute requeue cycle
- Check LastTerminationState in addition to the current State when inspecting the lint init container, catching failures between container restarts

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…nection
When a Pipeline references a Redpanda cluster via spec.cluster.clusterRef,
the operator resolves the cluster's connection details (brokers, TLS, SASL)
and injects them as environment variables and volume mounts into the Connect
pod. This allows pipeline configs to use ${RPK_BROKERS}, ${RPK_TLS_ENABLED},
${RPK_TLS_ROOT_CAS_FILE}, ${RPK_SASL_MECHANISM}, ${RPK_SASL_USER}, and
${RPK_SASL_PASSWORD} to connect to operator-managed Redpanda clusters.
Changes:
- Add cluster.go with resolution logic using ConvertV2ToRenderState
- Inject cluster env vars and TLS CA cert volume in render.go
- Watch referenced Redpanda clusters via field index for re-reconciliation
- Add ClusterRef condition to Pipeline status
- Add RBAC for reading Redpanda CRs
- Add acceptance tests for produce/consume via clusterRef
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add spec.credentials to Pipeline CRD allowing users to specify custom SASL credentials (mechanism, username, passwordSecretRef) instead of defaulting to the cluster's bootstrap admin user. When credentials is set alongside a clusterRef, the explicit credentials take precedence. This enables pairing a Pipeline with a dedicated User CRD for least-privilege access. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Change credentials.password from corev1.SecretKeySelector to ValueSource, matching the pattern used by all other CRDs in this operator. This adds support for:
- Kubernetes Secrets (secretKeyRef)
- ConfigMaps (configMapKeyRef)
- Inline values (inline)
- External secret providers via the externalSecretRef field (AWS Secrets Manager, GCP Secret Manager, Azure Key Vault)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…ials When credentials come from spec.credentials (dedicated user), use RPK_CREDENTIALS_SASL_MECHANISM, RPK_CREDENTIALS_SASL_USER, and RPK_CREDENTIALS_SASL_PASSWORD. When credentials come from the cluster's bootstrap admin user, use RPK_SASL_MECHANISM, RPK_SASL_USER, and RPK_SASL_PASSWORD. This makes the credential source explicit in the pipeline configuration. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Document how ClusterRef resolution works (ConvertV2ToRenderState + AsStaticConfigSource pattern), how controllers watch referenced clusters (multicluster vs single-cluster patterns), and the ValueSource type for secrets including external secret provider support (AWS, GCP, Azure). Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…alignment Add DeepCopyInto/DeepCopy for PipelineSASLCredentials and add Credentials field handling to PipelineSpec.DeepCopyInto. Fix struct field alignment in render.go lint init container. Note: CRD YAML, CRD docs, and RBAC formatting still require `nix develop -c task generate` to fully regenerate. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Run `task generate` to regenerate: - CRD schema with credentials field (ValueSource-based password) - CRD reference docs with PipelineSASLCredentials type - RBAC ClusterRole with controller-gen formatting Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Document that CRD YAML, deepcopy, CRD docs, RBAC, and Helm templates must always be regenerated via `nix develop -c task generate`, never hand-edited or reconstructed from CI diffs. Also note the fallback nix binary path at /nix/var/nix/profiles/default/bin/nix. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Remove the PipelineSASLCredentials type and spec.credentials field. Users who need non-admin SASL credentials should use spec.secretRef or spec.env to inject custom username/password env vars, and configure the SASL mechanism directly in their pipeline configYaml. This simplifies the CRD surface — clusterRef provides broker addresses, TLS, and bootstrap SASL by default. Custom credentials are handled through the existing secret injection mechanisms. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Changes since last comment: bug fixes, new features, refactoring, documentation.
…ifests
The PatchManifest function in acceptance tests expands ${KEY} patterns
as test template variables. Pipeline configYaml contains ${RPK_BROKERS},
${RPK_TLS_ENABLED}, etc. which are Redpanda Connect runtime env var
interpolations resolved inside the container, not test framework vars.
Pass through any ${RPK_*} pattern without expansion so these reach
Kubernetes as literal text for Connect to interpolate at runtime.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The lint init container runs /redpanda-connect lint which needs env vars
like RPK_BROKERS, RPK_TLS_ENABLED, RPK_TLS_ROOT_CAS_FILE to resolve
${...} interpolations in the pipeline config. Without the env vars, the
linter sees literal strings where it expects typed values (e.g.,
"${RPK_TLS_ENABLED}" instead of a boolean), causing lint to fail and
the pod to CrashLoopBackOff.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Force-pushed from ac78d3a to 026d073.
The Pipeline_produces_to_Redpanda_via_clusterRef acceptance test failed consistently because Redpanda defaults auto_create_topics_enabled to false. The producer pipeline could not auto-create the target topic, so no messages were ever delivered.

- Pre-create the pipeline-produce-test topic before running the producer pipeline, matching the pattern used by the consumer scenario
- Remove misleading "Found topic" logs from ExpectTopic/ExpectNoTopic that printed unconditionally even when the topic was not found
- Increase the checkTopic timeout from 10s to 30s for CI stability
- Handle NotFound/Conflict errors during finalizer removal to avoid noisy UID precondition errors when pipelines are deleted concurrently

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add spec.budget field to Pipeline with maxUnavailable/minAvailable options, following the convention used by Strimzi and Prometheus Operator. The PDB is rendered by the Syncer alongside the Deployment and ConfigMap, so it is automatically garbage-collected on CR deletion. CRD validation enforces exactly one of maxUnavailable or minAvailable via CEL rule. RBAC updated for policy/poddisruptionbudgets. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Force-pushed from 4dc1932 to 5ca7350.
Changes since last comment: new feature, bug fixes, PR description updates.
Summary
Introduces the `Pipeline` custom resource (shortName: `rpcn`) for managing Redpanda Connect pipelines via the Redpanda Operator. This enables declarative pipeline lifecycle management through Kubernetes CRDs, gated behind an enterprise license for RPCN.

What's included
CRD (`Pipeline`):
- `spec.configYaml` — Redpanda Connect pipeline configuration in YAML
- `spec.replicas` — number of pipeline replicas (default: 1)
- `spec.image` — container image override (default: `redpandadata/connect:4.87.0`)
- `spec.paused` — scales replicas to 0 when true
- `spec.resources` — compute resource requirements
- `spec.env` — additional environment variables
- `spec.secretRef` — Kubernetes Secrets to inject as environment variables
- `spec.cluster` — optional ClusterSource reference to a Redpanda cluster
- `spec.zones` — availability zones for pod spreading
- `spec.annotations` — pod-level annotations (e.g., for Datadog autodiscovery), merged with `commonAnnotations`
- `spec.tolerations` / `spec.nodeSelector` / `spec.topologySpreadConstraints` — scheduling controls
- `spec.displayName` / `spec.description` / `spec.tags` / `spec.configFiles` — Cloud migration-compatible metadata

ClusterRef — Connect Pipelines to Redpanda Clusters:
When a Pipeline references a Redpanda cluster via `spec.cluster.clusterRef`, the operator automatically:
- resolves the cluster's connection details (brokers, TLS, SASL) and injects them as environment variables
- mounts the cluster's TLS CA certificate at `/etc/tls/certs/ca/`

This enables seamless connectivity to operator-managed Redpanda clusters using the `redpanda` input, `redpanda` output, `redpanda_migrator` input, and `redpanda_migrator` output.

Injected environment variables:
- `RPK_BROKERS`
- `RPK_TLS_ENABLED` (`true` or `false`)
- `RPK_TLS_ROOT_CAS_FILE`
- `RPK_SASL_MECHANISM`
- `RPK_SASL_USER`
- `RPK_SASL_PASSWORD`

Controller:
- Reconciles `Pipeline` CRs using `kube.Ctl` and server-side apply (SSA) semantics
- Uses `kube.Syncer` for child resource lifecycle management (ConfigMap, Deployment)
- Sub-resources defined in a `render` struct implementing `kube.Renderer`
- Status updates via the `utils.StatusConditionConfigs` helper — no swallowed errors
- License validation via `common-go/license` v1: the license must include the `CONNECT` product, allow enterprise features, and be unexpired
- Gated behind the `--enable-connect` flag (default: `false`)

Typed Status Conditions:
- `PipelinePhase` is a typed enum: `Pending`, `Provisioning`, `Running`, `Stopped`, `Unknown`
- Condition types: `Ready`, `ConfigValid`, `ClusterRef`
- Condition reasons: `Running`, `Provisioning`, `Paused`, `LicenseInvalid`, `Failed`, `ConfigValid`, `ConfigInvalid`, `ClusterRefResolved`, `ClusterRefInvalid`

Prometheus Monitoring (PodMonitor):
- A PodMonitor is created when `connectController.monitoring.enabled` is true
- Scrapes the `/metrics` endpoint on port 4195

Configuration Lint Validation:
- A `lint` init container runs `/redpanda-connect lint` before the main container starts
- The controller inspects init container statuses (including `LastTerminationState`) and surfaces lint errors via a `ConfigValid` condition

Helm Chart Integration:
- RBAC rules for `pipelines`, `redpandas` (for clusterRef), `deployments`, `configmaps`, `pods`, `secrets`, and `podmonitors`
- Controller toggled via `connectController.enabled` in the chart values
Based on the pipeline controller in
cloudv2/apps/redpanda-connect-api, adapted to operator patterns (SSA, kube.Ctl, kube.Syncer, render package, typed conditions, RBAC in helm chart).CLAUDE.md
Added a "Creating a New CRD" section documenting the conventions to follow for future CRD additions.
Try it out
A pre-built operator image is available at
`yongshin/redpanda-operator:pipeline-crd` (linux/arm64).

Step 1: Check out the branch and install CRDs
Step 2: Create a license Secret
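The code block for this step was lost in rendering; a minimal sketch of the Secret, assuming the chart reads the license from a key named `license` (names and namespace are illustrative — check the chart docs for the exact key):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: redpanda-license
  namespace: redpanda        # illustrative namespace
stringData:
  license: "<your-enterprise-license-key>"   # key name assumed
```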
Step 3: Deploy the operator with the pre-built image
Step 4: Deploy a Connect pipeline
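A minimal Pipeline manifest for this step (apiVersion/kind inferred from the CRD description above; the config just generates messages to stdout):

```yaml
apiVersion: cluster.redpanda.com/v1alpha2
kind: Pipeline
metadata:
  name: hello-pipeline
spec:
  replicas: 1
  configYaml: |
    input:
      generate:
        interval: 1s
        mapping: root.message = "hello world"
    output:
      stdout: {}
```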
Step 5: Verify the pipeline is running
Clean up
Usage guide
Prerequisites
- The Connect controller must be enabled via `--enable-connect` (disabled by default).
- An enterprise license must be configured via `enterprise.licenseSecretRef` in the operator Helm chart values.

Configure the license and enable the Connect controller
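A sketch of the corresponding Helm values, using the `connectController.enabled` and `enterprise.licenseSecretRef` values introduced in this PR (the exact shape of `licenseSecretRef` — name/key — is an assumption):

```yaml
# values.yaml for the redpanda-operator chart
connectController:
  enabled: true
enterprise:
  licenseSecretRef:
    name: redpanda-license   # Secret created earlier
    key: license             # assumed key name; must match the Secret
```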
Create a Connect pipeline
Monitor the pipeline
Status phases:
- `Pending`
- `Provisioning`
- `Running`
- `Stopped` (when `spec.paused: true`)

Pause / resume a pipeline
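Pausing is a spec change, so it can be done with a plain edit of the CR:

```yaml
spec:
  paused: true    # controller scales the Deployment to 0; phase becomes Stopped
```

Setting `paused: false` (or removing the field) scales the Deployment back up and returns the phase to `Running`.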
Spread pods across availability zones
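From the commit message earlier in this PR, zone spreading is configured like this; the controller derives a node affinity and a `maxSkew=1`, `ScheduleAnyway` topology spread constraint from `zones`:

```yaml
spec:
  replicas: 3
  zones: ["us-east-1a", "us-east-1b", "us-east-1c"]
```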
Connect to a Redpanda cluster via clusterRef
Reference an operator-managed Redpanda cluster. The operator resolves broker addresses, TLS, and bootstrap SASL credentials automatically:
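The example block for this section did not survive rendering; a sketch using the `${RPK_*}` variables documented above — the `clusterRef` shape, cluster name, and topic are assumptions:

```yaml
apiVersion: cluster.redpanda.com/v1alpha2
kind: Pipeline
metadata:
  name: orders-consumer
spec:
  cluster:
    clusterRef:
      name: my-redpanda            # operator-managed Redpanda CR (assumed name)
  configYaml: |
    input:
      redpanda:
        seed_brokers: [ "${RPK_BROKERS}" ]
        topics: [ "orders" ]
        consumer_group: orders-consumer
        tls:
          enabled: ${RPK_TLS_ENABLED}
          root_cas_file: "${RPK_TLS_ROOT_CAS_FILE}"
        sasl:
          - mechanism: "${RPK_SASL_MECHANISM}"
            username: "${RPK_SASL_USER}"
            password: "${RPK_SASL_PASSWORD}"
    output:
      stdout: {}
```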
Using dedicated user credentials (non-admin) with clusterRef
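This section's example block did not survive rendering; a sketch of the pattern it describes — the Secret keys, resource names, and `spec.secretRef` shape are illustrative:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: pipeline-user-creds
stringData:
  PIPELINE_SASL_USER: orders-pipeline
  PIPELINE_SASL_PASSWORD: "s3cr3t"
---
apiVersion: cluster.redpanda.com/v1alpha2
kind: Pipeline
metadata:
  name: orders-consumer
spec:
  cluster:
    clusterRef:
      name: my-redpanda
  secretRef:
    - name: pipeline-user-creds   # keys become env vars in the Connect pod
  configYaml: |
    input:
      redpanda:
        seed_brokers: [ "${RPK_BROKERS}" ]
        topics: [ "orders" ]
        sasl:
          - mechanism: SCRAM-SHA-256   # set directly, not taken from RPK_SASL_MECHANISM
            username: "${PIPELINE_SASL_USER}"
            password: "${PIPELINE_SASL_PASSWORD}"
    output:
      stdout: {}
```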
By default, `clusterRef` injects the cluster's bootstrap (admin) SASL credentials via `RPK_SASL_*` env vars. For least-privilege access, create a dedicated `User` CRD and store both its username and password in a Secret. Then reference that Secret via `spec.secretRef` or `spec.env` and configure the SASL mechanism directly in your pipeline config. Alternatively, use
`spec.env` to map specific Secret keys to custom env var names.

Passing secrets to a Pipeline
Pipelines often need credentials (e.g., Kafka passwords, API keys). There are two approaches:
Option A: Reference an entire Secret (`spec.secretRef`)

All key-value pairs in each referenced Secret are injected as environment variables. The pipeline config can reference them using `${VAR_NAME}` interpolation.

Option B: Reference individual Secret keys (`spec.env`)
Unlike `spec.secretRef`, which injects every key in a Secret, `spec.env` with `secretKeyRef` injects only the selected keys under explicit env var names.

Common annotations for Gatekeeper compliance
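The example for this section was lost in rendering; a sketch assuming the chart-level `commonAnnotations` value and the `spec.annotations` field described above, where per-pipeline values win on key conflicts:

```yaml
# Chart values: annotations applied to rendered resources
commonAnnotations:
  compliance.example.com/owner: platform-team   # illustrative policy key
---
# Pipeline: pod-template-only annotations, merged over commonAnnotations
spec:
  annotations:
    compliance.example.com/owner: data-eng      # overrides the common value on the pod
```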
Monitoring Pipeline metrics with Prometheus
Add Prometheus metrics to your pipeline config:
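Redpanda Connect serves Prometheus metrics when a `metrics` section is present in the config; combined with `connectController.monitoring.enabled`, the PodMonitor scrapes `/metrics` on port 4195:

```yaml
spec:
  configYaml: |
    metrics:
      prometheus: {}   # serve Prometheus metrics on the Connect HTTP server (port 4195)
    input:
      generate:
        interval: 1s
        mapping: root = {"tick": true}
    output:
      stdout: {}
```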
Monitoring Pipeline metrics with Datadog
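This section's example was lost; a sketch using `spec.annotations` with Datadog's autodiscovery annotation format — the container name in the annotation key (`pipeline`) and the check config are assumptions to adapt to your setup:

```yaml
spec:
  annotations:
    ad.datadoghq.com/pipeline.checks: |
      {
        "openmetrics": {
          "instances": [{
            "openmetrics_endpoint": "http://%%host%%:4195/metrics",
            "namespace": "redpanda_connect",
            "metrics": [".*"]
          }]
        }
      }
```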
Configuration lint validation
License validation and troubleshooting
All license failures surface as `Ready=False` with reason `LicenseInvalid`; the condition message indicates the cause:
- `no license configured: set enterprise.licenseSecretRef...` — set `enterprise.licenseSecretRef` in the operator Helm values
- `failed to read license`
- `license expired`
- `license does not allow enterprise features`
- `license does not include Redpanda Connect`
The Pipeline controller creates a standard Kubernetes Deployment, so most failure recovery is handled by Kubernetes built-in controllers rather than the operator itself. The operator does not implement custom node-failure detection or pod rescheduling — it delegates that entirely to the Deployment abstraction and observes the resulting state on each reconciliation cycle.
Node failure (detailed walkthrough)
When a node running pipeline pods fails:
- After `node-monitor-grace-period` (default 40s), the node controller marks the node as `NotReady`.
- After `pod-eviction-timeout` (default 5 minutes), the node controller taints the node with `node.kubernetes.io/unreachable:NoExecute`. Pods without a matching toleration are evicted.
- The Deployment controller schedules replacement pods on healthy nodes, and the Pipeline status follows (`Provisioning` while new pods start, then `Running` once ready).
spec.zonesis configured, replacement pods respect the zone node affinity and topology spread constraint (ScheduleAnyway), so they will prefer spreading across zones but will not block scheduling if a zone is entirely unavailable. Ifspec.budgetis configured, the PDB limits how many pods can be simultaneously evicted during voluntary disruptions (node drains), but does not affect involuntary evictions from node failures.NotReady→ eviction timeout → pods rescheduled on healthy nodesCrashLoopBackOff)/readyon port 4195) gates traffic. Operator updates status conditions on next reconcile.lintinit container exits non-zero → pod stays inInit:Error→ operator setsConfigValid=Falsecondition with lint outputReady=FalsewithLicenseInvalidreasonClusterRef=Falsecondition with error details, skips Deployment reconciliationRecreatestrategy kills all old pods before creating new ones — if new pods fail readiness, rollout stallsspec.paused: true)Stoppedphasespec.paused: false.Key design decisions:
spec.budget(PodDisruptionBudget) — configurable via the CRD withmaxUnavailableorminAvailable. Protects against voluntary disruptions (node drain, cluster autoscaler) but does not affect involuntary evictions. The PDB is rendered by the Syncer alongside the Deployment and ConfigMap, so it is automatically garbage-collected on CR deletion. CEL validation enforces exactly one of the two fields.Recreatestrategy — chosen overRollingUpdateto avoid running two pipeline instances concurrently that might double-process messages. This means updates cause brief downtime. Note that a PDB withmaxUnavailable: 1will slow down voluntary drains but does not conflict with the Recreate strategy (which is the operator's own update path, not a voluntary disruption).Test plan
go build ./...passes inoperator/andacceptance/connectController.enabled, monitoring, and common annotationstask lint)🤖 Generated with Claude Code