
Commit 7f86fa0

L3n41c and claude authored
kubectl-datadog: redistribute autoscaling cluster packages by concern (#3012)
* kubectl-datadog: extract EKS helpers to common/eks/

  Continue the redistribution of `guess/` started by PRs #2980 and #3000: move every helper that operates on an EKS cluster object or the EKS API into a dedicated `common/eks/` package. After this commit, `common/eks/` contains:

  - authmode.go (was guess/clusterauthmode.go)
  - podidentityagent.go (was guess/ekspodidentityagent.go)
  - oidcprovider.go (was guess/oidcprovider.go)
  - privatesubnets.go (was guess/privatesubnets.go)

  The import alias `commoneks` is used to avoid colliding with the existing `github.com/aws/aws-sdk-go-v2/service/eks` (aliased `eks`).

* kubectl-datadog: consolidate aws-auth into common/awsauth/

  `common/aws/aws-auth.go` was misnamed: it manipulates a Kubernetes ConfigMap (`kube-system/aws-auth`), not an AWS-SDK resource. Together with `guess/aws-auth.go` (a read-only presence check), it belongs in a dedicated `common/awsauth/` package: both files operate on the same ConfigMap. Function names lose the now-redundant `AwsAuth` prefix:

  - `IsAwsAuthConfigMapPresent` -> `IsConfigMapPresent`
  - `EnsureAwsAuthRole` -> `EnsureRole`
  - `RemoveAwsAuthRole` -> `RemoveRole`

  `common/aws/` now contains only cloudformation.go (pure AWS-SDK).

* kubectl-datadog: fold clustername into common/clients/

  The only caller of `guess.GetClusterNameFromKubeconfig` is the wrapper `clients.GetClusterNameFromKubeconfig` in `common/clients/clients.go`. Move the helper next to its caller as an unexported function, `clusterNameFromKubeconfig`. The public API of `common/clients/` is unchanged.

* kubectl-datadog: carve out karpenter package

  Consolidate everything Karpenter-shaped into `common/karpenter/`, alongside the existing detection helper:

  - Intermediate model: NodePoolsSet, EC2NodeClass, NodePool, MetadataOptions, BlockDeviceMapping (from guess/nodepoolsset.go)
  - Producers from EKS node groups: GetNodeGroupsProperties (from guess/nodegroupproperties.go -> fromnodegroups.go)
  - Producers from running k8s Nodes: GetNodesProperties (from guess/nodesproperties.go -> fromnodes.go)
  - CR builders: CreateOrUpdateEC2NodeClass, CreateOrUpdateNodePool (from cluster/k8s/{ec2nodeclass,nodepool}.go)

  `cluster/k8s/` is gone (it was misnamed: those files build Karpenter CRDs, they aren't generic k8s primitives). `common/k8s/` keeps its narrow scope: generic object CRUD + the deployment-finder.

* kubectl-datadog: awsauth: extract ConfigMap name/namespace constants

  `kube-system` and `aws-auth` each appeared 5 times across the three functions. Promote them to package-level constants so the package's single-ConfigMap scope is explicit at a glance.

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
1 parent b9f5936 commit 7f86fa0

23 files changed

Lines changed: 78 additions & 86 deletions

cmd/kubectl-datadog/autoscaling/cluster/apply/run.go

Lines changed: 14 additions & 14 deletions
```diff
@@ -27,14 +27,14 @@ import (
 	"sigs.k8s.io/controller-runtime/pkg/log/zap"

 	"github.com/DataDog/datadog-operator/cmd/kubectl-datadog/autoscaling/cluster/common/aws"
+	"github.com/DataDog/datadog-operator/cmd/kubectl-datadog/autoscaling/cluster/common/awsauth"
 	"github.com/DataDog/datadog-operator/cmd/kubectl-datadog/autoscaling/cluster/common/clients"
 	"github.com/DataDog/datadog-operator/cmd/kubectl-datadog/autoscaling/cluster/common/clusterinfo"
 	"github.com/DataDog/datadog-operator/cmd/kubectl-datadog/autoscaling/cluster/common/display"
+	commoneks "github.com/DataDog/datadog-operator/cmd/kubectl-datadog/autoscaling/cluster/common/eks"
 	"github.com/DataDog/datadog-operator/cmd/kubectl-datadog/autoscaling/cluster/common/eksautomode"
 	"github.com/DataDog/datadog-operator/cmd/kubectl-datadog/autoscaling/cluster/common/helm"
 	"github.com/DataDog/datadog-operator/cmd/kubectl-datadog/autoscaling/cluster/common/karpenter"
-	"github.com/DataDog/datadog-operator/cmd/kubectl-datadog/autoscaling/cluster/guess"
-	"github.com/DataDog/datadog-operator/cmd/kubectl-datadog/autoscaling/cluster/k8s"
 	"github.com/DataDog/datadog-operator/pkg/plugin/common"
 	"github.com/DataDog/datadog-operator/pkg/version"
@@ -178,7 +178,7 @@ func createCloudFormationStacks(ctx context.Context, cli *clients.Clients, opts
 		return "", fmt.Errorf("failed to describe cluster %s: %w", opts.ClusterName, err)
 	}
 	cluster := describeOut.Cluster
-	supportsAPIAuth := guess.SupportsAPIAuthenticationMode(cluster)
+	supportsAPIAuth := commoneks.SupportsAPIAuthenticationMode(cluster)

 	ddStackName := DDKarpenterStackName(opts.ClusterName)
 	ddStack, err := aws.GetStack(ctx, cli.CloudFormation, ddStackName)
@@ -192,7 +192,7 @@ func createCloudFormationStacks(ctx context.Context, cli *clients.Clients, opts

 	switch opts.InstallMode {
 	case InstallModeExistingNodes:
-		isUnmanagedEKSPIAInstalled, err := guess.IsThereUnmanagedEKSPodIdentityAgentInstalled(ctx, cli.EKS, opts.ClusterName)
+		isUnmanagedEKSPIAInstalled, err := commoneks.IsThereUnmanagedEKSPodIdentityAgentInstalled(ctx, cli.EKS, opts.ClusterName)
 		if err != nil {
 			return "", fmt.Errorf("failed to check if EKS pod identity agent is installed: %w", err)
 		}
@@ -207,18 +207,18 @@ func createCloudFormationStacks(ctx context.Context, cli *clients.Clients, opts
 		return "", nil

 	case InstallModeFargate:
-		issuerURL, err := guess.GetClusterOIDCIssuerURL(cluster)
+		issuerURL, err := commoneks.GetClusterOIDCIssuerURL(cluster)
 		if err != nil {
 			return "", fmt.Errorf("failed to get cluster OIDC issuer URL: %w", err)
 		}
-		oidcArn, err := guess.EnsureOIDCProvider(ctx, cli.IAM, issuerURL)
+		oidcArn, err := commoneks.EnsureOIDCProvider(ctx, cli.IAM, issuerURL)
 		if err != nil {
 			return "", fmt.Errorf("failed to ensure OIDC provider: %w", err)
 		}

 		subnets := opts.FargateSubnets
 		if len(subnets) == 0 {
-			subnets, err = guess.GetClusterPrivateSubnets(ctx, cli.EC2, cluster)
+			subnets, err = commoneks.GetClusterPrivateSubnets(ctx, cli.EC2, cluster)
 			if err != nil {
 				return "", fmt.Errorf("failed to discover private subnets: %w", err)
 			}
@@ -298,7 +298,7 @@ func checkFargateStackImmutability(stack *aws.Stack, namespace string, subnets [
 }

 func updateAwsAuthConfigMap(ctx context.Context, cli *clients.Clients, clusterName string) error {
-	awsAuthConfigMapPresent, err := guess.IsAwsAuthConfigMapPresent(ctx, cli.K8sClientset)
+	awsAuthConfigMapPresent, err := awsauth.IsConfigMapPresent(ctx, cli.K8sClientset)
 	if err != nil {
 		return fmt.Errorf("failed to check if aws-auth ConfigMap is present: %w", err)
 	}
@@ -314,7 +314,7 @@ func updateAwsAuthConfigMap(ctx context.Context, cli *clients.Clients, clusterNa
 	}

 	// Add role mapping in the `aws-auth` ConfigMap
-	if err = aws.EnsureAwsAuthRole(ctx, cli.K8sClientset, aws.RoleMapping{
+	if err = awsauth.EnsureRole(ctx, cli.K8sClientset, awsauth.RoleMapping{
 		RoleArn:  "arn:aws:iam::" + accountID + ":role/KarpenterNodeRole-" + clusterName,
 		Username: "system:node:{{EC2PrivateDNSName}}",
 		Groups:   []string{"system:bootstrappers", "system:nodes"},
@@ -423,18 +423,18 @@ func createNodePoolResources(ctx context.Context, streams genericclioptions.IOSt
 		return nil
 	}

-	var nodePoolsSet *guess.NodePoolsSet
+	var nodePoolsSet *karpenter.NodePoolsSet
 	var err error

 	switch opts.InferenceMethod {
 	case InferenceMethodNodes:
-		nodePoolsSet, err = guess.GetNodesProperties(ctx, cli.K8sClientset, cli.EC2)
+		nodePoolsSet, err = karpenter.GetNodesProperties(ctx, cli.K8sClientset, cli.EC2)
 		if err != nil {
 			return fmt.Errorf("failed to gather nodes properties: %w", err)
 		}

 	case InferenceMethodNodeGroups:
-		nodePoolsSet, err = guess.GetNodeGroupsProperties(ctx, cli.EKS, cli.EC2, opts.ClusterName)
+		nodePoolsSet, err = karpenter.GetNodeGroupsProperties(ctx, cli.EKS, cli.EC2, opts.ClusterName)
 		if err != nil {
 			return fmt.Errorf("failed to gather node groups properties: %w", err)
 		}
@@ -446,15 +446,15 @@ func createNodePoolResources(ctx context.Context, streams genericclioptions.IOSt
 	if opts.CreateKarpenterResources == CreateKarpenterResourcesEC2NodeClass || opts.CreateKarpenterResources == CreateKarpenterResourcesAll {
 		for _, nc := range nodePoolsSet.GetEC2NodeClasses() {
-			if err = k8s.CreateOrUpdateEC2NodeClass(ctx, cli.K8sClient, opts.ClusterName, nc); err != nil {
+			if err = karpenter.CreateOrUpdateEC2NodeClass(ctx, cli.K8sClient, opts.ClusterName, nc); err != nil {
 				return fmt.Errorf("failed to create or update EC2NodeClass %s: %w", nc.GetName(), err)
 			}
 		}
 	}

 	if opts.CreateKarpenterResources == CreateKarpenterResourcesAll {
 		for _, np := range nodePoolsSet.GetNodePools() {
-			if err = k8s.CreateOrUpdateNodePool(ctx, cli.K8sClient, np); err != nil {
+			if err = karpenter.CreateOrUpdateNodePool(ctx, cli.K8sClient, np); err != nil {
 				return fmt.Errorf("failed to create or update NodePool %s: %w", np.GetName(), err)
 			}
 		}
```

cmd/kubectl-datadog/autoscaling/cluster/common/aws/aws-auth.go renamed to cmd/kubectl-datadog/autoscaling/cluster/common/awsauth/awsauth.go

Lines changed: 29 additions & 7 deletions
```diff
@@ -1,4 +1,8 @@
-package aws
+// Package awsauth reads and mutates the kube-system/aws-auth ConfigMap that
+// EKS clusters use (in CONFIG_MAP authentication mode) to grant IAM principals
+// access to the Kubernetes API. Despite the AWS-flavored name, this is purely
+// a Kubernetes operation — the ConfigMap lives in the cluster, not in AWS.
+package awsauth

 import (
 	"context"
@@ -7,18 +11,36 @@ import (
 	"slices"

 	"gopkg.in/yaml.v3"
+	apierrors "k8s.io/apimachinery/pkg/api/errors"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	"k8s.io/client-go/kubernetes"
 )

+const (
+	configMapNamespace = "kube-system"
+	configMapName      = "aws-auth"
+)
+
 type RoleMapping struct {
 	RoleArn  string   `yaml:"rolearn"`
 	Username string   `yaml:"username"`
 	Groups   []string `yaml:"groups"`
 }

-func EnsureAwsAuthRole(ctx context.Context, clientset kubernetes.Interface, roleMapping RoleMapping) error {
-	cm, err := clientset.CoreV1().ConfigMaps("kube-system").Get(ctx, "aws-auth", metav1.GetOptions{})
+// IsConfigMapPresent checks if the aws-auth ConfigMap exists in the kube-system namespace.
+func IsConfigMapPresent(ctx context.Context, clientset *kubernetes.Clientset) (bool, error) {
+	if _, err := clientset.CoreV1().ConfigMaps(configMapNamespace).Get(ctx, configMapName, metav1.GetOptions{}); err != nil {
+		if apierrors.IsNotFound(err) {
+			return false, nil
+		}
+		return false, fmt.Errorf("failed to get aws-auth ConfigMap: %w", err)
+	}
+
+	return true, nil
+}
+
+func EnsureRole(ctx context.Context, clientset kubernetes.Interface, roleMapping RoleMapping) error {
+	cm, err := clientset.CoreV1().ConfigMaps(configMapNamespace).Get(ctx, configMapName, metav1.GetOptions{})
 	if err != nil {
 		return fmt.Errorf("failed to get aws-auth ConfigMap: %w", err)
 	}
@@ -48,7 +70,7 @@ func EnsureAwsAuthRole(ctx context.Context, clientset kubernetes.Interface, role

 	cm.Data["mapRoles"] = string(updated)

-	if _, err := clientset.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
+	if _, err := clientset.CoreV1().ConfigMaps(configMapNamespace).Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
 		return fmt.Errorf("failed to update aws-auth ConfigMap: %w", err)
 	}

@@ -57,8 +79,8 @@ func EnsureAwsAuthRole(ctx context.Context, clientset kubernetes.Interface, role
 	return nil
 }

-func RemoveAwsAuthRole(ctx context.Context, clientset kubernetes.Interface, roleArn string) error {
-	cm, err := clientset.CoreV1().ConfigMaps("kube-system").Get(ctx, "aws-auth", metav1.GetOptions{})
+func RemoveRole(ctx context.Context, clientset kubernetes.Interface, roleArn string) error {
+	cm, err := clientset.CoreV1().ConfigMaps(configMapNamespace).Get(ctx, configMapName, metav1.GetOptions{})
 	if err != nil {
 		return fmt.Errorf("failed to get aws-auth ConfigMap: %w", err)
 	}
@@ -90,7 +112,7 @@ func RemoveAwsAuthRole(ctx context.Context, clientset kubernetes.Interface, role

 	cm.Data["mapRoles"] = string(updated)

-	if _, err := clientset.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
+	if _, err := clientset.CoreV1().ConfigMaps(configMapNamespace).Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
 		return fmt.Errorf("failed to update aws-auth ConfigMap: %w", err)
 	}
```
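The mapRoles handling that `EnsureRole` performs (parse, append unless the RoleArn is already mapped, re-serialize) can be sketched in a self-contained way. Note the sketch substitutes `encoding/json` for the package's real `gopkg.in/yaml.v3` dependency and operates on a plain string instead of a live ConfigMap, so `ensureRole` here is illustrative, not the package API:

```go
package main

import (
	"encoding/json"
	"fmt"
	"slices"
)

// RoleMapping mirrors the entries stored under the mapRoles key of the
// aws-auth ConfigMap.
type RoleMapping struct {
	RoleArn  string   `json:"rolearn"`
	Username string   `json:"username"`
	Groups   []string `json:"groups"`
}

// ensureRole appends newRole to the serialized role list unless a mapping
// with the same RoleArn already exists, making the operation idempotent.
func ensureRole(mapRoles string, newRole RoleMapping) (string, error) {
	var roles []RoleMapping
	if err := json.Unmarshal([]byte(mapRoles), &roles); err != nil {
		return "", fmt.Errorf("failed to parse mapRoles: %w", err)
	}
	if !slices.ContainsFunc(roles, func(r RoleMapping) bool {
		return r.RoleArn == newRole.RoleArn
	}) {
		roles = append(roles, newRole)
	}
	updated, err := json.Marshal(roles)
	if err != nil {
		return "", fmt.Errorf("failed to serialize mapRoles: %w", err)
	}
	return string(updated), nil
}

func main() {
	doc, _ := ensureRole(`[]`, RoleMapping{
		RoleArn:  "arn:aws:iam::123456789012:role/KarpenterNodeRole-demo",
		Username: "system:node:{{EC2PrivateDNSName}}",
		Groups:   []string{"system:bootstrappers", "system:nodes"},
	})
	// A second call with the same RoleArn leaves the document unchanged.
	again, _ := ensureRole(doc, RoleMapping{RoleArn: "arn:aws:iam::123456789012:role/KarpenterNodeRole-demo"})
	fmt.Println(again == doc) // prints: true
}
```

The real `EnsureRole` applies the same check to `cm.Data["mapRoles"]` and writes the result back with a ConfigMap `Update` call, as the diff above shows.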

cmd/kubectl-datadog/autoscaling/cluster/common/aws/aws-auth_test.go renamed to cmd/kubectl-datadog/autoscaling/cluster/common/awsauth/awsauth_test.go

Lines changed: 5 additions & 5 deletions
```diff
@@ -1,4 +1,4 @@
-package aws
+package awsauth

 import (
 	"testing"
@@ -11,7 +11,7 @@ import (
 	"k8s.io/client-go/kubernetes/fake"
 )

-func TestEnsureAwsAuthRole(t *testing.T) {
+func TestEnsureRole(t *testing.T) {
 	for _, tc := range []struct {
 		name          string
 		existingRoles []RoleMapping
@@ -90,7 +90,7 @@ func TestEnsureAwsAuthRole(t *testing.T) {
 			_, err = clientset.CoreV1().ConfigMaps("kube-system").Create(t.Context(), cm, metav1.CreateOptions{})
 			require.NoError(t, err)

-			err = EnsureAwsAuthRole(t.Context(), clientset, tc.newRole)
+			err = EnsureRole(t.Context(), clientset, tc.newRole)

 			if tc.expectError {
 				assert.Error(t, err)
@@ -123,7 +123,7 @@ func TestEnsureAwsAuthRole(t *testing.T) {
 	}
 }

-func TestEnsureAwsAuthRole_ConfigMapNotFound(t *testing.T) {
+func TestEnsureRole_ConfigMapNotFound(t *testing.T) {
 	clientset := fake.NewSimpleClientset()

 	roleMapping := RoleMapping{
@@ -132,7 +132,7 @@ func TestEnsureAwsAuthRole_ConfigMapNotFound(t *testing.T) {
 		Groups:   []string{"system:bootstrappers", "system:nodes"},
 	}

-	err := EnsureAwsAuthRole(t.Context(), clientset, roleMapping)
+	err := EnsureRole(t.Context(), clientset, roleMapping)
 	assert.Error(t, err)
 	assert.Contains(t, err.Error(), "failed to get aws-auth ConfigMap")
 }
```

cmd/kubectl-datadog/autoscaling/cluster/common/clients/clients.go

Lines changed: 1 addition & 3 deletions
```diff
@@ -30,8 +30,6 @@ import (
 	"sigs.k8s.io/controller-runtime/pkg/client"
 	"sigs.k8s.io/controller-runtime/pkg/client/apiutil"
 	karpv1 "sigs.k8s.io/karpenter/pkg/apis/v1"
-
-	"github.com/DataDog/datadog-operator/cmd/kubectl-datadog/autoscaling/cluster/guess"
 )

 // Clients holds all AWS and Kubernetes client instances needed for
@@ -139,7 +137,7 @@ func GetClusterNameFromKubeconfig(configFlags *genericclioptions.ConfigFlags) (s
 		return "", err
 	}

-	return guess.GetClusterNameFromKubeconfig(kubeRawConfig, kubeContext), nil
+	return clusterNameFromKubeconfig(kubeRawConfig, kubeContext), nil
 }

 // ResolveClusterName returns explicit when non-empty, otherwise infers the
```

cmd/kubectl-datadog/autoscaling/cluster/guess/clustername.go renamed to cmd/kubectl-datadog/autoscaling/cluster/common/clients/clustername.go

Lines changed: 4 additions & 3 deletions
```diff
@@ -1,13 +1,14 @@
-package guess
+package clients

 import (
 	"strings"

 	"k8s.io/client-go/tools/clientcmd/api"
 )

-// GetClusterNameFromKubeconfig attempts to extract the EKS cluster name from the current kubectl context
-func GetClusterNameFromKubeconfig(rawConfig api.Config, kubeContext string) (clusterName string) {
+// clusterNameFromKubeconfig attempts to extract the EKS cluster name from the
+// given kubectl context.
+func clusterNameFromKubeconfig(rawConfig api.Config, kubeContext string) (clusterName string) {
 	if kubeContext == "" {
 		kubeContext = rawConfig.CurrentContext
 	}
```

cmd/kubectl-datadog/autoscaling/cluster/guess/clustername_test.go renamed to cmd/kubectl-datadog/autoscaling/cluster/common/clients/clustername_test.go

Lines changed: 3 additions & 3 deletions
```diff
@@ -1,4 +1,4 @@
-package guess
+package clients

 import (
 	"testing"
@@ -8,7 +8,7 @@ import (
 	"k8s.io/client-go/tools/clientcmd"
 )

-func TestGetClusterNameFromKubeConfig(t *testing.T) {
+func TestClusterNameFromKubeConfig(t *testing.T) {
 	testcases := []struct {
 		name       string
 		kubeConfig string
@@ -37,7 +37,7 @@ func TestGetClusterNameFromKubeConfig(t *testing.T) {
 			)
 			rawConfig, err := clientConfig.RawConfig()
 			require.NoError(t, err)
-			assert.Equal(t, tc.clusterName, GetClusterNameFromKubeconfig(rawConfig, tc.kubeContext))
+			assert.Equal(t, tc.clusterName, clusterNameFromKubeconfig(rawConfig, tc.kubeContext))
 		})
 	}
 }
```

cmd/kubectl-datadog/autoscaling/cluster/guess/testdata/kubeconfig-eks.yaml renamed to cmd/kubectl-datadog/autoscaling/cluster/common/clients/testdata/kubeconfig-eks.yaml

File renamed without changes.
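For context on what `clusterNameFromKubeconfig` has to undo: kubeconfig entries created by `aws eks update-kubeconfig` typically identify the cluster by its ARN. A plausible, self-contained sketch of that extraction step (a simplification for illustration, not the actual implementation, which works on the kubeconfig's context and cluster entries):

```go
package main

import (
	"fmt"
	"strings"
)

// clusterNameFromARN returns the trailing name segment of an EKS cluster ARN
// (arn:aws:eks:<region>:<account>:cluster/<name>). Identifiers that are not
// ARNs are returned unchanged.
func clusterNameFromARN(cluster string) string {
	if strings.HasPrefix(cluster, "arn:") {
		if idx := strings.LastIndex(cluster, "/"); idx >= 0 {
			return cluster[idx+1:]
		}
	}
	return cluster
}

func main() {
	fmt.Println(clusterNameFromARN("arn:aws:eks:us-east-1:123456789012:cluster/my-cluster")) // my-cluster
	fmt.Println(clusterNameFromARN("minikube"))                                              // minikube
}
```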

cmd/kubectl-datadog/autoscaling/cluster/guess/clusterauthmode.go renamed to cmd/kubectl-datadog/autoscaling/cluster/common/eks/authmode.go

Lines changed: 1 addition & 1 deletion
```diff
@@ -1,4 +1,4 @@
-package guess
+package eks

 import (
 	ekstypes "github.com/aws/aws-sdk-go-v2/service/eks/types"
```

cmd/kubectl-datadog/autoscaling/cluster/guess/oidcprovider.go renamed to cmd/kubectl-datadog/autoscaling/cluster/common/eks/oidcprovider.go

Lines changed: 1 addition & 1 deletion
```diff
@@ -1,4 +1,4 @@
-package guess
+package eks

 import (
 	"context"
```

cmd/kubectl-datadog/autoscaling/cluster/guess/oidcprovider_test.go renamed to cmd/kubectl-datadog/autoscaling/cluster/common/eks/oidcprovider_test.go

Lines changed: 1 addition & 1 deletion
```diff
@@ -1,4 +1,4 @@
-package guess
+package eks

 import (
 	"context"
```
