terraform: add aws-eks-operator #44
Changes from 10 commits
@@ -0,0 +1,82 @@
# aws-eks-operator

This example creates the following:

- a VPC and related resources including a NAT Gateway
- an EKS cluster with a managed node group
- a Kubernetes namespace for the [Tailscale operator](https://tailscale.com/kb/1236/kubernetes-operator)
- the Tailscale Kubernetes Operator deployed via [Helm](https://tailscale.com/kb/1236/kubernetes-operator#helm)

## Considerations

- The EKS cluster is configured with both public and private API server access for flexibility
- The Tailscale operator is deployed in a dedicated `tailscale` namespace
- The operator will create a Tailscale device for API server proxy access
- Any additional Tailscale resources (such as ingress controllers) created by the operator will appear in your tailnet

## Prerequisites

- Create a [Tailscale OAuth Client](https://tailscale.com/kb/1215/oauth-clients#setting-up-an-oauth-client) with appropriate scopes
- Ensure you have the AWS CLI configured with appropriate permissions for EKS
- Install `kubectl` for cluster access after deployment
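The prerequisites above can be sanity-checked before running Terraform. The following pre-flight script is an illustrative helper (not part of this example) that reports whether each required CLI tool is on the `PATH`:

```shell
# Pre-flight check (illustrative, not part of this example): report whether
# the CLI tools this example relies on are installed.
for tool in aws terraform kubectl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```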

## To use

Follow the documentation to configure the Terraform providers:

- [AWS](https://registry.terraform.io/providers/hashicorp/aws/latest/docs)
- [Kubernetes](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs)
- [Helm](https://registry.terraform.io/providers/hashicorp/helm/latest/docs)

### Configure variables

Create a `terraform.tfvars` file with your Tailscale OAuth credentials:

```hcl
tailscale_oauth_client_id     = "your-oauth-client-id"
tailscale_oauth_client_secret = "your-oauth-client-secret"
```
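Alternatively, the same credentials can be supplied through Terraform's standard `TF_VAR_` environment-variable convention, which keeps the secrets out of files on disk:

```shell
# Terraform reads TF_VAR_<name> environment variables as values for the
# matching input variables, so no terraform.tfvars entry is needed.
export TF_VAR_tailscale_oauth_client_id="your-oauth-client-id"
export TF_VAR_tailscale_oauth_client_secret="your-oauth-client-secret"
```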

### Deploy

```shell
terraform init
terraform apply

# execute the output from `terraform output cmd_kubectl_ha_proxy_apply` to deploy the HA proxy
```

#### Verify deployment

After deployment, configure kubectl to access your cluster:

```shell
aws eks update-kubeconfig --region $AWS_REGION --name $(terraform output -raw cluster_name)
```

Check that the Tailscale operator is running:

```shell
kubectl get pods -n tailscale
kubectl logs -n tailscale -l app.kubernetes.io/name=tailscale-operator
```

#### Verify connectivity via the [API server proxy](https://tailscale.com/kb/1437/kubernetes-operator-api-server-proxy)

After deployment, configure kubectl to access your cluster using Tailscale:

```shell
# the operator's Tailscale hostname matches the cluster name in this example
tailscale configure kubeconfig $(terraform output -raw cluster_name)
```

```shell
kubectl get pods -n tailscale
```

## To destroy

```shell
# execute the output from `terraform output cmd_kubectl_ha_proxy_delete` to delete the HA proxy

terraform destroy
```
@@ -0,0 +1 @@
data "aws_region" "current" {}
@@ -0,0 +1,122 @@
locals {
  name = "example-${basename(path.cwd)}"

  aws_tags = {
    Name = local.name
  }

  # Modify these to use your own VPC
  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  # EKS cluster configuration
  cluster_version = "1.34" # TODO: omit this?

  node_instance_type = "t3.medium"
  desired_size       = 2
  max_size           = 2
  min_size           = 1

  # Tailscale Operator configuration
  namespace_name                = "tailscale"
  operator_name                 = local.name
  operator_version              = "1.92.4"
  tailscale_oauth_client_id     = var.tailscale_oauth_client_id
  tailscale_oauth_client_secret = var.tailscale_oauth_client_secret
}

# Remove this to use your own VPC.
module "vpc" {
  source = "../internal-modules/aws-vpc"

  name = local.name
  tags = local.aws_tags
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = ">= 21.0, < 22.0"

  name               = local.name
  kubernetes_version = local.cluster_version

  addons = {
    coredns = {}
    eks-pod-identity-agent = {
      before_compute = true
    }
    kube-proxy = {}
    vpc-cni = {
      before_compute = true
    }
  }

  # Once the Tailscale operator is installed, `endpoint_public_access` can be disabled.
  # This is left enabled for the sake of easy adoption.
  endpoint_public_access = true

  # Optional: Adds the current caller identity as an administrator via cluster access entry
  enable_cluster_creator_admin_permissions = true

  vpc_id     = local.vpc_id
  subnet_ids = local.subnet_ids

  eks_managed_node_groups = {
    main = {
      # Starting with 1.30, AL2023 is the default AMI type for EKS managed node groups
      # ami_type = "AL2023_x86_64_STANDARD"
      instance_types = [local.node_instance_type]

      desired_size = local.desired_size
      max_size     = local.max_size
      min_size     = local.min_size
    }
  }

  tags = local.aws_tags
}

# Kubernetes namespace for the Tailscale operator
resource "kubernetes_namespace_v1" "tailscale_operator" {
  metadata {
    name = local.namespace_name
    labels = {
      "pod-security.kubernetes.io/enforce" = "privileged"
    }
  }
}

resource "helm_release" "tailscale_operator" {
  name      = local.operator_name
  namespace = kubernetes_namespace_v1.tailscale_operator.metadata[0].name

  repository = "https://pkgs.tailscale.com/helmcharts"
  chart      = "tailscale-operator"
  version    = local.operator_version

  values = [
    yamlencode({
      operatorConfig = {
        image = {
          repo = "tailscale/k8s-operator"
          tag  = "v${local.operator_version}"
        }
        hostname = local.operator_name
      }
      apiServerProxyConfig = {
        mode = true
        tags = "tag:k8s-operator,tag:k8s-api-server"
      }
    })
  ]

  set_sensitive = [
    {
      name  = "oauth.clientId"
      value = local.tailscale_oauth_client_id
    },
    {
      name  = "oauth.clientSecret"
      value = local.tailscale_oauth_client_secret
    },
  ]
}
@@ -0,0 +1,34 @@
output "vpc_id" {
  description = "VPC ID where the EKS cluster is deployed"
  value       = module.vpc.vpc_id
}

output "cluster_name" {
  description = "EKS cluster name"
  value       = module.eks.cluster_name
}

output "tailscale_operator_namespace" {
  description = "Kubernetes namespace where the Tailscale operator is deployed"
  value       = kubernetes_namespace_v1.tailscale_operator.metadata[0].name
}

output "cmd_kubeconfig_tailscale" {
  description = "Command to configure kubeconfig for Tailscale access to the EKS cluster"
  value       = "tailscale configure kubeconfig ${helm_release.tailscale_operator.name}"
}

output "cmd_kubeconfig_aws" {
  description = "Command to configure kubeconfig for public access to the EKS cluster"
  value       = "aws eks update-kubeconfig --region ${data.aws_region.current.region} --name ${module.eks.cluster_name}"
}

output "cmd_kubectl_ha_proxy_apply" {
  description = "Command to deploy the Tailscale high availability API server proxy - https://tailscale.com/kb/1437/kubernetes-operator-api-server-proxy#configuring-a-high-availability-api-server-proxy"
  value       = "OPERATOR_NAME=${helm_release.tailscale_operator.name} envsubst < tailscale-api-server-ha-proxy.yaml | kubectl apply -f -"
}

output "cmd_kubectl_ha_proxy_delete" {
  description = "Command to delete the Tailscale high availability API server proxy - https://tailscale.com/kb/1437/kubernetes-operator-api-server-proxy#configuring-a-high-availability-api-server-proxy"
  value       = "OPERATOR_NAME=${helm_release.tailscale_operator.name} envsubst < tailscale-api-server-ha-proxy.yaml | kubectl delete -f -"
}
@@ -0,0 +1,10 @@
apiVersion: tailscale.com/v1alpha1
kind: ProxyGroup
metadata:
  name: ${OPERATOR_NAME}-ha
spec:
  type: kube-apiserver
  replicas: 2
  tags: ["tag:k8s"]
  kubeAPIServer:
    mode: auth
@@ -0,0 +1,21 @@
variable "tailscale_oauth_client_id" {
  description = "Tailscale OAuth client ID"
  type        = string
  sensitive   = true

  validation {
    condition     = length(var.tailscale_oauth_client_id) > 0
    error_message = "Tailscale OAuth client ID must not be empty."
  }
}

variable "tailscale_oauth_client_secret" {
  description = "Tailscale OAuth client secret"
  type        = string
  sensitive   = true

  validation {
    condition     = length(var.tailscale_oauth_client_secret) > 0
    error_message = "Tailscale OAuth client secret must not be empty."
  }
}
@@ -0,0 +1,42 @@
terraform {
  required_version = ">= 1.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 6.0, < 7.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 3.0.1, < 4.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 3.1.1, < 4.0"
    }
  }
}

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
  }
}

provider "helm" {
  kubernetes = {
    host                   = module.eks.cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

    exec = {
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "aws"
      args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
    }
  }
}