IMPORTANT: Adding Google as an identity provider is only officially supported on Keyfactor Command 25.1.2+ and 25.2.1+. If you are on an older version of Command, please contact Keyfactor Customer Support for assistance with adding Google as an identity provider.
This documentation covers the various ways to configure GKE workload identity for your workload to use ambient credentials with Keyfactor Command. Please refer to the official Google documentation for workload identity federation for the most up-to-date information regarding workload identity with GKE. For more information about what workload identity is and how it works in GKE, please refer here.
GKE workloads can authenticate to external services like Keyfactor Command by obtaining ID tokens from the GKE metadata server. There are two approaches to configure this:
- Workload Identity Federation for GKE with Service Account Impersonation (Recommended) - Kubernetes ServiceAccounts are bound to Google Service Accounts, allowing fine-grained, per-workload identity management. The GKE metadata server uses the bound Google Service Account to generate ID tokens.
- Compute Engine Default Service Account (Not recommended for production) - Workloads use a shared node-level service account; all workloads on the same node inherit these credentials with no isolation.
This guide covers both approaches, but Workload Identity Federation for GKE with Service Account Impersonation is the recommended method for new deployments due to its improved security model and workload isolation.
Important: For the GKE metadata server to generate ID tokens, a Google Service Account must be available. In Option 1, you explicitly create and bind a GSA to your Kubernetes ServiceAccount. In Option 2, the Compute Engine default service account is used implicitly.
For more information on alternatives to Workload Identity Federation for GKE (and security compromises associated with these alternatives), please refer to this list.
For more information about service accounts in GKE, please refer to this link.
Before configuring ambient credentials with GKE, ensure you have met the requirements specified in Google's GKE guide in addition to the following:
- A GKE cluster (version 1.12 or later recommended; 1.24+ for all Workload Identity Federation features)
- `gcloud` CLI installed and authenticated
- `kubectl` configured to access your cluster
- Appropriate IAM permissions:
  - `roles/container.admin` (for cluster configuration)
  - `roles/iam.serviceAccountAdmin` (for service account management)
  - `roles/iam.securityAdmin` (for IAM policy binding)
- Keyfactor Command 25.1.2+ or 25.2.1+ with Google OIDC provider configured (how to configure)
Workload Identity Federation for GKE with Service Account impersonation is the most secure method to grant your workloads the ability to obtain ID tokens for authentication. This approach:
- Creates a Google Service Account (GSA) specifically for your workload
- Binds your Kubernetes ServiceAccount (KSA) to the GSA through IAM policy
- Annotates the KSA to indicate which GSA to use
- Allows the GKE metadata server to generate ID tokens using the GSA's identity
The GKE metadata server endpoint (`metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity`) requires a Google Service Account to generate ID tokens. Without a GSA bound to your KSA:
- The metadata server has no identity to issue tokens for
- Token generation requests will fail with "service account not defined" errors
- Your workload cannot authenticate to external services
The KSA annotation (`iam.gke.io/gcp-service-account`) tells the metadata server which GSA to use when generating tokens for pods using that KSA.
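In practice, a workload obtains a token with a single HTTP request to the metadata server. A minimal sketch, assuming the pod runs on a node with the GKE metadata server enabled; the audience value is a hypothetical placeholder, so substitute whatever your Keyfactor Command identity provider expects:

```shell
# Hypothetical audience -- use the value your Keyfactor Command instance expects.
AUDIENCE="https://command.example.com"
TOKEN_URL="http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=${AUDIENCE}"
# The Metadata-Flavor header is mandatory; requests without it are rejected.
# (Guarded with || true so the sketch degrades gracefully outside a GKE pod.)
ID_TOKEN=$(curl -sf -H "Metadata-Flavor: Google" "${TOKEN_URL}" || true)
echo "${ID_TOKEN}"
```

The returned value is a signed JWT whose claims (`iss`, `sub`, `azp`, `aud`) are what Keyfactor Command validates.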
- Better Security: Fine-grained, per-workload identity without shared credentials
- Workload Isolation: Each workload can have its own dedicated GSA with specific permissions
- Audit Trail: Clear mapping between Kubernetes workloads and Google Service Accounts
- Principle of Least Privilege: Grant only the minimum required permissions to each workload
Configure your environment variables for the steps below:
```shell
# Get project-level metadata
export PROJECT_ID=$(gcloud config get project) # use "gcloud projects list" to get a list of projects and "gcloud config set project <PROJECT_ID>" to set the project
export PROJECT_NUMBER=$(gcloud projects describe ${PROJECT_ID} \
    --format="value(projectNumber)")
export CLUSTER_NAME="cluster-name-here" # The name of your GKE cluster
export REGION="cluster-region" # The region your GKE cluster is deployed to (e.g. us-east1)
export DEPLOYMENT_NAME="command-issuer" # The Helm chart deployment name
export KSA_NAMESPACE="command-issuer-system" # The namespace your command-cert-manager-issuer is deployed to (change if different than defined in root README)
export KSA_NAME="command-issuer" # The Kubernetes ServiceAccount that is automatically created when command-cert-manager-issuer is deployed with Helm
export GSA_NAME="command-cert-manager-issuer-gsa" # Google Service Account that will be created to grant the KSA permission to assume its identity
export NODEPOOL_NAME="gke-wi-nodepool" # The node pool that will have the GKE metadata server enabled on it
```

For existing clusters, enable Workload Identity Federation:
```shell
# Enable Workload Identity Federation on the cluster
gcloud container clusters update ${CLUSTER_NAME} \
    --location=${REGION} \
    --workload-pool=${PROJECT_ID}.svc.id.goog
```

For new clusters, create with Workload Identity Federation enabled:
```shell
# Create cluster with Workload Identity Federation
gcloud container clusters create ${CLUSTER_NAME} \
    --region=${REGION} \
    --workload-pool=${PROJECT_ID}.svc.id.goog
```

Note: If your cluster was created after May 30, 2024 (Standard) or June 18, 2024 (Autopilot), Workload Identity is enabled by default. You can verify this with:
```shell
gcloud container clusters describe ${CLUSTER_NAME} \
    --location=${REGION} \
    --format="value(workloadIdentityConfig.workloadPool)"
```
Check if your node pools have the GKE metadata server enabled:
```shell
# Check the workload metadata configuration
# (replace default-pool with the name of your node pool)
gcloud container node-pools describe default-pool \
    --cluster=${CLUSTER_NAME} \
    --location=${REGION} \
    --format="value(config.workloadMetadataConfig.mode)"
```

If the output is `GKE_METADATA`, you can skip this step. If it is `GCE_METADATA` or empty, create a new node pool or update existing pools:
```shell
# Option A: Create a new node pool with GKE_METADATA
gcloud container node-pools create ${NODEPOOL_NAME} \
    --cluster=${CLUSTER_NAME} \
    --location=${REGION} \
    --workload-metadata=GKE_METADATA

# Option B: Update an existing node pool (requires recreation of nodes)
# (replace default-pool with the name of your existing node pool)
gcloud container node-pools update default-pool \
    --cluster=${CLUSTER_NAME} \
    --location=${REGION} \
    --workload-metadata=GKE_METADATA
```

Note: Clusters created after the dates mentioned in Step 1 have `GKE_METADATA` enabled by default on all node pools.
Create a Google Service Account that will be used to generate ID tokens:
```shell
# Create the Google Service Account
gcloud iam service-accounts create ${GSA_NAME} \
    --display-name="command-cert-manager-issuer Service Account" \
    --project=${PROJECT_ID}
```

Important: This GSA doesn't need any GCP API permissions unless your workload needs to access other Google Cloud services. For ID token generation alone, the service account just needs to exist.
```shell
# Get cluster credentials
gcloud container clusters get-credentials ${CLUSTER_NAME} \
    --region=${REGION}

# Create the namespace if it doesn't already exist
kubectl create namespace ${KSA_NAMESPACE} 2>/dev/null || true

# Create the Kubernetes ServiceAccount if it doesn't already exist
kubectl create serviceaccount ${KSA_NAME} \
    --namespace=${KSA_NAMESPACE} 2>/dev/null || true
```

Bind the Kubernetes ServiceAccount to the Google Service Account, allowing the KSA to impersonate the GSA:
```shell
# Allow the KSA to impersonate the GSA
gcloud iam service-accounts add-iam-policy-binding ${GSA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:${PROJECT_ID}.svc.id.goog[${KSA_NAMESPACE}/${KSA_NAME}]"
```

This grants the `roles/iam.workloadIdentityUser` role to the Kubernetes ServiceAccount, allowing it to act as the Google Service Account.
Annotate the KSA to specify which GSA it should use:
```shell
# Annotate the KSA with the GSA email
kubectl annotate serviceaccount ${KSA_NAME} \
    --namespace ${KSA_NAMESPACE} \
    iam.gke.io/gcp-service-account=${GSA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com
```

This annotation is critical: it tells the GKE metadata server which Google Service Account to use when generating ID tokens for pods using this KSA.
If you created a new node pool with `GKE_METADATA` enabled, update your deployment to schedule pods on those nodes:
If command-cert-manager-issuer was deployed using Helm:
```shell
helm upgrade ${DEPLOYMENT_NAME} deploy/charts/command-cert-manager-issuer \
    --namespace ${KSA_NAMESPACE} \
    --reuse-values \
    --set-string "nodeSelector.iam\.gke\.io/gke-metadata-server-enabled=true"
```

If deployed without Helm, edit the Deployment directly:
```shell
kubectl edit deployment ${DEPLOYMENT_NAME} -n ${KSA_NAMESPACE}
```

Add the nodeSelector under `spec.template.spec`:
```yaml
spec:
  template:
    spec:
      nodeSelector:
        iam.gke.io/gke-metadata-server-enabled: "true"
```

Then restart the deployment:

```shell
kubectl rollout restart deployment ${DEPLOYMENT_NAME} -n ${KSA_NAMESPACE}
```

Note: If all your node pools have `GKE_METADATA` enabled, you can skip the nodeSelector configuration.
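To confirm the end-to-end setup, a throwaway pod can ask the metadata server which identity it resolves to. This is a sketch under a few assumptions: the pod name (`wi-test`), the image, and the defaulted variable values below are illustrative, not part of the deployment:

```shell
# Fall back to the values exported earlier in this guide if the variables are unset.
: "${KSA_NAMESPACE:=command-issuer-system}"
: "${KSA_NAME:=command-issuer}"
# Launch a one-off pod that uses the KSA, pinned to a node with the GKE
# metadata server, and print the service account email the server reports.
# (Guarded so the sketch degrades gracefully without cluster access.)
command -v kubectl >/dev/null 2>&1 && kubectl run wi-test --rm -i --restart=Never \
    --namespace "${KSA_NAMESPACE}" \
    --overrides="{\"spec\":{\"serviceAccountName\":\"${KSA_NAME}\",\"nodeSelector\":{\"iam.gke.io/gke-metadata-server-enabled\":\"true\"}}}" \
    --image=curlimages/curl -- \
    curl -s -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email" || true
```

If the binding and annotation are correct, the output should be your GSA's email rather than the Compute Engine default service account.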
Get the OAuth Client ID (unique ID) of the Google Service Account:
```shell
gcloud iam service-accounts describe ${GSA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com \
    --format="value(oauth2ClientId)"
```

This ID will be used to create a security claim in Keyfactor Command for your identity provider.
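To double-check which identifier will appear in the token itself, you can decode the payload segment of an ID token. The token below is simulated for illustration (its claim values are made up); with a real token from the metadata server, skip the simulation and start from `ID_TOKEN`:

```shell
# Illustrative payload only -- a real ID token comes from the GKE metadata server.
PAYLOAD='{"iss":"https://accounts.google.com","azp":"104567890123456789012","sub":"104567890123456789012","aud":"https://command.example.com"}'
# Simulate the three dot-separated JWT segments (header.payload.signature):
ID_TOKEN="header.$(printf '%s' "${PAYLOAD}" | base64 | tr -d '\n=' | tr '+/' '-_').signature"

# Extract the payload segment and restore base64 padding before decoding:
SEG=$(printf '%s' "${ID_TOKEN}" | cut -d. -f2)
PAD=$(( (4 - ${#SEG} % 4) % 4 ))
PADDING=$(printf '%*s' "${PAD}" '' | tr ' ' '=')
DECODED=$(printf '%s' "${SEG}${PADDING}" | tr '_-' '/+' | base64 -d)
echo "${DECODED}"
```

The `sub` (and usually `azp`) value is the numeric OAuth Client ID that the security claim in Command must match.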
SECURITY WARNING: All pods on the same node share the same service account, which violates the principle of least privilege. This approach is provided for reference only and is strongly discouraged for production use.
When creating a GKE cluster without specifying a custom service account, nodes automatically use the Compute Engine default service account (`<project-number>-compute@developer.gserviceaccount.com`). This service account can be used by the GKE metadata server to generate ID tokens.
- By default, the Compute Engine service account has the Editor role, which is overly permissive
- All pods on the same node share this identity with no isolation
- No per-workload credential management
- Violates the principle of least privilege
- Increases blast radius in case of pod compromise
- Cannot distinguish between different workloads in audit logs
For production environments, use Option 1 instead.
Configure your environment variables for the steps below:
```shell
# Get project-level metadata
export PROJECT_ID=$(gcloud config get project) # use "gcloud projects list" to get a list of projects and "gcloud config set project <PROJECT_ID>" to set the project
export PROJECT_NUMBER=$(gcloud projects describe ${PROJECT_ID} \
    --format="value(projectNumber)")
export CLUSTER_NAME="cluster-name-here" # The name of your GKE cluster
export REGION="cluster-region" # The region your GKE cluster is deployed to (e.g. us-east1)
```

Verify that your cluster is using the default node service account:
```shell
# Check if Workload Identity Federation is enabled
gcloud container clusters describe ${CLUSTER_NAME} \
    --region=${REGION} \
    --format="value(workloadIdentityConfig.workloadPool)"
# If empty, Workload Identity Federation is NOT enabled

# Check the node pool service account
gcloud container node-pools describe default-pool \
    --cluster=${CLUSTER_NAME} \
    --region=${REGION} \
    --format="value(config.serviceAccount)"
# If "default", you're using the Compute Engine default service account
```

Get the OAuth Client ID (unique ID) of the Compute Engine default service account:
```shell
# Get the unique ID (sub claim)
gcloud iam service-accounts describe \
    ${PROJECT_NUMBER}-compute@developer.gserviceaccount.com \
    --format='value(oauth2ClientId)'
```

This ID will be used to create a security claim in Keyfactor Command for your identity provider.
After configuring your GKE workload identity, you need to set up Google as an identity provider in Keyfactor Command.
- Log in to Keyfactor Command
- Navigate to Settings > Identity Providers
- Click Add
Use Google's standard OIDC discovery endpoint:
https://accounts.google.com/.well-known/openid-configuration
This endpoint provides the necessary configuration for Google's identity provider, including the issuer URL, token endpoints, and supported claims.
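You can inspect the discovery document directly; a minimal sketch, assuming `curl` is available:

```shell
DISCOVERY_URL="https://accounts.google.com/.well-known/openid-configuration"
# Fetch the discovery document; its "issuer" and "jwks_uri" fields are what
# Command uses to validate incoming tokens.
# (Guarded so the sketch degrades gracefully without network access.)
curl -sf "${DISCOVERY_URL}" || true
```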
Configure the following claim mappings:
- Name Claim Type (OAuth Subject): `sub`
- Unique Claim Type (OAuth Object ID): `azp` (or `sub`, depending on your token format)
- Display Name: Google GKE (or your preferred name)
Note: For programmatic API access, Command requires you to fill in Client ID and Client Secret fields, but these values are not actually used for workload identity authentication. You can use any placeholder values for these fields.
- Click Save to create the identity provider
- Test the configuration by retrieving a token from your workload
- Verify the token is accepted by Keyfactor Command
After saving the identity provider:
- Navigate to Security > Security Roles
- Select or create a security role for your workload
- Add a security claim with the appropriate identifier:
- For Option 1 (Workload Identity with SA impersonation): Use the OAuth Client ID of your Google Service Account (from Step 8 above)
- For Option 2 (Compute Engine default SA): Use the OAuth Client ID of the Compute Engine default service account
- Configure the appropriate permissions for certificate operations
The security claim format in Command should be:
- Claim Type: OAuth Subject (or similar, depending on your token's `sub` claim)
- Claim Value: The numeric OAuth Client ID retrieved in the setup steps
For any issues not covered below, check out the root README's troubleshooting section.
Cause: The KSA annotation is missing or incorrect, or the workload identity binding is not configured
Solution:
- Verify the KSA annotation exists:

  ```shell
  kubectl get serviceaccount ${KSA_NAME} -n ${KSA_NAMESPACE} -o yaml | grep iam.gke.io/gcp-service-account
  ```

- Verify the workload identity binding:

  ```shell
  gcloud iam service-accounts get-iam-policy ${GSA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com
  ```

- Ensure pods are restarted after adding the annotation:

  ```shell
  kubectl rollout restart deployment ${DEPLOYMENT_NAME} -n ${KSA_NAMESPACE}
  ```
Cause: IAM permissions not correctly configured
Solution:
- Verify the workload identity binding is correct:

  ```shell
  gcloud iam service-accounts get-iam-policy ${GSA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com
  ```

- Ensure the binding includes `roles/iam.workloadIdentityUser` for the correct KSA
- Check that the workload pool is correctly configured on the cluster
Cause: Issuer URL mismatch or incorrect claim mapping
Solution:
- Verify the issuer URL in Keyfactor matches the token's `iss` claim (`https://accounts.google.com`)
- Check that the security claim in Keyfactor Command matches the token's `sub` claim (should be the OAuth Client ID)
- Ensure the token audience matches what Keyfactor Command expects
- Verify the identity provider discovery document was imported correctly
Cause: Workload Identity not enabled on cluster or node pool metadata incorrect
Solution:
```shell
# Verify Workload Identity is enabled on the cluster
gcloud container clusters describe ${CLUSTER_NAME} \
    --location=${REGION} \
    --format="value(workloadIdentityConfig.workloadPool)"
# Should output: <PROJECT_ID>.svc.id.goog

# Check the node pool metadata configuration
# (replace default-pool with the name of your node pool)
gcloud container node-pools describe default-pool \
    --cluster=${CLUSTER_NAME} \
    --location=${REGION} \
    --format="value(config.workloadMetadataConfig.mode)"
# Should output: GKE_METADATA

# If not correct, update the cluster:
gcloud container clusters update ${CLUSTER_NAME} \
    --location=${REGION} \
    --workload-pool=${PROJECT_ID}.svc.id.goog

# And update/create the node pool:
gcloud container node-pools create ${NODEPOOL_NAME} \
    --cluster=${CLUSTER_NAME} \
    --location=${REGION} \
    --workload-metadata=GKE_METADATA
```