
GKE Workload Identity Federation: The Complete Guide (Direct Access and Legacy Modes, Tested on Autopilot 1.35)

The first time you run a workload on GKE that needs to pull a secret from Google Secret Manager, your instinct is probably the same instinct everyone had in 2018: download a JSON service account key, mount it as a Kubernetes Secret, set GOOGLE_APPLICATION_CREDENTIALS, ship it. That path still works. It is also how most GCP key leaks on GitHub happen. Workload Identity Federation for GKE removes the key file from the equation entirely. Pods authenticate with a short-lived federated token tied to their Kubernetes ServiceAccount, Google exchanges it for the access token your code actually needs, and no long-lived credential ever touches the container.

Original content from computingforgeeks.com - post 165679

This guide covers:

  • What Workload Identity Federation for GKE actually is and how the token exchange works under the hood
  • Setting it up on an Autopilot cluster in under ten minutes
  • Both the legacy service-account impersonation model and the newer direct resource access model
  • A tested production example that pulls a secret from Secret Manager without any keys
  • A Terraform version that mirrors the manual setup
  • The five errors you will hit during rollout
  • How the GKE mechanism compares to AWS IRSA, for readers who know EKS and want the shortcut

Every command was run on a live GKE Autopilot cluster in europe-west1, not copied from docs.

Tested April 2026 on GKE Autopilot 1.35.1-gke.1396002 (Regular channel) with google-cloud-cli 521, kubectl 1.35, and Secret Manager API

What Workload Identity Federation for GKE Actually Is

Google renamed the feature in 2024. The old name was “GKE Workload Identity.” The current official name is Workload Identity Federation for GKE. Same feature, clearer name, still widely referred to as “Workload Identity” in blog posts and tooling. If you see either name in 2026, they are the same thing.

The mechanism is a workload identity pool that GKE automatically creates for every project where it has been enabled. The pool name is always PROJECT_ID.svc.id.goog. Every Kubernetes ServiceAccount in every cluster in that project becomes a federated principal inside that pool. IAM policies anywhere in the project can then reference the ServiceAccount directly, and the pod running under that ServiceAccount will have the corresponding permissions. No JSON key, no environment variable, no mounted secret.

  • The pod’s Google client library (Python, Go, Java, Node, whatever) asks the node-local metadata server for credentials
  • On GKE, a DaemonSet called gke-metadata-server intercepts that call from pods with Workload Identity enabled
  • The DaemonSet trades the pod’s Kubernetes projected service account token for a federated Google OAuth token via the Security Token Service (STS)
  • The client library receives a short-lived access token scoped to whatever IAM permissions the ServiceAccount was granted
  • The token auto-refreshes, the pod never sees a key, and the audit trail in Cloud Audit Logs shows the federated principal performing the action

The whole exchange is transparent to your application code. If you are using any official Google client library you do not need to change a single line. The library already knows how to call the metadata server and handles token refresh. That is the whole reason Workload Identity exists: make the transition from a JSON key to a federated identity a pure ops change with zero code change.
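To see the moving parts the library hides, you can imitate the final step by hand with curl. A minimal sketch: `fetch_token` and `build_secret_url` are illustrative helper names, while the metadata endpoint and the Secret Manager v1 REST endpoint are the real ones.

```shell
# Build the Secret Manager REST URL for an "access" call.
# build_secret_url is a hypothetical helper name; the v1 endpoint is real.
build_secret_url() {
  local project="$1" secret="$2" version="${3:-latest}"
  echo "https://secretmanager.googleapis.com/v1/projects/${project}/secrets/${secret}/versions/${version}:access"
}

# Ask the metadata server for a short-lived access token, the same call the
# client libraries make. Only works inside a pod with Workload Identity.
fetch_token() {
  curl -s -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" |
    python3 -c 'import json,sys; print(json.load(sys.stdin)["access_token"])'
}

# Usage inside a WI-enabled pod (not from your workstation); the response
# carries the secret base64-encoded under payload.data:
# TOKEN=$(fetch_token)
# curl -s -H "Authorization: Bearer ${TOKEN}" "$(build_secret_url PROJECT_ID demo-secret)"
```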

GKE Workload Identity vs AWS IRSA: Quick Comparison

If you came here because you already know IRSA on EKS, this table is the fast answer. Everything else maps to familiar concepts.

OIDC provider setup
  • GKE: Automatic. Google creates the pool PROJECT_ID.svc.id.goog when you enable WI on any cluster in the project. Never deleted, even if you delete every cluster.
  • AWS IRSA: Manual. You run eksctl utils associate-iam-oidc-provider or create an aws_iam_openid_connect_provider per cluster.

Identity model
  • GKE: Two modes: legacy GSA impersonation (annotate the KSA, grant workloadIdentityUser) or direct resource access (bind the KSA principal directly to a resource role). Direct access went GA in April 2024.
  • AWS IRSA: Single mode: an IAM role with a trust policy that matches the ServiceAccount subject claim.

Principal format (direct access)
  • GKE: principal://iam.googleapis.com/projects/NUMBER/locations/global/workloadIdentityPools/PROJECT_ID.svc.id.goog/subject/ns/NS/sa/KSA
  • AWS IRSA: IAM role ARN trusted via OIDC subject system:serviceaccount:NS:KSA

Token source inside pod
  • GKE: Metadata server at metadata.google.internal, intercepted by the gke-metadata-server DaemonSet.
  • AWS IRSA: Projected SA token file at /var/run/secrets/eks.amazonaws.com/serviceaccount/token, read via AWS_WEB_IDENTITY_TOKEN_FILE.

Audience claim
  • GKE: Managed by Google, not user-configured.
  • AWS IRSA: Must be sts.amazonaws.com.

Quota / limits
  • GKE: No per-project pool limit worth mentioning. gke-metadata-server memory scales with KSA count; the documented ceiling is around 3000 KSAs before OOMKilled risk.
  • AWS IRSA: Hard limit of 100 IAM OIDC providers per AWS account, a real problem for fleets of clusters.

Default on managed cluster
  • GKE: Always enabled on Autopilot; cannot be disabled.
  • AWS IRSA: Opt-in even on EKS managed node groups.

The biggest practical difference is the OIDC provider cap. On AWS you hit the 100-provider limit in any real multi-tenant setup and have to work around it. On GCP there is one pool per project, permanently, managed by Google. This alone is why teams running both clouds tend to find GKE WI simpler after the first few clusters. For the AWS side of the same story, our IRSA guide and EKS Pod Identity guide cover both the legacy and newer mechanisms.

Prerequisites

The guide assumes a normal GCP working environment. Nothing exotic.

  • A GCP project with billing enabled (tested on project my-project)
  • gcloud CLI 520+ installed and authenticated (tested on v521.0.0)
  • kubectl 1.29+ (tested on v1.35.0)
  • The container.googleapis.com, iam.googleapis.com, iamcredentials.googleapis.com, and secretmanager.googleapis.com APIs enabled
  • Outbound internet from your cluster nodes to *.googleapis.com and metadata.google.internal (default for every GKE cluster; only relevant if you have custom egress firewalls)
  • Permission to create clusters, bind IAM roles, and create secrets in the project

Enable the APIs in one shot if this is a fresh project:

gcloud services enable \
  container.googleapis.com \
  iam.googleapis.com \
  iamcredentials.googleapis.com \
  secretmanager.googleapis.com \
  --project=PROJECT_ID

The iamcredentials API is the one people forget. It powers the generateAccessToken call that the metadata server uses under the hood to issue short-lived access tokens. Without it the pods work locally but fail the moment they touch any Google API.
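A quick way to confirm nothing was skipped is to diff the project's enabled services against the four required APIs. A small sketch: check_apis is a hypothetical helper that reads the enabled-service list on stdin.

```shell
# Compare enabled services (one per line on stdin) against the four APIs
# this guide needs. check_apis is a hypothetical helper name.
check_apis() {
  local enabled missing=""
  enabled=$(cat)
  for api in container.googleapis.com iam.googleapis.com \
             iamcredentials.googleapis.com secretmanager.googleapis.com; do
    echo "$enabled" | grep -qx "$api" || missing="$missing $api"
  done
  if [ -n "$missing" ]; then
    echo "missing:$missing"
    return 1
  fi
  echo "all required APIs enabled"
}

# Usage:
# gcloud services list --enabled --format="value(config.name)" | check_apis
```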

Create an Autopilot Cluster with Workload Identity Always On

The fastest path from zero to a working Workload Identity setup is a GKE Autopilot cluster because WI is non-optional on Autopilot. You do not toggle a flag, you do not run an update command, you just create the cluster and it is on. Standard clusters require the --workload-pool flag which is shown further down. Autopilot is the recommended starting point for anything new in 2026.

gcloud container clusters create-auto cfg-lab-gke \
  --project=PROJECT_ID \
  --region=europe-west1 \
  --release-channel=regular

The create call blocks until the cluster is reachable. On a fresh project this takes roughly six minutes while Google provisions the regional control plane, the managed node pool, and the metadata server DaemonSet. Fetch credentials afterward:

gcloud container clusters get-credentials cfg-lab-gke \
  --region=europe-west1 \
  --project=PROJECT_ID

Confirm the cluster reports Workload Identity enabled. For Autopilot the field is always present, but it is worth checking once so you know where to look when troubleshooting a Standard cluster later:

gcloud container clusters describe cfg-lab-gke \
  --region=europe-west1 \
  --project=PROJECT_ID \
  --format="value(workloadIdentityConfig.workloadPool)"

The expected output is the project’s workload pool name:

PROJECT_ID.svc.id.goog

If you are using an existing Standard cluster, enable Workload Identity with a single update command (skip this step entirely on Autopilot):

gcloud container clusters update CLUSTER_NAME \
  --location=LOCATION \
  --workload-pool=PROJECT_ID.svc.id.goog

On Standard you also need to enable it per node pool:

gcloud container node-pools update NODE_POOL_NAME \
  --cluster=CLUSTER_NAME \
  --location=LOCATION \
  --workload-metadata=GKE_METADATA

GKE_METADATA is the flag that tells the node pool to block the raw GCE metadata endpoint and route metadata requests through the gke-metadata-server DaemonSet instead. Without it, pods can still reach the underlying node service account, which defeats the whole point of Workload Identity.
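You can verify a Standard node pool is actually in the right mode by reading the field back and checking for GKE_METADATA. A small sketch: check_metadata_mode is an illustrative helper name; the describe field it inspects is the real one.

```shell
# Fail loudly if a node pool still exposes the raw GCE metadata endpoint.
# check_metadata_mode is a hypothetical helper name.
check_metadata_mode() {
  if [ "$1" = "GKE_METADATA" ]; then
    echo "OK: pods go through gke-metadata-server"
  else
    echo "WARNING: mode is '$1'; pods can still reach the node service account"
    return 1
  fi
}

# Usage:
# mode=$(gcloud container node-pools describe NODE_POOL_NAME \
#   --cluster=CLUSTER_NAME --location=LOCATION \
#   --format="value(config.workloadMetadataConfig.mode)")
# check_metadata_mode "$mode"
```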

Mode 1: Direct Resource Access (Recommended)

Direct resource access is the newer of the two modes and the one to use for anything new. You bind an IAM role directly to the Kubernetes ServiceAccount principal. No Google Service Account in the middle, no annotation, less to configure, and fewer places for permissions to drift.

The principal format is strict. The pool path uses the project number, the pool name uses the project ID, and the subject uses the ns/NAMESPACE/sa/KSA_NAME form. Mixing the two project identifiers is the single most common mistake and it produces a cryptic “permission denied” that takes an hour to diagnose if you do not know where to look.

principal://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/PROJECT_ID.svc.id.goog/subject/ns/NAMESPACE/sa/KSA_NAME

Fetch both values in one shot so there is no room for typos:

PROJECT_ID=$(gcloud config get-value project)
PROJECT_NUMBER=$(gcloud projects describe $PROJECT_ID --format="value(projectNumber)")
echo "ID: $PROJECT_ID"
echo "NUMBER: $PROJECT_NUMBER"
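With both values in shell variables, it is worth assembling the full principal string once and reusing it, so the number and the ID cannot end up swapped in a hand-typed command. build_principal is a hypothetical helper; the namespace and KSA names match the demo workload created below.

```shell
# Build the direct-access principal from its four parts. Argument order:
# project NUMBER, project ID, namespace, KSA name. build_principal is a
# hypothetical helper name; the URL shape is the real principal format.
build_principal() {
  echo "principal://iam.googleapis.com/projects/${1}/locations/global/workloadIdentityPools/${2}.svc.id.goog/subject/ns/${3}/sa/${4}"
}

PRINCIPAL=$(build_principal "$PROJECT_NUMBER" "$PROJECT_ID" demo demo-app)
echo "$PRINCIPAL"
```

Later binding commands can then use --member="$PRINCIPAL" instead of the long inline string.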

Create a namespace and ServiceAccount for the demo workload. The KSA does not need any annotation for direct access mode, which is part of what makes this mode nicer to work with:

kubectl create namespace demo
kubectl -n demo create serviceaccount demo-app

Now grant the KSA direct IAM access to a resource. For this demo the target is Secret Manager’s secretAccessor role at the project level, which is a realistic use case. The --condition=None flag is required because the gcloud CLI otherwise prompts interactively for a condition:

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --role=roles/secretmanager.secretAccessor \
  --member="principal://iam.googleapis.com/projects/$PROJECT_NUMBER/locations/global/workloadIdentityPools/$PROJECT_ID.svc.id.goog/subject/ns/demo/sa/demo-app" \
  --condition=None

Create a test secret the pod will read:

echo -n "super-secret-value-42" | gcloud secrets create demo-secret \
  --data-file=- \
  --replication-policy=automatic

Now deploy a pod using the demo-app ServiceAccount and have it fetch the secret. The pod image just needs curl and the ability to call the metadata server for a token, then call the Secret Manager REST API. No Google SDK required for the demo, which makes the moving parts visible:

vim demo-pod.yaml

Paste the following manifest. It pins to the demo-app KSA (required for the identity binding to apply), sleeps forever, and gives you a pod you can exec into to run the actual test commands interactively:

apiVersion: v1
kind: Pod
metadata:
  name: wi-demo
  namespace: demo
spec:
  serviceAccountName: demo-app
  containers:
  - name: shell
    image: google/cloud-sdk:slim
    command: ["sleep", "infinity"]

Apply and wait for the pod to be Running:

kubectl apply -f demo-pod.yaml
kubectl -n demo wait --for=condition=Ready pod/wi-demo --timeout=180s

On Autopilot the pod creation emits two harmless warnings you should expect to see. The first is the resource defaulting mutator adding CPU requests, and the second is a BCID constraint notice:

Warning: autopilot-default-resources-mutator:Autopilot updated Pod demo/wi-demo: defaulted unspecified 'cpu' resource for containers [shell] (see http://g.co/gke/autopilot-defaults).
Warning: BCID failed open: BCID Constraint disabled by Giraffe
pod/wi-demo created
pod/wi-demo condition met

Exec into the pod and check which identity the metadata server reports:

kubectl -n demo exec -it wi-demo -- bash

From inside the pod, query the metadata server. The Metadata-Flavor: Google header is mandatory — calls without it are rejected:

curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email"

In direct-access mode the metadata server reports the workload pool name itself rather than a GSA email, because there is no GSA in the loop. This is the first clear signal that you are running in direct-access mode:

PROJECT_ID.svc.id.goog

Now read the secret. The gcloud binary inside the google/cloud-sdk image automatically uses Application Default Credentials, which on GKE resolves to the metadata server, which issues a federated token scoped to the KSA’s IAM permissions:

gcloud secrets versions access latest --secret=demo-secret

The secret prints cleanly:

super-secret-value-42

That is the whole loop: KSA → federated principal → IAM binding → Secret Manager. No JSON key anywhere in the chain.

Mode 2: Google Service Account Impersonation (Legacy)

The older mode puts a Google Service Account between the KSA and the resource. The KSA is annotated with the GSA email, the GSA is granted roles/iam.workloadIdentityUser with the KSA as a member, and then you grant normal IAM roles to the GSA as if it were a regular identity. This mode is still fully supported and is the only option if the Google API you are targeting does not yet honor direct KSA principals in its IAM conditions. A handful of legacy services fall into that category. When in doubt, try direct access first and fall back to GSA impersonation only if you hit a resource that rejects the principal:// member format.

Create the Google Service Account:

gcloud iam service-accounts create demo-legacy \
  --display-name="Demo legacy GSA for WI article"

Grant it secretAccessor on the project, same as the direct-access version:

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --role=roles/secretmanager.secretAccessor \
  --member="serviceAccount:demo-legacy@$PROJECT_ID.iam.gserviceaccount.com" \
  --condition=None

Now allow the demo-app KSA to impersonate the GSA. This is the key binding that turns a KSA into a federated identity that can act as the GSA. The member format uses the PROJECT_ID.svc.id.goog[NAMESPACE/KSA] form, which is specific to legacy mode:

gcloud iam service-accounts add-iam-policy-binding \
  demo-legacy@$PROJECT_ID.iam.gserviceaccount.com \
  --role=roles/iam.workloadIdentityUser \
  --member="serviceAccount:$PROJECT_ID.svc.id.goog[demo/demo-app-legacy]"

Create a second KSA (so the two modes are isolated) and annotate it with the GSA email. The annotation is what the gke-metadata-server looks for when a pod using this KSA asks for credentials:

kubectl -n demo create serviceaccount demo-app-legacy
kubectl -n demo annotate serviceaccount demo-app-legacy \
  iam.gke.io/gcp-service-account=demo-legacy@$PROJECT_ID.iam.gserviceaccount.com

Run a second test pod using the legacy KSA and verify it resolves to the GSA email:

kubectl -n demo run wi-legacy --rm -it \
  --image=google/cloud-sdk:slim \
  --overrides='{"spec":{"serviceAccountName":"demo-app-legacy"}}' \
  -- bash

Inside the pod:

curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email"

This time the metadata server returns the annotated Google Service Account email, not the workload pool name:

demo-legacy@PROJECT_ID.iam.gserviceaccount.com

Read the secret the same way as before:

gcloud secrets versions access latest --secret=demo-secret

The output matches the direct-access run because both modes end up with the same effective IAM permissions on the target secret:

super-secret-value-42

Under the hood the metadata server first exchanges the Kubernetes ServiceAccount token for a federated token, then calls generateAccessToken on the GSA to mint the final access token. Two hops instead of one, hence the slight preference for direct access when you can use it.
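The second hop can be sketched concretely. gen_access_token_url is a hypothetical helper, but the iamcredentials v1 generateAccessToken endpoint it builds, and the STS token-exchange endpoint referenced in the comments, are the real APIs the metadata server calls.

```shell
# Hop 2 endpoint: mint a GSA access token from a federated credential.
# gen_access_token_url is a hypothetical helper name.
gen_access_token_url() {
  echo "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/${1}:generateAccessToken"
}

# Hop 1 (for reference): the metadata server first POSTs the Kubernetes SA
# token to https://sts.googleapis.com/v1/token with
# grant_type=urn:ietf:params:oauth:grant-type:token-exchange, receiving a
# federated token. That token then authorizes hop 2:
#
# curl -s -X POST -H "Authorization: Bearer $FEDERATED_TOKEN" \
#   -H "Content-Type: application/json" \
#   -d '{"scope":["https://www.googleapis.com/auth/cloud-platform"]}' \
#   "$(gen_access_token_url demo-legacy@PROJECT_ID.iam.gserviceaccount.com)"
```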

Terraform Version

The full infrastructure plus both identity modes in one Terraform file. This is the copy-paste version for teams who want everything in code. Split into modules in production, but one file is clearer for learning.

vim gke-wi-demo.tf

Paste the following. Three things to notice: the cluster has no workload_identity_config block because Autopilot implies it, the direct-access binding uses google_project_iam_member with the principal:// member, and the legacy binding uses google_service_account_iam_member on the GSA itself:

terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 6.20"
    }
  }
}

provider "google" {
  project = var.project_id
  region  = "europe-west1"
}

variable "project_id" {
  type = string
}

data "google_project" "current" {}

resource "google_container_cluster" "autopilot" {
  name             = "cfg-lab-gke"
  location         = "europe-west1"
  enable_autopilot = true

  release_channel {
    channel = "REGULAR"
  }

  deletion_protection = false
}

resource "google_secret_manager_secret" "demo" {
  secret_id = "demo-secret"
  replication {
    auto {}
  }
}

resource "google_secret_manager_secret_version" "demo" {
  secret      = google_secret_manager_secret.demo.id
  secret_data = "super-secret-value-42"
}

# Direct resource access: bind the KSA principal directly
resource "google_project_iam_member" "ksa_secret_accessor" {
  project = var.project_id
  role    = "roles/secretmanager.secretAccessor"
  member  = "principal://iam.googleapis.com/projects/${data.google_project.current.number}/locations/global/workloadIdentityPools/${var.project_id}.svc.id.goog/subject/ns/demo/sa/demo-app"
}

# Legacy mode: Google Service Account
resource "google_service_account" "legacy" {
  account_id   = "demo-legacy"
  display_name = "Demo legacy GSA for WI"
}

resource "google_project_iam_member" "legacy_secret_accessor" {
  project = var.project_id
  role    = "roles/secretmanager.secretAccessor"
  member  = "serviceAccount:${google_service_account.legacy.email}"
}

resource "google_service_account_iam_member" "legacy_ksa_binding" {
  service_account_id = google_service_account.legacy.name
  role               = "roles/iam.workloadIdentityUser"
  member             = "serviceAccount:${var.project_id}.svc.id.goog[demo/demo-app-legacy]"
}

Apply it:

terraform init
terraform apply -var project_id=PROJECT_ID

The one resource that is not in this file and tends to confuse people is google_iam_workload_identity_pool. That resource exists for external Workload Identity Federation scenarios like GitHub Actions or AWS workloads calling GCP. The GKE pool at PROJECT_ID.svc.id.goog is managed entirely by Google and should not be declared in Terraform. If you add it anyway, the apply fails with “pool already exists” because Google created it the moment you enabled container.googleapis.com.

Troubleshooting

These are the exact error strings captured during rollout, reproduced verbatim so you can land on this section by pasting the error into a search engine.

Error: “Permission ‘iam.serviceAccounts.getAccessToken’ denied on resource”

Full error:

HTTP/403: generic::permission_denied: loading: GenerateAccessToken("GSA@PROJECT_ID.iam.gserviceaccount.com", ""): googleapi: Error 403: Permission 'iam.serviceAccounts.getAccessToken' denied on resource (or it may not exist).

This is the single most common Workload Identity error and it almost always means one of two things. First, the KSA is not bound to the GSA via roles/iam.workloadIdentityUser. Second, you ran the binding command less than five minutes ago and IAM propagation has not caught up. IAM changes on GCP take two to seven minutes to apply globally. Wait, then retry. If the error persists beyond ten minutes, double-check the member format — the [NAMESPACE/KSA] portion is case-sensitive and the GSA email must match exactly.

Error: “ComputeEngineCredentials cannot find the metadata server”

Full error from a Python or Go client library:

ComputeEngineCredentials cannot find the metadata server. This is likely because code is not running on Google Compute Engine.

The pod cannot reach metadata.google.internal. Usually this happens when a NetworkPolicy blocks egress to 169.254.169.254 or when a custom DNS configuration refuses to resolve the hostname. Confirm by kubectl exec-ing into the pod and running curl -v http://metadata.google.internal. If DNS fails but the IP works, set the env var GCE_METADATA_HOST=169.254.169.254 in the pod spec to bypass DNS entirely. If both fail, inspect your NetworkPolicies for an egress rule that excludes the metadata IP range.
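The DNS-versus-IP decision tree above collapses into a tiny helper you can run from inside the affected pod. diagnose_metadata is an illustrative name; it maps the exit codes of the two curl probes to a remediation.

```shell
# Map the two connectivity probes to a remediation. diagnose_metadata is a
# hypothetical helper; pass 0 for a probe that succeeded, nonzero otherwise.
diagnose_metadata() {
  local dns_rc="$1" ip_rc="$2"
  if [ "$dns_rc" -eq 0 ]; then
    echo "metadata reachable; look elsewhere"
  elif [ "$ip_rc" -eq 0 ]; then
    echo "DNS broken: set GCE_METADATA_HOST=169.254.169.254 in the pod spec"
  else
    echo "egress blocked: check NetworkPolicies for 169.254.169.254"
  fi
}

# Usage inside the pod:
# curl -s -o /dev/null -H "Metadata-Flavor: Google" http://metadata.google.internal; dns_rc=$?
# curl -s -o /dev/null -H "Metadata-Flavor: Google" http://169.254.169.254; ip_rc=$?
# diagnose_metadata "$dns_rc" "$ip_rc"
```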

Error: “Invalid form of account ID test_account.svc.id.goog”

Full error:

Invalid form of account ID test_account.svc.id.goog. Should be [Gaia ID |Email |Unique ID |] of the account

Appears when migrating between direct-access and legacy modes, or when the client library expects a Gaia ID but receives the workload pool subject. The fix is an annotation on the KSA telling the metadata server to return the principal as an email form:

kubectl -n NAMESPACE annotate serviceaccount KSA_NAME \
  iam.gke.io/return-principal-id-as-email=true

gke-metadata-server OOMKilled

Symptom: the gke-metadata-server DaemonSet pods in kube-system flip to OOMKilled status and workload pods start failing with “metadata server unreachable” errors cluster-wide. Root cause is documented: the metadata server’s memory footprint scales with the number of Kubernetes ServiceAccounts in the cluster, and the practical ceiling is around 3000 KSAs. Past that, the daemon runs out of memory even with the default limit. Fix is either to consolidate KSAs (most namespaces do not need their own; share where reasonable) or split workloads across multiple clusters. GKE does not let you raise the memory limit on the managed DaemonSet.

Direct-access binding with wrong project identifier

The single most common direct-access mistake is swapping PROJECT_NUMBER and PROJECT_ID in the principal URL. The binding command succeeds (IAM does not validate the subject at bind time), the pod starts, and every API call returns a generic permission-denied with no hint that the identity format is wrong. If a new direct-access binding fails and the IAM logs show the principal being rejected, re-run the echo command from the prerequisites section and verify you used number in the pool path and ID in the pool name, not the other way around.
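Because IAM accepts a malformed principal silently, a pre-flight check in the deployment script is cheap insurance. A sketch: validate_principal is a hypothetical helper that rejects a principal whose pool path is non-numeric or whose pool name is all digits.

```shell
# Sanity-check a direct-access principal before binding it: the pool path
# must carry the numeric project NUMBER and the pool name the project ID.
# validate_principal is a hypothetical helper name.
validate_principal() {
  local p="$1" num id
  num=$(echo "$p" | sed -n 's|.*projects/\([^/]*\)/locations.*|\1|p')
  id=$(echo "$p" | sed -n 's|.*workloadIdentityPools/\(.*\)\.svc\.id\.goog.*|\1|p')
  case "$num" in
    ''|*[!0-9]*) echo "BAD: pool path must use the numeric project NUMBER, got '$num'"; return 1 ;;
  esac
  case "$id" in
    '') echo "BAD: could not parse the pool name"; return 1 ;;
    *[!0-9]*) : ;;  # contains a letter or dash: looks like a project ID
    *) echo "BAD: pool name must use the project ID, not the number"; return 1 ;;
  esac
  echo "looks valid"
}
```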

Production Considerations

Ten things worth knowing before you put Workload Identity Federation for GKE on anything that serves real traffic.

  1. Autopilot always has it on. You cannot disable Workload Identity on Autopilot, you cannot fall back to the node service account, and you cannot use the old GCE metadata endpoint from pods. This is a feature, not a limitation.
  2. Prefer direct resource access for anything new. Fewer moving parts, fewer IAM bindings to audit, no GSA to rotate or clean up. Reserve GSA impersonation for the shrinking list of services that still need it.
  3. IAM propagation is slow. New bindings take two to seven minutes to apply. Build this into your deployment scripts with a sleep or a retry loop so CI does not fail on the first attempt.
  4. The metadata server DaemonSet is a shared dependency. If it crashes, every pod in the cluster loses access to Google APIs at once. Monitor its pod status and memory usage with Cloud Monitoring alerts.
  5. Three thousand KSAs is the soft ceiling. Past that, the metadata server risks OOMKilled events. Plan namespace-to-KSA ratios accordingly.
  6. Cross-project access works. A KSA in project A can access resources in project B. Grant the principal:// binding on the project B resource. No additional federation setup required because the pool is tied to the cluster’s project, not the target resource’s project.
  7. Cost is zero. Workload Identity itself does not add any line item to the bill. You only pay for the Google APIs that the federated identity ends up calling.
  8. Audit logs show the federated principal. Cloud Audit Logs record the principal://... subject for direct access or the GSA for legacy mode. The KSA name and namespace are visible, which makes investigation easy.
  9. Workload Identity Federation for external workloads is a different feature. The pool and provider resources in Terraform (google_iam_workload_identity_pool, google_iam_workload_identity_pool_provider) are for GitHub Actions, AWS, Azure, and on-prem OIDC workloads authenticating to GCP. They have nothing to do with GKE WI. Do not mix them up when reading documentation or asking AI assistants.
  10. Key file deprecation is real. Google has been gradually restricting new service account key creation via organization policy defaults. Teams still using JSON keys on GKE are living on borrowed time. Migrate before the next org-wide policy refresh forces your hand.
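Item 3 above is easy to automate. A minimal sketch of a propagation-tolerant retry wrapper for deployment scripts; retry_until is an illustrative name, and the commented usage covers the two-to-seven-minute window.

```shell
# Retry a command until it succeeds, waiting between attempts.
# retry_until is a hypothetical helper name.
retry_until() {
  local attempts="$1" delay="$2"
  shift 2
  local i
  for i in $(seq 1 "$attempts"); do
    if "$@"; then
      return 0
    fi
    sleep "$delay"
  done
  echo "gave up after ${attempts} attempts" >&2
  return 1
}

# Usage: poll a fresh IAM binding for up to ~7 minutes before failing CI:
# retry_until 28 15 gcloud secrets versions access latest --secret=demo-secret
```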

Clean Up

GKE Autopilot clusters bill a flat $0.10 per hour for the cluster management fee plus whatever the pods request in CPU, memory, and ephemeral storage. A demo cluster that ran for an hour while you tested this guide costs a few cents, not dollars, but it adds up if you forget to delete it. Tear everything down:

kubectl delete namespace demo
gcloud secrets delete demo-secret --quiet
gcloud iam service-accounts delete demo-legacy@$PROJECT_ID.iam.gserviceaccount.com --quiet
gcloud container clusters delete cfg-lab-gke \
  --region=europe-west1 --quiet

The cluster delete takes four to five minutes. Verify nothing paid is left:

gcloud container clusters list
gcloud secrets list --filter="name:demo-secret"

Both should return empty output. The workload identity pool at PROJECT_ID.svc.id.goog remains because Google never deletes it. That is expected and costs nothing.

FAQ

Is Workload Identity Federation for GKE the same as Workload Identity Federation in GCP?

No. The names are confusingly similar. Workload Identity Federation for GKE is for Kubernetes ServiceAccounts inside a GKE cluster authenticating to Google APIs. The broader Workload Identity Federation product is for external workloads (GitHub Actions, AWS EC2 instances, Azure VMs, on-prem OIDC) authenticating to GCP. They share the underlying Security Token Service but the setup and resource model are different. Do not try to bind a google_iam_workload_identity_pool resource to a GKE cluster — that pool is managed automatically.

Do I need to change application code to use Workload Identity?

No. If your application uses any official Google client library (Python google-cloud-*, Go cloud.google.com/go/*, Java, Node, Ruby, C#), it already uses Application Default Credentials, which on GKE transparently resolves to the metadata server. You remove the JSON key mount and the GOOGLE_APPLICATION_CREDENTIALS environment variable, and that is the whole code change. For applications that manually loaded the key file, delete the loading code.

What is the difference between direct resource access and GSA impersonation?

Direct resource access binds IAM roles straight to the Kubernetes ServiceAccount principal using the principal://iam.googleapis.com/... member format. There is no intermediary Google Service Account. GSA impersonation has the KSA impersonate a regular GSA, which then holds the actual IAM permissions. Direct access is newer (GA April 2024), simpler to audit, and the recommended mode for anything new. Legacy GSA impersonation is still fully supported and is required for the handful of services that do not yet honor direct KSA principals in their IAM conditions.

Can a pod in one project access a resource in another project?

Yes. Workload Identity Federation for GKE supports cross-project access out of the box. Grant the IAM binding on the target project’s resource using the source project’s workload pool principal. The pod continues to use its local metadata server; the identity carries across projects because the Google Security Token Service handles the exchange. No additional federation or trust configuration is required on either side.

How does this compare to AWS IRSA?

The security model is nearly identical. A short-lived federated token is exchanged for cloud provider credentials without any static key in the container. The biggest operational differences are that GKE manages the OIDC pool automatically while AWS requires a manual IAM OIDC provider per cluster, and GCP does not have the 100-provider-per-account limit that bites IRSA users running fleets of clusters. For a full walkthrough of the AWS version, see our IRSA on EKS guide and the newer EKS Pod Identity guide.

Does Workload Identity work on Autopilot and Standard clusters the same way?

The IAM model is identical. The only differences are that Autopilot has it enabled by default and cannot turn it off, while Standard requires an explicit --workload-pool flag on the cluster and --workload-metadata=GKE_METADATA on each node pool. Once both are configured, every command and binding shown in this guide works the same on both cluster types.

How long does IAM propagation take?

Between two and seven minutes in practice. Google does not publish a guaranteed SLA. A new add-iam-policy-binding call almost always shows up in effective permissions within three minutes, but CI systems that run a pod immediately after the binding command will sometimes see a transient permission_denied. Build a short sleep or retry loop into your deployment pipeline and the problem goes away.

Where to Go Next

Workload Identity Federation for GKE is the foundation for every secure GCP integration in Kubernetes. The natural next steps are wiring it up to real workloads. For production patterns, look at the External Secrets Operator with Google Secret Manager (uses the direct-access mode shown above), Cloud SQL Auth Proxy running as a sidecar in GKE (same pattern, different target), and cross-project resource access for multi-project organizations. On the CI side, the companion feature is Workload Identity Federation for external workloads, which lets GitHub Actions and similar pipelines authenticate to GCP without a JSON key file, the same way GKE pods do. The broader Workload Identity Federation for GKE concepts guide and the external Workload Identity Federation docs are the two official reference documents worth bookmarking.

Related Articles

  • AlmaLinux: Install OpenStack Yoga on Rocky Linux 8 / AlmaLinux 8 with Packstack
  • Cloud: Install Proxmox VE 7 on Hetzner root server
  • AWS: How To Mount AWS EFS File System on EC2 Instance
  • Containers: Install Kubernetes Cluster on Ubuntu 22.04 using kubeadm
