The path from “we need to store this database password somewhere” to “we accidentally committed the database password to Git” is alarmingly short. Google Secret Manager is the managed store that closes that loop on GCP. Secrets live in an API-accessible vault, access is granted through IAM, versions are immutable, audit logs show exactly who read what and when, and the price per secret is low enough that nobody needs to cut corners. This guide walks through what Secret Manager actually is, how pricing really works (including the free tier most articles skip), the four IAM roles you need to know, how versioning and the latest alias actually behave under rollback (with a gotcha that surprises most teams), rotation via Pub/Sub, regional vs global secrets, CMEK with the automatic-replication trap, and a tested end-to-end External Secrets Operator integration on GKE that pulls secrets into Kubernetes without a single JSON key.
Every command was run on a live GCP project in europe-west1 with the real output captured, including a behavior check on the latest alias that contradicts what most blog posts claim. If you are coming from AWS, the final comparison section is a fast map from AWS Secrets Manager to GCP Secret Manager so you can skim the differences in thirty seconds.
Tested April 2026 on Google Cloud Secret Manager with gcloud 521, GKE Autopilot 1.35.1-gke.1396002, and External Secrets Operator 0.18
What Secret Manager Actually Is
Secret Manager is a managed key-value store scoped specifically to application secrets. Each secret is a resource with metadata (name, labels, replication policy, rotation config) and zero or more immutable versions holding the actual payload. The payload is binary, up to 64 KiB per version, and encrypted at rest with Google-managed keys by default or a CMEK key from Cloud KMS if you bring your own. Access is granted through IAM the same way you grant access to any other GCP resource, which means you can bind directly to a Kubernetes ServiceAccount via GKE Workload Identity, to a GitHub Actions pipeline via Workload Identity Federation, or to a Google Service Account for a Compute Engine VM.
Secrets are designed around four constraints that make them useful for production. First, versions are immutable. Once you create version 3 you cannot edit it, only disable or destroy it. Second, every access is logged via Cloud Audit Logs Data Access logs (opt-in but free for many workloads). Third, replication is configured at creation time and cannot be changed. Fourth, IAM is granular enough to grant different teams different roles on the same secret without duplicating the secret itself. If you think of it as “etcd for production credentials with IAM in front,” the mental model is accurate.
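From application code, a read is a single API call. A minimal sketch using the `google-cloud-secret-manager` client library (project and secret IDs here are placeholders; the lazy import keeps the name-building helper usable without the package installed):

```python
def version_name(project_id: str, secret_id: str, version: str = "latest") -> str:
    """Build the fully qualified resource name for a secret version."""
    return f"projects/{project_id}/secrets/{secret_id}/versions/{version}"

def read_secret(project_id: str, secret_id: str, version: str = "latest") -> bytes:
    """Fetch one secret version's payload as raw bytes."""
    # Imported lazily so version_name() works even without the package.
    from google.cloud import secretmanager

    client = secretmanager.SecretManagerServiceClient()
    response = client.access_secret_version(
        request={"name": version_name(project_id, secret_id, version)}
    )
    return response.payload.data
```

Pinning to an explicit version (`version="3"`) instead of `latest` is the pattern the rollback section below argues for.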
Pricing: What It Actually Costs
Secret Manager pricing is one of the cheapest line items on a real GCP bill, but the structure has a few sharp edges worth knowing.
| Item | Price | Free tier (always) |
|---|---|---|
| Active secret versions | $0.06 per version per location per month | 6 versions |
| Destroyed secret versions | Free | Unlimited |
| Access operations | $0.03 per 10,000 operations | 10,000 operations |
| Management operations | Free | Unlimited |
| Rotation notifications | $0.05 per rotation | 3 rotations |
The “per location” qualifier is where people get caught out. A secret with automatic replication bills as one location regardless of how many regions Google actually replicates it to. A secret with user-managed replication explicitly listing three regions bills as three locations, so a single version costs $0.18 per month instead of $0.06. If you need data residency in specific regions, user-managed replication is the right call. If you just want multi-region durability without thinking about it, automatic wins on cost.
The free tier covers six active versions and ten thousand access operations per month across the entire billing account. A small project with a handful of secrets that gets queried a few times an hour sits inside the free tier permanently. A production workload with dozens of secrets and continuous reads will pay a few dollars per month. It is almost never a meaningful line item unless somebody builds a bad loop that queries the same secret on every HTTP request instead of caching it in memory.
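The fix for the bad-loop case is an in-memory cache with a short TTL. A sketch (the fetch callable is whatever actually hits the Secret Manager API, e.g. the client-library call):

```python
import time

class SecretCache:
    """TTL cache in front of a secret fetcher to avoid per-request API calls."""

    def __init__(self, fetch, ttl_seconds: float = 60.0):
        self._fetch = fetch          # callable: secret_id -> payload
        self._ttl = ttl_seconds
        self._store = {}             # secret_id -> (expires_at, value)

    def get(self, secret_id: str):
        now = time.monotonic()
        hit = self._store.get(secret_id)
        if hit and hit[0] > now:
            return hit[1]            # fresh cache hit, no API call
        value = self._fetch(secret_id)
        self._store[secret_id] = (now + self._ttl, value)
        return value
```

A one-minute TTL turns millions of reads per month into a few thousand API calls, which sits comfortably inside the free tier.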
Prerequisites
Small set of prerequisites. Nothing exotic.
- A GCP project with billing enabled
- gcloud CLI 520+ authenticated with an account that can manage Secret Manager (tested on v521.0.0)
- Secret Manager API enabled: gcloud services enable secretmanager.googleapis.com
- For the GKE/ESO integration section: an existing GKE cluster with Workload Identity enabled. Autopilot clusters have it on by default
Create, Read, and Version a Secret
Create a secret with a first version in one shot. The --data-file=- flag reads the value from stdin, which keeps the plaintext off your shell history if you pipe from a generator:
echo -n "super-secret-value-42" | gcloud secrets create demo-secret \
--data-file=- \
--replication-policy=automatic
The -n matters. Without it, echo appends a newline to the secret payload and every future read gets a trailing newline your code has to strip. This is the single most common “why does my password not work” debugging session on GCP.
Read the secret back. The latest alias returns the most recent version (with an important caveat covered below):
gcloud secrets versions access latest --secret=demo-secret
The output is exactly what you piped in, without a trailing newline:
super-secret-value-42
Add more versions. Versions are monotonic integers starting at 1. Each versions add call creates a new immutable version and increments the counter:
echo -n "value-v2-rotated" | gcloud secrets versions add demo-secret --data-file=-
echo -n "value-v3-latest" | gcloud secrets versions add demo-secret --data-file=-
List the versions to see the state of each. Versions have three states: ENABLED (readable), DISABLED (not readable but payload still stored), and DESTROYED (payload permanently deleted, version number preserved forever as a tombstone):
gcloud secrets versions list demo-secret \
--format="table(name,state,createTime.date('%Y-%m-%d %H:%M:%S'))"
Three enabled versions in reverse creation order:
NAME STATE CREATED
3 enabled 2026-04-11 06:07:34
2 enabled 2026-04-11 06:07:31
1 enabled 2026-04-11 05:05:48
The “latest” Alias Gotcha Every Tutorial Gets Wrong
Popular belief (including some official-looking blog posts) says the latest alias resolves to the highest-numbered enabled version, and that disabling a broken new version automatically rolls the alias back to the previous good one. This is the rollback story everyone tells, and it does not match what GCP actually does. We tested it to be sure.
Disable version 3 and try to access latest:
gcloud secrets versions disable 3 --secret=demo-secret
gcloud secrets versions access latest --secret=demo-secret
The access call fails, with GCP explicitly reporting that version 3 is disabled:
ERROR: (gcloud.secrets.versions.access) FAILED_PRECONDITION: Secret Version [projects/PROJECT_NUMBER/secrets/demo-secret/versions/3] is in DISABLED state.
That confirms the actual behavior: latest maps to the highest version number regardless of state. Disabling the highest version does not move the alias, it breaks the alias. To roll back a bad secret you have three real options, in order of increasing regret.
1. Pin your application to a specific version number instead of using latest. Disable version 3, your app still points at version 2, nothing breaks. This is the production pattern Google docs recommend and the reason pinning is considered good practice.
2. Add a new version with the old value. Version 4 now holds the known-good payload and latest resolves to it. Cost is $0.06 per month for the extra active version, which nobody notices.
3. Destroy version 3 permanently. The alias then resolves to version 2. This is irreversible and should be the last resort because auditors hate missing history.
Re-enable the version to restore the demo state:
gcloud secrets versions enable 3 --secret=demo-secret
IAM: The Four Roles Worth Knowing
Secret Manager ships five predefined roles but four of them cover every real use case. Treat the fifth (Viewer) as a read-metadata-only role for auditors.
| Role | Permission | Typical user |
|---|---|---|
| roles/secretmanager.admin | Full CRUD on secrets, versions, and IAM | Platform team operators |
| roles/secretmanager.secretVersionManager | Add, enable, disable, destroy versions on existing secrets | Rotation automation |
| roles/secretmanager.secretVersionAdder | Add new versions only (no destroy) | CI deploy pipeline writing new secret versions |
| roles/secretmanager.secretAccessor | Read secret payloads | Application workloads (the by-far most common grant) |
Grant at the secret level whenever you can. Granting secretAccessor at the project level means the identity can read every secret in the project, which is almost never what you want. The secret-level grant pattern looks like this:
gcloud secrets add-iam-policy-binding demo-secret \
--role=roles/secretmanager.secretAccessor \
--member="serviceAccount:app@PROJECT_ID.iam.gserviceaccount.com" \
--condition=None
The --condition=None flag is required to suppress the interactive prompt that gcloud otherwise opens asking if you want to add an IAM condition. IAM conditions are powerful (time-based access, resource attribute matching) but most deployments do not use them and the prompt is noise in scripts.
Regional Secrets: Data Residency Without Replication
Secret Manager has two service endpoints. The classic global endpoint (secretmanager.googleapis.com) requires a replication policy and stores data in multiple regions. The regional endpoint (secretmanager.REGION.rep.googleapis.com) pins a secret to exactly one region with no replication at all. Regional secrets exist for workloads with data residency requirements under GDPR, HIPAA, or similar regulatory frameworks where moving the bytes outside the region is a contractual violation.
Create a regional secret with the --location flag:
echo -n "eu-only-password" | gcloud secrets create eu-db-password \
--location=europe-west1 \
--data-file=-
Access it the same way as a global secret, but always specify the location. Cross-region access of regional secrets fails by design. If you are building a global application and you do not have a specific regulatory reason to pin, use global with automatic replication. If your legal or compliance team has a region-lock requirement, regional is the answer.
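From client code, regional secrets need both the regional API endpoint and the location-qualified resource name. A sketch, assuming the standard `client_options` override for the endpoint (project/secret IDs are placeholders):

```python
def regional_version_name(project_id: str, location: str,
                          secret_id: str, version: str = "latest") -> str:
    """Regional secrets carry the location in the resource name itself."""
    return (f"projects/{project_id}/locations/{location}"
            f"/secrets/{secret_id}/versions/{version}")

def read_regional_secret(project_id: str, location: str,
                         secret_id: str, version: str = "latest") -> bytes:
    # Imported lazily so the name helper works without the package.
    from google.cloud import secretmanager

    # Point the client at the regional endpoint; the global endpoint
    # will not serve this secret.
    client = secretmanager.SecretManagerServiceClient(
        client_options={"api_endpoint": f"secretmanager.{location}.rep.googleapis.com"}
    )
    name = regional_version_name(project_id, location, secret_id, version)
    return client.access_secret_version(request={"name": name}).payload.data
```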
Rotation: Notification, Not Automatic Replacement
Rotation in GCP Secret Manager does not mean “GCP generates a new password for you and updates the downstream system.” That is AWS Secrets Manager’s model with the Lambda rotation templates. GCP does notification-based rotation: you configure a rotation period and a Pub/Sub topic, GCP publishes a SECRET_ROTATE message to the topic at the configured interval, and you run your own Cloud Function or Cloud Run service subscribed to the topic that does the actual work (call the database admin API, generate a new credential, add a new version to the secret, update the consumer). Google never touches the upstream system.
Configure a rotation schedule on an existing secret. Rotation periods are specified in seconds (minimum 3,600 = 1 hour, maximum 3,153,600,000 = 100 years). The next-rotation-time must be at least five minutes in the future:
gcloud pubsub topics create secret-rotation-events
gcloud secrets update demo-secret \
--next-rotation-time="2026-05-01T00:00:00Z" \
--rotation-period="2592000s" \
--topics=projects/PROJECT_ID/topics/secret-rotation-events
The 2592000s value is thirty days. The Secret Manager service agent needs roles/pubsub.publisher on the topic, otherwise the rotation notification fires into the void. Grant it with a one-liner:
PROJECT_NUMBER=$(gcloud projects describe PROJECT_ID --format="value(projectNumber)")
gcloud pubsub topics add-iam-policy-binding secret-rotation-events \
--role=roles/pubsub.publisher \
--member="serviceAccount:service-${PROJECT_NUMBER}@gcp-sa-secretmanager.iam.gserviceaccount.com"
Each rotation notification costs $0.05 and the free tier includes three per month. The Pub/Sub topic is free at this volume. The downstream Cloud Function that handles the event is where the real work (and the real cost) lives, and is application-specific. A minimal handler signature in Python:
from google.cloud import secretmanager

def generate_new_credential() -> str:
    # Application-specific: call the upstream system's admin API and
    # return the freshly minted credential. Placeholder here.
    raise NotImplementedError

def handle_rotation(event, context):
    # Pub/Sub message attributes carry the event type and the full
    # secret resource name (projects/PROJECT_ID/secrets/NAME).
    attrs = event.get("attributes", {})
    if attrs.get("eventType") != "SECRET_ROTATE":
        return
    secret_name = attrs["secretId"]
    new_value = generate_new_credential()
    client = secretmanager.SecretManagerServiceClient()
    # The secret resource name is the parent of the new version.
    client.add_secret_version(
        request={"parent": secret_name, "payload": {"data": new_value.encode()}},
    )
CMEK: Customer-Managed Encryption Keys
By default, Secret Manager encrypts versions with Google-managed keys. You can bring your own Cloud KMS symmetric key as the key encryption key for extra control and to satisfy compliance frameworks that require customer-managed encryption. The setup has one trap that bites teams who default to automatic replication.
CMEK keys are region-scoped. Automatic replication stores the secret across multiple Google-managed regions. To combine the two, you have to provide a CMEK policy with one key per replica location before creating the secret, and this policy is part of the secret’s replication config at creation time. You cannot retrofit CMEK onto an existing secret, and you cannot use one global CMEK key because the global key does not exist in KMS.
The simpler approach for most teams is user-managed replication with a single region and a single CMEK key, which matches how people deploy production workloads that need key control anyway. The tradeoff is losing the multi-region durability of automatic replication, but a user-managed replication to two chosen regions with two CMEK keys is still cheaper and simpler to reason about than automatic with a key-per-Google-region policy. If you destroy a CMEK key or disable all of its versions, every secret version encrypted with that key becomes permanently unreadable. There is no “oops, undo” path.
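Creating a user-managed secret with per-replica CMEK keys is clearer in the API than in flags. A sketch of the create-secret request body, assuming the v1 API's `Replication.UserManaged.Replica` message shape (the helper only builds the request dict; pass it to `SecretManagerServiceClient().create_secret`):

```python
def cmek_secret_request(project_id: str, secret_id: str, replicas):
    """Build a create-secret request with one CMEK key per replica.

    replicas: list of (location, kms_key_name) pairs; each kms_key_name
    must be a KMS key in that same location.
    """
    return {
        "parent": f"projects/{project_id}",
        "secret_id": secret_id,
        "secret": {
            "replication": {
                "user_managed": {
                    "replicas": [
                        {
                            "location": location,
                            "customer_managed_encryption": {"kms_key_name": key},
                        }
                        for location, key in replicas
                    ]
                }
            }
        },
    }
```

Because replication is immutable, this request is your only chance to attach the keys; there is no update path later.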
Audit Logging: Who Read What
Cloud Audit Logs captures every Secret Manager operation, but the default log type depends on whether the operation mutates state or just reads it. Admin Activity logs are always on and always free and cover every create, update, delete, enable, disable, destroy, and IAM change. Data Access logs cover reads (AccessSecretVersion, GetSecret, ListSecrets, GetIamPolicy) and must be explicitly enabled per service in the project’s IAM audit config, because they can generate large log volumes on busy projects and those logs are billable ingestion into Cloud Logging.
Enable Data Access logging for Secret Manager via the IAM audit policy:
gcloud projects get-iam-policy PROJECT_ID > /tmp/policy.yaml
# edit /tmp/policy.yaml and add:
# auditConfigs:
# - service: secretmanager.googleapis.com
# auditLogConfigs:
# - logType: DATA_READ
# - logType: ADMIN_READ
gcloud projects set-iam-policy PROJECT_ID /tmp/policy.yaml
Once enabled, every versions access call generates a log entry with the principal email, timestamp, secret resource name, caller IP, and authentication method. Query them in Logs Explorer:
gcloud logging read \
'protoPayload.serviceName="secretmanager.googleapis.com"
AND protoPayload.methodName="google.cloud.secretmanager.v1.SecretManagerService.AccessSecretVersion"' \
--limit=20 --format=json
The output is JSON with one log entry per access. Keep in mind Data Access logs are billed at standard Cloud Logging ingestion rates ($0.50 per GiB beyond the free tier), so enable them on high-traffic projects deliberately.
End-to-End: External Secrets Operator on GKE (Tested)
The production pattern for pulling Secret Manager secrets into a Kubernetes cluster is the External Secrets Operator. ESO is a Kubernetes-native controller that syncs secrets from external providers (GCP Secret Manager, AWS Secrets Manager, Vault, Azure Key Vault, and many more) into native Kubernetes Secret objects that pods can mount as volumes or env vars. The big win on GKE is that ESO authenticates via Workload Identity Federation, so the cluster never holds a JSON key for the Secret Manager API.
This section assumes you have a GKE cluster with Workload Identity enabled (Autopilot has it on by default) and a Kubernetes ServiceAccount already bound as a direct-access principal on the target secret. The GKE Workload Identity guide covers that setup in full. We reuse the same demo namespace, demo-app KSA, and demo-secret from there.
Install ESO via its Helm chart into its own namespace:
helm repo add external-secrets https://charts.external-secrets.io
helm repo update external-secrets
helm install external-secrets external-secrets/external-secrets \
--namespace external-secrets \
--create-namespace \
--set installCRDs=true \
--wait --timeout 5m
Three ESO pods should come up in the external-secrets namespace (controller, webhook, cert-controller):
kubectl -n external-secrets get pods
Expected output, all three Running:
NAME READY STATUS RESTARTS AGE
external-secrets-6449b64b4c-9z8t5 1/1 Running 0 2m3s
external-secrets-cert-controller-59b6f778d9-6pghv 1/1 Running 0 2m3s
external-secrets-webhook-d9ccd5985-s79jn 1/1 Running 0 2m3s
Create a SecretStore in the demo namespace. The workloadIdentity auth block tells ESO to use the annotated Kubernetes ServiceAccount’s identity rather than a static key file. The ESO API version is external-secrets.io/v1 on 0.15+, which matches the current stable chart version as of April 2026:
vim secret-store.yaml
Paste the following manifest. The clusterLocation and clusterName fields are required for ESO to construct the correct federated token exchange against GKE’s metadata server:
apiVersion: external-secrets.io/v1
kind: SecretStore
metadata:
name: gcp-demo-store
namespace: demo
spec:
provider:
gcpsm:
projectID: PROJECT_ID
auth:
workloadIdentity:
clusterLocation: europe-west1
clusterName: cfg-lab-gke
serviceAccountRef:
name: demo-app
Apply it and confirm ESO reports the store as Valid:
kubectl apply -f secret-store.yaml
kubectl -n demo get secretstore gcp-demo-store
The store flips to Ready within a few seconds once ESO validates it can authenticate to GCP:
NAME AGE STATUS CAPABILITIES READY
gcp-demo-store 33s Valid ReadWrite True
Now create an ExternalSecret that references the store and tells ESO to copy demo-secret version 3 into a Kubernetes Secret called demo-k8s-secret. The refreshInterval controls how often ESO re-pulls from the upstream, and version can be a specific number or the literal string latest:
vim external-secret.yaml
The manifest:
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
name: demo-external-secret
namespace: demo
spec:
refreshInterval: 1m
secretStoreRef:
kind: SecretStore
name: gcp-demo-store
target:
name: demo-k8s-secret
creationPolicy: Owner
data:
- secretKey: password
remoteRef:
key: demo-secret
version: "3"
Apply and check:
kubectl apply -f external-secret.yaml
kubectl -n demo get externalsecret,secret demo-k8s-secret
Within thirty seconds ESO reports SecretSynced and the native Kubernetes Secret exists:
NAME STORETYPE STORE REFRESH INTERVAL STATUS READY LAST SYNC
externalsecret.external-secrets.io/demo-external-secret SecretStore gcp-demo-store 1m SecretSynced True 31s
NAME TYPE DATA AGE
secret/demo-k8s-secret Opaque 1 32s
Confirm the synced value matches what is in Secret Manager:
kubectl -n demo get secret demo-k8s-secret -o jsonpath='{.data.password}' | base64 -d
The value matches version 3 of the upstream secret:
value-v3-latest
From here, mount demo-k8s-secret into a pod the same way you mount any other Kubernetes Secret. The app never knows or cares that the data came from GCP Secret Manager, the cluster never holds a JSON key file, and the refresh interval controls how quickly a rotation upstream propagates into the cluster. Every access is logged in Cloud Audit Logs on the Secret Manager side and in ESO’s own controller logs on the cluster side. This is the production pattern for GCP secret delivery to Kubernetes workloads and it is what you should use for anything new.
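For completeness, a minimal pod consuming the synced secret as an environment variable (pod name and image are illustrative; only the `secretKeyRef` wiring matters):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  namespace: demo
spec:
  serviceAccountName: demo-app
  containers:
    - name: app
      image: busybox:1.36
      command: ["sleep", "3600"]
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: demo-k8s-secret   # the Secret ESO created
              key: password           # the secretKey from the ExternalSecret
```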
Troubleshooting
Error: “Permission ‘secretmanager.versions.access’ denied”
Full error:
PERMISSION_DENIED: Permission 'secretmanager.versions.access' denied for resource 'projects/PROJECT_ID/secrets/demo-secret/versions/latest' (or it may not exist).
The identity making the call is missing roles/secretmanager.secretAccessor on either the specific secret or the parent project. Grant the role at the secret level (preferred) and wait two to seven minutes for IAM propagation before retrying. If the identity is a GKE Workload Identity principal, verify the principal:// URL uses the project number in the pool path and the project ID in the pool name.
Error: “Secret Version […] is in DISABLED state”
FAILED_PRECONDITION: Secret Version [projects/PROJECT_NUMBER/secrets/demo-secret/versions/3] is in DISABLED state.
The latest alias or a specific version call hit a disabled version. See the “latest alias gotcha” section for context. Either re-enable the version, pin to a known-good version number, or add a new version with the correct value.
Error: “Secret already has a replication policy”
FAILED_PRECONDITION: Secret [projects/PROJECT_ID/secrets/demo-secret] already has a replication policy
Replication is immutable. If you created a secret with automatic replication and need to change it to user-managed, you have to create a new secret, copy the versions over, and delete the old one. There is no in-place update. This is also why CMEK retrofit is impossible: the CMEK policy is part of the replication config.
ESO SecretStore stuck in “InvalidProviderConfig”
Almost always caused by one of three things: (a) the clusterLocation does not match the real GKE region, (b) the referenced Kubernetes ServiceAccount lacks the IAM binding on Secret Manager, or (c) on clusters with both GKE WI and ESO’s older auth modes configured, ESO picks the wrong provider. Check the ESO controller logs with kubectl -n external-secrets logs deploy/external-secrets and look for the exact reason. The fix for the IAM case is the same as the first troubleshooting entry: add the binding and wait for propagation.
GCP Secret Manager vs AWS Secrets Manager
If you already know AWS Secrets Manager, this table is the fast answer. Everything else maps to familiar concepts. For the deep dive on the AWS side, see our AWS Secrets Manager tutorial.
| Aspect | GCP Secret Manager | AWS Secrets Manager |
|---|---|---|
| Price per version | $0.06/version/location/month | $0.40/secret/month regardless of version count |
| Access pricing | $0.03 per 10,000 calls | $0.05 per 10,000 calls |
| Free tier | 6 versions + 10k ops + 3 rotations always | 30-day trial only |
| Rotation model | Pub/Sub notification. You write the rotation function | Lambda-based. AWS ships rotation templates for RDS, Redshift, DocumentDB |
| Rotation bounds | 1 hour to 100 years | 4 hours to 1000 days |
| Replication | Automatic (multi-region, 1 location billing) or user-managed (explicit regions, N location billing). Immutable after creation | Primary region plus on-demand replica regions, mutable. Each replica billed as full secret |
| Versioning | Monotonic integer + latest alias + custom aliases. Disable to soft-delete | Staging labels (AWSCURRENT, AWSPREVIOUS, AWSPENDING). Move labels to roll back |
| Encryption | Google-managed or CMEK (KMS, EKM). CMEK with auto-replication requires key-per-replica policy at creation | AWS-managed KMS or customer CMK. Per-region |
| IAM | 5 predefined roles, secret-level IAM supported | IAM identity policies + secret resource policies |
| Data residency | Regional secrets endpoint, no replication | Single-region secret (no replica regions) |
Practical verdict. GCP Secret Manager is cheaper at low to medium scale thanks to the always-on free tier and per-version pricing. AWS wins on the rotation story specifically for RDS, Redshift, and DocumentDB where the Lambda templates save real work. GCP’s versioning semantics (monotonic integer with disable-to-rollback) are easier to reason about than AWS staging labels, but the latest alias behavior documented in this article is a sharp edge you need to plan around. Pick based on where your workloads live; there is no rewriting your stack for a $0.40 per secret difference.
FAQ
What is the difference between global and regional secrets in GCP Secret Manager?
Global secrets are the classic offering. They have a replication policy (automatic or user-managed) and the payload lives in multiple regions. Regional secrets are pinned to exactly one region, have no replication, and use a different service endpoint (secretmanager.REGION.rep.googleapis.com). Use regional only when a regulatory requirement mandates that data cannot leave a specific region. For everything else, use global with automatic replication.
Does GCP Secret Manager rotate secrets automatically?
Not on its own. Secret Manager supports notification-based rotation: at the configured interval it publishes a SECRET_ROTATE message to a Pub/Sub topic you set up. A Cloud Function or Cloud Run service subscribed to the topic must actually generate the new credential (call the database admin API, create a new IAM key, whatever the upstream system requires) and add a new version to the secret. GCP never modifies the upstream system for you. This is the opposite of AWS Secrets Manager’s Lambda rotation model where AWS-provided templates handle the full rotation for supported databases.
How does the “latest” alias work when I disable the highest version?
The latest alias resolves to the highest version number, not the highest enabled version. If you disable version 3, an application using latest starts getting a FAILED_PRECONDITION error instead of silently falling back to version 2. To roll back a bad secret, either pin your application to a specific version number, add a new version with the known-good value (which then becomes the new latest), or permanently destroy the bad version. Pinning to explicit version numbers is the recommended production pattern for exactly this reason.
Can I change the replication policy after creating a secret?
No. Replication policy is immutable. To switch from automatic to user-managed or to change the list of user-managed regions, you must create a new secret with the desired policy, copy the versions from the old secret into the new one, and delete the old secret. This is the same constraint that makes CMEK retrofit impossible.
Does using External Secrets Operator on GKE require a JSON key file?
No. ESO supports Workload Identity Federation for GKE natively. Configure the SecretStore with the workloadIdentity auth block referencing a Kubernetes ServiceAccount that has been bound to the Secret Manager IAM role (either via direct resource access or legacy GSA impersonation), and ESO will authenticate through the cluster’s metadata server. No JSON key file should ever touch a production GKE cluster in 2026.
How much does Secret Manager cost in practice?
For most projects, nothing. The always-on free tier covers six active versions and ten thousand access operations per month, which is enough for small production workloads. A medium-sized project with twenty secrets and a few million reads per month pays single-digit dollars. The only way Secret Manager becomes a meaningful line item is if you build a bad loop that queries the same secret on every HTTP request. Cache the secret in memory for at least one minute and you are almost always back inside the free tier.
Where to Go Next
Secret Manager is usually the first GCP security primitive a team puts in production because everything else depends on it. Natural next steps are wiring it into the CI/CD pipeline (GitHub Actions via Workload Identity Federation, no JSON keys), cross-project access from a shared secrets project to application projects, and mapping the existing on-prem secret store (HashiCorp Vault, AWS Secrets Manager from a previous cloud) onto the same model. The official Secret Manager overview, the ESO GCP provider docs, and our GKE Workload Identity guide are the three references worth bookmarking.