Cloud SQL and GKE connection

Cloud SQL and GKE connection with Cloud SQL Proxy and Workload Identity (Part 1)

Keeping the database outside a Kubernetes cluster has benefits such as reduced complexity and a stateless application. To achieve that, we need a secure, reliable, and modular connection between the application workload inside the cluster and the external database.

In our case study, the cluster is on Google GKE and the database is a Cloud SQL for PostgreSQL instance. To access a Cloud SQL instance from an application running in Google Kubernetes Engine, you can use either the Cloud SQL Proxy (with public or private IP), or connect directly using a private IP address.

The Cloud SQL Proxy is the recommended way to connect to Cloud SQL, even when using private IP, because the proxy provides strong encryption and authentication using IAM, which helps keep your database secure.
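In practice, the proxy is usually run as a sidecar container next to the application. A minimal sketch of such a Deployment, assuming hypothetical container names, image tags, and instance connection name (PROJECT_ID:REGION:INSTANCE is a placeholder), might look like:

```yaml
# Sketch of a cloud-sql-proxy sidecar; the instance connection name,
# image tags, and names below are placeholders, not values from this setup.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
spec:
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      serviceAccountName: keycloak-bora     # the KSA bound via Workload Identity
      containers:
        - name: keycloak
          image: quay.io/keycloak/keycloak:latest
          env:
            - name: DB_ADDR
              value: "127.0.0.1"            # the app talks to the proxy on localhost
        - name: cloud-sql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.19.1
          command:
            - "/cloud_sql_proxy"
            - "-instances=PROJECT_ID:REGION:INSTANCE=tcp:5432"
```

Because the proxy listens on localhost, the application needs no TLS or IP allow-list configuration of its own.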

Workload Identity

With Workload Identity, you can configure a Kubernetes service account to act as a Google service account. Any application running as the Kubernetes service account automatically authenticates as the Google service account when accessing Google Cloud APIs. This enables you to assign fine-grained identity and authorization for applications in your cluster.

To accomplish a secure mapping between Kubernetes service accounts and Google service accounts, Workload Identity introduces the concept of a cluster’s workload identity pool, which allows Identity and Access Management (IAM) to trust and understand Kubernetes service account credentials.

When you enable Workload Identity on your GKE cluster, the cluster’s workload identity pool is set to PROJECT_ID.svc.id.goog. This lets IAM authenticate Kubernetes service accounts as the following member name: serviceAccount:PROJECT_ID.svc.id.goog[K8S_NAMESPACE/KSA_NAME]

In this member name:

  • PROJECT_ID.svc.id.goog is the workload identity pool set on the cluster.
  • KSA_NAME is the name of the Kubernetes service account making the request.
  • K8S_NAMESPACE is the Kubernetes namespace where the Kubernetes service account is defined.
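Concretely, the member string can be assembled like this (the namespace and KSA name are the ones used later in this walkthrough; the project ID is a made-up example):

```shell
# Build the IAM member name that Workload Identity uses for a KSA.
# PROJECT_ID is a hypothetical placeholder; substitute your own project.
PROJECT_ID="my-project"
K8S_NAMESPACE="keycloak"
KSA_NAME="keycloak-bora"

MEMBER="serviceAccount:${PROJECT_ID}.svc.id.goog[${K8S_NAMESPACE}/${KSA_NAME}]"
echo "${MEMBER}"
```

This is the exact string passed to the `--member` flag in the binding commands below.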

There is only one fixed workload identity pool per Google Cloud project, and it is automatically created for you.

In addition to enabling Workload Identity at the cluster level, the GKE Metadata Server must be enabled on each node pool.

To enable Workload Identity on an existing cluster, update the cluster with the following command (PROJECT_ID is a placeholder for the project that hosts the cluster):

gcloud container clusters update bora-playground-301612-dev --zone europe-west2-b \
--workload-pool=PROJECT_ID.svc.id.goog

We need to bind a Google service account (GSA) and a Kubernetes service account (KSA) to each other, but first we need to find out which project owns the GSA that holds the Cloud SQL Admin, Client, and Editor roles. Here is the explanation for the case where the GKE compute instances and the SQL instance live in different projects:

If the Compute Engine instance is in the same project as the Cloud SQL instance,
the default service account for the Compute Engine instance has the necessary
permissions for authenticating the proxy. If the two instances are in different
projects, you must add the Compute Engine instance's service account to the project
containing the Cloud SQL instance.
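The cross-project layout can be sketched as follows: the GSA is created in the project running GKE, but must also be granted Cloud SQL roles in the project hosting the instance. All names below are hypothetical placeholders:

```shell
# Hypothetical cross-project layout; substitute your own names.
GKE_PROJECT="gke-project"        # project containing the GKE cluster
SQL_PROJECT="sql-project"        # project containing the Cloud SQL instance
GSA_NAME="cloudsql-proxy-sa"     # made-up Google service account name

# The GSA's email is scoped to the project where it was created:
GSA_EMAIL="${GSA_NAME}@${GKE_PROJECT}.iam.gserviceaccount.com"
echo "${GSA_EMAIL}"

# This email would then be granted roles/cloudsql.client on ${SQL_PROJECT}
# via `gcloud projects add-iam-policy-binding` (not run here).
```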

Here is the first binding command, run against the project where GKE is running (GSA_NAME and PROJECT_ID are placeholders):

gcloud iam service-accounts add-iam-policy-binding \
GSA_NAME@PROJECT_ID.iam.gserviceaccount.com \
--role roles/iam.workloadIdentityUser \
--member "serviceAccount:PROJECT_ID.svc.id.goog[keycloak/keycloak-bora]"

The account running this command should have the following:

To manage access to a project, you need a role that includes the following permissions:

  • resourcemanager.projects.get
  • resourcemanager.projects.getIamPolicy
  • resourcemanager.projects.setIamPolicy

To gain these permissions while following the principle of least privilege, ask your administrator to grant you one of the following roles:

  • Project IAM Admin (roles/resourcemanager.projectIamAdmin)
  • Security Admin (roles/iam.securityAdmin)

For the kunets-internal project’s service account (GSA_NAME and PROJECT_ID are placeholders):

gcloud iam service-accounts add-iam-policy-binding \
GSA_NAME@kunets-internal.iam.gserviceaccount.com \
--role roles/iam.workloadIdentityUser \
--member "serviceAccount:PROJECT_ID.svc.id.goog[keycloak/keycloak-bora]" \
--project kunets-internal

Add an annotation to the Kubernetes service account to complete the binding (GSA_NAME@PROJECT_ID.iam.gserviceaccount.com is a placeholder for your Google service account’s email):

kubectl annotate serviceaccount \
keycloak-bora \
iam.gke.io/gcp-service-account=GSA_NAME@PROJECT_ID.iam.gserviceaccount.com \
-n keycloak
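Equivalently, the annotation can be declared on the ServiceAccount manifest itself, which is easier to keep in version control (the GSA email is again a placeholder):

```yaml
# Declarative equivalent of the kubectl annotate command above.
# GSA_NAME@PROJECT_ID.iam.gserviceaccount.com is a placeholder.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: keycloak-bora
  namespace: keycloak
  annotations:
    iam.gke.io/gcp-service-account: GSA_NAME@PROJECT_ID.iam.gserviceaccount.com
```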

Migrate applications to Workload Identity

Select the migration strategy that fits your environment: node pools can be migrated in place, or you can create new node pools with Workload Identity enabled. Creating new node pools is recommended, especially if you also need to modify your application to be compatible with this feature; in some cases it may even be worth creating a new cluster with Workload Identity enabled from the start.

Option 1: Node pool creation with Workload Identity (recommended)

Add a new node pool to the cluster with Workload Identity enabled and manually migrate workloads to that pool. This succeeds only if Workload Identity is enabled on the cluster. (NEW_POOL_NAME is a placeholder for the new pool’s name.)

gcloud container node-pools create NEW_POOL_NAME \
--cluster=bora-playground-301612-dev --workload-metadata=GKE_METADATA

Option 2: Node pool modification

Modify an existing node pool to enable GKE_METADATA. This update succeeds only if Workload Identity is enabled on the cluster. It immediately enables Workload Identity for workloads deployed to the node pool. This change will prevent workloads from using the Compute Engine service account and must be carefully rolled out.

gcloud container node-pools update production
--cluster=bora-playground-301612-dev --workload-metadata=GKE_METADATA

This action requires container.nodes.update permissions on the project.


I discovered some instability when I updated an existing node pool to enable GKE_METADATA: one of my pods (external-dns) could no longer reach the Google DNS zone because of missing permissions or credentials, so please check whether your default service account is still usable.

time="2021-02-10T21:59:12Z" level=error msg="googleapi: Error 401: Request had
invalid authentication credentials. Expected OAuth 2 access token, login cookie
or other valid authentication credential.
More details:\nReason: authError, Message: Invalid Credentials\n"

After some investigation into the problem: external-dns was using the cluster nodes’ default service account, which after the migration could no longer authenticate to the Google Cloud service APIs, so we have to repeat the same process above with a separate Google service account and Kubernetes service account:

gcloud iam service-accounts add-iam-policy-binding \
GSA_NAME@PROJECT_ID.iam.gserviceaccount.com \
--role roles/iam.workloadIdentityUser \
--member "serviceAccount:PROJECT_ID.svc.id.goog[ingress-basic/external-dns]"

kubectl annotate serviceaccount external-dns -n ingress-basic \
iam.gke.io/gcp-service-account=GSA_NAME@PROJECT_ID.iam.gserviceaccount.com

Test the connections

Verify the service accounts are configured correctly by creating a Pod with the Kubernetes service account that runs the cloud-sdk container image, and connecting to it with an interactive session.

kubectl run -it --image google/cloud-sdk:slim --serviceaccount keycloak-bora --namespace keycloak workload-identity-test

The google/cloud-sdk image includes the gcloud command-line tool which is a convenient way to consume Google Cloud APIs. It may take some time to download the image.

This action requires create Pods RBAC permission within the namespace.

You are now connected to an interactive shell within the created Pod. Run the following command inside the Pod:

gcloud auth list

If the service accounts are correctly configured, the Google service account email address is listed as the active (and only) identity. This demonstrates that by default, the Pod uses the Google service account’s authority when calling Google Cloud APIs.

It is also possible to connect to a Cloud SQL instance on the same VPC using private IP, without the proxy.
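For completeness, the direct private-IP approach simply points the application at the instance’s private address instead of the localhost proxy; a hypothetical fragment of the container spec (the IP is an example, not from this setup):

```yaml
# Hypothetical: direct connection over private IP, bypassing the proxy.
env:
  - name: DB_ADDR
    value: "10.0.0.5"   # the Cloud SQL instance's private IP (example value)
  - name: DB_PORT
    value: "5432"
```

Note that this trades the proxy’s IAM-based authentication and encryption for simplicity, so it relies entirely on VPC-level network controls.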