When pods need permissions to access cloud services in Kubernetes, the go-to practice is to use service accounts. The managed Kubernetes offerings of the various clouds have different implementations of the same concept: EKS has IRSA (IAM Roles for Service Accounts) and GKE has Workload Identity. The service accounts your containers use are allowed to impersonate cloud IAM roles (AWS) or service accounts (GCP), which gives them access to cloud resources. There are alternatives, such as AWS instance roles, but they are not fine-grained enough for containerized workloads: every container has access to every resource the node is allowed to access. Coming from a non-Kubernetes background this might look a bit complex, but preexisting Terraform modules simplify creating the resources that allow Kubernetes service accounts to impersonate cloud identities and access cloud resources.
Both Workload Identity and IRSA are well documented, so this blog post serves more as a go-to Terraform template for using Kubernetes service accounts.
GKE
By default the most popular GKE module has the variable identity_namespace enabled, which sets the cluster's workload pool to PROJECT_ID.svc.id.goog.
If Workload Identity is not enabled on your cluster and you are not using Terraform, you can enable it with the command below:
gcloud container clusters update CLUSTER_NAME \
  --region=COMPUTE_REGION \
  --workload-pool=PROJECT_ID.svc.id.goog
Now let's create our service account and give it access to cloud resources. In the example below we'll allow Grafana's Loki to store logs in a GCS bucket. First we'll create our GCP service account and allow our Kubernetes service account to impersonate it. In my use case I won't be creating the Kubernetes service account with Terraform, since I don't manage my Kubernetes cluster with Terraform; I'll specify that we use a preexisting Kubernetes service account and that we do not annotate it. We'll instead add the annotation as a parameter to the Loki Helm chart later on. The annotation follows a common naming schema and is not randomized, so it is easy to predict. You'll need to make sure that the k8s_sa_name and namespace parameters match the service account and the namespace it will be deployed to. In our case we are deploying the Loki Helm chart to the logging namespace, and the default name of the Loki service account is loki.
module "gke_loki_sa" {
source = "terraform-google-modules/kubernetes-engine/google//modules/workload-identity"
version = "20.0.0"
project_id = var.gcp_project_id
use_existing_k8s_sa = true
annotate_k8s_sa = false
name = "loki-gcs"
k8s_sa_name = "loki"
namespace = "logging"
}
When we create our GCS bucket below, we'll set the service account output by the module above as its admin.
module "loki_gcs_buckets" {
source = "terraform-google-modules/cloud-storage/google"
version = "3.2.0"
project_id = var.gcp_project_id
names = [var.environment]
prefix = "loki"
set_admin_roles = true
admins = [format("serviceAccount:%s", module.gke_loki_sa.gcp_service_account_email)]
}
Now you can add the Workload Identity annotation to your Loki service account, and the pod should have access to the GCS bucket.
# Loki Helm chart
...
serviceAccount:
  annotations:
    'iam.gke.io/gcp-service-account': 'loki-gcs@<project-name>.iam.gserviceaccount.com'
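To apply this, a minimal usage sketch, assuming the chart is available as grafana/loki and the snippet above is saved as values.yaml (both names are illustrative):

# Deploy Loki with the values file containing the annotated service account
helm upgrade --install loki grafana/loki \
  --namespace logging \
  -f values.yaml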
You can also use the Terraform Kubernetes provider and let Terraform create the service account.
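A minimal sketch, assuming the Kubernetes provider is already configured against your cluster and the chart's own service account creation is disabled (e.g. serviceAccount.create: false in the Loki values):

resource "kubernetes_service_account" "loki" {
  metadata {
    name      = "loki"
    namespace = "logging"

    # Reuse the GCP service account email output by the workload-identity module above.
    annotations = {
      "iam.gke.io/gcp-service-account" = module.gke_loki_sa.gcp_service_account_email
    }
  }
}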
EKS
For our EKS example we'll create a KMS key to auto-unseal our Vault instances. IRSA should be enabled by default if you are using the EKS module. If you have provisioned the EKS cluster in a different fashion, you can enable IRSA with eksctl using the command eksctl utils associate-iam-oidc-provider --cluster <cluster> --approve.
Now that IRSA is enabled, we'll start by creating the required cloud resources: a KMS key, an alias for it, and an IAM policy that allows encrypting and decrypting with the key and describing it.
locals {
  vault_service_account_name      = "vault"
  vault_service_account_namespace = "auth"
}

resource "aws_kms_key" "vault" {
  description = "Used for Vault in the ${var.env} env."
}

resource "aws_kms_alias" "vault" {
  name          = "alias/vault-${var.env}"
  target_key_id = aws_kms_key.vault.key_id
}

resource "aws_iam_policy" "vault" {
  name        = "${local.vault_service_account_name}-${var.env}"
  description = "Vault service account in ${var.env}"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = [
          "kms:Encrypt",
          "kms:Decrypt",
          "kms:DescribeKey"
        ]
        Resource = aws_kms_key.vault.arn
        Effect   = "Allow"
      },
    ]
  })
}
Now that we have the required cloud resources, we can create our IAM role and allow the Kubernetes service account to impersonate it using the IRSA module. As with the GCP setup, the name and namespace need to match the deployed Kubernetes resource: our Vault instances are deployed in the auth namespace using Helm, and the default service account name for the Vault chart is vault. Our OIDC provider URL uses the output value from our EKS module. If you are not using the EKS module to create the cluster, you can look the URL up manually with aws eks describe-cluster --name cluster_name --query "cluster.identity.oidc.issuer" --output text.
module "vault_iam_assumable_role_admin" {
source = "terraform-aws-modules/iam/aws//modules/iam-assumable-role-with-oidc"
version = "4.17.1"
create_role = true
role_name = "${local.vault_service_account_name}-${var.env}"
provider_url = module.eks.oidc_provider
role_policy_arns = [aws_iam_policy.vault.arn]
oidc_fully_qualified_subjects = ["system:serviceaccount:${local.vault_service_account_namespace}:${local.vault_service_account_name}"]
}
Now we'll configure the annotation on our Kubernetes service account as a Helm parameter for the Vault chart.
# Vault Helm chart
serviceAccount:
  annotations:
    'eks.amazonaws.com/role-arn': 'arn:aws:iam::<AWS Account ID>:role/vault-<env>'
Similarly to the GCP approach you can use the Terraform Kubernetes provider to create the service account using outputs from the above module.
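A minimal sketch, again assuming a configured Kubernetes provider and that the chart's bundled service account is disabled:

resource "kubernetes_service_account" "vault" {
  metadata {
    name      = local.vault_service_account_name
    namespace = local.vault_service_account_namespace

    # iam_role_arn is an output of the IRSA module above.
    annotations = {
      "eks.amazonaws.com/role-arn" = module.vault_iam_assumable_role_admin.iam_role_arn
    }
  }
}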
That's all! A quick, simple, and reusable approach to allowing Kubernetes pods to use service accounts to access cloud resources.