
GitOps Secret Management with Vault, ArgoCD and Tanka


Recently I wrote a blog post on how to use Grafana's Tanka with ArgoCD, which is my preferred way to write Kubernetes configuration in Jsonnet. However, that post does not go into detail on the last missing piece: how to manage secret credentials when using Tanka with ArgoCD.

When using GitOps you'd prefer to define your secrets in Git as well. However, checking unencrypted secrets into Git is not a good approach, and creating them manually makes it hard to reproduce the configuration across multiple environments. A manual process is error prone: it makes it difficult to know what is and isn't deployed, there is no operations log, and it reintroduces all the problems GitOps is meant to solve.

Vault is a product from HashiCorp used to manage secrets safely. It integrates very well with Kubernetes: a Helm chart simplifies deployment, and a sidecar injector can inject secrets directly into containers. Which secrets to inject is defined by annotations, a pattern shared by many other Kubernetes applications, which makes the integration feel very native to Kubernetes.

Deploying Vault with Consul as the Backend

Note: This blog post does not go into all the details of deploying Vault; the official documentation is great for that purpose. I'll just set it up so that it works well with ArgoCD and Tanka.

As in the blog post GitOps with ArgoCD and Tanka, we use the Helm Operator's HelmRelease CRD to configure Helm releases declaratively. First, we need to set up the storage backend for Vault and a GCP/AWS KMS key to auto-unseal Vault. We'll use Consul as the Vault backend and deploy it using the Helm chart version 0.27.0 with the following values:

server: {
  replicas: 3,
},
ui: {
  enabled: true,
},
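
For reference, wrapped in a HelmRelease this might look roughly like the sketch below. This is a minimal sketch assuming the Helm Operator v1 schema, HashiCorp's Helm chart repository and an auth namespace; adjust it to your setup.

// Hypothetical sketch: Consul HelmRelease in Jsonnet
local consulValues = {
  server: { replicas: 3 },
  ui: { enabled: true },
};

{
  apiVersion: 'helm.fluxcd.io/v1',
  kind: 'HelmRelease',
  metadata: { name: 'consul', namespace: 'auth' },
  spec: {
    releaseName: 'consul',
    chart: {
      repository: 'https://helm.releases.hashicorp.com',
      name: 'consul',
      version: '0.27.0',
    },
    values: consulValues,
  },
}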

We'll use 3 Consul replicas and enable the UI. Next we'll create the key for auto-unsealing Vault, using Terraform to create a GCP KMS key and a Kubernetes service account. The recommended way to access Google services from a GKE cluster is Workload Identity, which allows a Kubernetes service account to act as a GCP service account; AWS has a similar approach called IRSA (IAM Roles for Service Accounts). These approaches work best for me when accessing managed services in either cloud, and configuring them with Terraform is the most convenient. I'll walk through how to configure GCP KMS access with Terraform, but whatever approach you use should work fine; only the values for the Helm chart will vary based on the cloud.

The Terraform snippet below needs two providers, the Kubernetes provider and the GCP provider. It also uses the workload-identity module to create a Kubernetes service account.

locals {
  key_ring         = "vault"
  crypto_key       = "vault-default"
  keyring_location = "global"
}

resource "google_kms_key_ring" "key_ring" {
  project  = var.gcp_project_id
  name     = local.key_ring
  location = local.keyring_location
}

resource "google_kms_crypto_key" "crypto_key" {
  name            = local.crypto_key
  key_ring        = google_kms_key_ring.key_ring.self_link
  rotation_period = "1000000s" // Configure this to your needs
}

module "gke_vault_sa" {
  source                          = "terraform-google-modules/kubernetes-engine/google//modules/workload-identity"
  project_id                      = var.gcp_project_id
  version                         = "v12.0.0"
  automount_service_account_token = true
  name                            = "vault-kms"
  namespace                       = "auth" // Your namespace
}

resource "google_kms_key_ring_iam_binding" "vault_iam_kms_binding" {
  key_ring_id = google_kms_key_ring.key_ring.id
  role        = "roles/owner"

  members = [
    format("serviceAccount:%s", module.gke_vault_sa.gcp_service_account_email),
  ]
}

The above snippet creates a service account named vault-kms in the auth namespace. Through the IAM binding it has full ownership of the vault key ring, and therefore of the vault-default crypto key we created. We can now use that service account for our Vault pods, point them at the GCP KMS key above, and they will have the access they need.

We'll deploy Vault using the Helm chart version 0.8.0 with the following values:

ui: {
  enabled: true,
},
server: {
  serviceAccount: {
    create: false,
    name: 'vault-kms',
  },
  ha: {
    enabled: true,
    replicas: 3,
    config: {
      ui: true,
      storage: {
        consul: {
          path: 'vault',
          address: 'consul-consul-server:8500',
        },
      },
      seal: {
        gcpckms: {
          project: 'honeylogic',
          region: 'global',
          key_ring: 'vault',
          crypto_key: 'vault-default',
        },
      },
    },
  },
},

Vault will be set up with high availability and a Consul backend. We also specify the GCP KMS seal and use the service account we created.

We should now have a Vault deployment that auto-unseals using GCP KMS!
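
Keep in mind that auto-unseal removes the manual unseal step, not the one-time initialization of a new cluster. A minimal sketch, assuming kubectl access to the vault-0 pod:

# Initialize Vault once; with GCP KMS auto-unseal this returns
# recovery keys and an initial root token instead of unseal keys.
kubectl exec -it vault-0 -- vault operator init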

Adding Kubernetes as an Authentication Backend to Vault

We need to set up Kubernetes as an authentication backend for Vault. This makes it possible for Kubernetes service accounts to authenticate and be granted Vault policies. We'll start by adding the auth backend using Terraform.

resource "vault_auth_backend" "kubernetes" {
  type = "kubernetes"
}

Next we'll configure the backend via the CLI. Exec into a Vault container:

kubectl exec -it vault-0 -- /bin/sh

Then run the following command to configure the Kubernetes host and port, the token reviewer JWT and the CA cert:

vault write auth/kubernetes/config \
        token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
        kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443" \
        kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

Vault has an in-depth guide on setting up Kubernetes as an authentication backend.

We'll create a Vault policy that gives read and list access to the ArgoCD KV storage.

resource "vault_policy" "argo_cd" {
  name = "argo-cd"

  policy = <<EOT
path "kv/data/argo-cd"
{
  capabilities = ["read","list"]
}
EOT
}

Lastly, we'll create our Kubernetes auth backend role. It references the Vault policy and the Kubernetes backend, and specifies the allowed service account name and namespace.

resource "vault_kubernetes_auth_backend_role" "argo_cd" {
  backend                          = vault_auth_backend.kubernetes.path
  role_name                        = "argo-cd"
  bound_service_account_names      = ["argo-cd-repo-server"]
  bound_service_account_namespaces = ["ci-cd"]
  token_ttl                        = 3600
  token_policies                   = [vault_policy.argo_cd.name]
}

Now any pod using the argo-cd-repo-server service account in the ci-cd namespace can read and list secrets under the path kv/data/argo-cd, as long as it uses the role argo-cd.
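
To sanity-check the role, you can log in manually from a pod running with that service account. A minimal sketch, using the default service account token mount path:

# Run this inside a pod that uses the argo-cd-repo-server service account
vault write auth/kubernetes/login \
    role=argo-cd \
    jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"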

Injecting secrets into the ArgoCD Repo Server Pod

Now that our ArgoCD repo server has access to the Vault secrets, we can inject them. The injection is done using annotations, which feels very natural in Kubernetes. We need 4 annotations to inject a secret file into the pod. First we declare that we want the Vault agent injected and specify which role it uses:

    'vault.hashicorp.com/agent-inject': 'true',
    'vault.hashicorp.com/role': 'argo-cd',

Next we choose which key-value path to extract the secrets from and how to template them into Jsonnet. The annotation keys are dynamic: they include the name of the secret file you're injecting and how that specific secret is templated. The pattern is vault.hashicorp.com/agent-inject-secret-<secret name> for choosing the key-value path and vault.hashicorp.com/agent-inject-template-<secret name> for templating the secret. We'll template into Jsonnet and name the secret file secrets.jsonnet. In the Vault UI we define the below secrets:

[Image: the kv/argo-cd secret in the Vault UI, containing aws and basic_auth entries]

We'll extract the variables aws and basic_auth into the secrets.jsonnet file.

'vault.hashicorp.com/agent-inject-secret-secrets.jsonnet': 'kv/argo-cd',
'vault.hashicorp.com/agent-inject-template-secrets.jsonnet': |||
  {{- with secret "kv/argo-cd" }}
  {
    "aws": {
    {{- range $k, $v := .Data.data.aws }}
      {{ $k }}:"{{ $v }}",
    {{- end }}
    },
    "basic_auth": {
    {{- range $k, $v := .Data.data.basic_auth }}
      {{ $k }}:"{{ $v }}",
    {{- end }}
    },
  }
  {{- end }}
|||,

Now an init container will be added to our ArgoCD repo server pod, which fetches the secrets and mounts them in the /vault/secrets/ directory.
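
For clarity, given the template above and assuming the aws entry holds access_key and secret_key (as used for Loki later), the rendered /vault/secrets/secrets.jsonnet would look roughly like this, with values redacted:

{
  "aws": {
    access_key:"<redacted>",
    secret_key:"<redacted>",
  },
  "basic_auth": {
    // keys from the basic_auth secret, rendered the same way
  },
}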

Lastly, Tanka resolves imports relative to its own root directory, not the OS root /, so importing from /vault/secrets/ directly will not work; Tanka would look for the file inside its project directory. On startup we therefore copy /vault/secrets/secrets.jsonnet to ./environments/${TK_ENV}. We'll change our Tanka plugin init command to the following:

configManagementPlugins: std.manifestYamlDoc(
  [
    {
      name: 'tanka',
      init: {
        command: [
          'sh',
          '-c',
        ],
        args: [
          'jb install && cp ${SECRET_FILE} ./environments/${TK_ENV}',
        ],
      },
      // The generate command is unchanged from the previous post and omitted here.
    },
  ]
),

We'll also pass $SECRET_FILE as an environment variable to our plugin:

kind: 'Application',
metadata: {
  name: 'ops',
  namespace: 'ci-cd',
},
spec: {
  project: 'ops',
  source: {
    repoURL: 'https://github.com/honeylogic-io/ops',
    path: 'tanka',
    targetRevision: 'HEAD',
    plugin: {
      name: 'tanka',
      env: [
        {
          name: 'TK_ENV',
          value: 'default',
        },
        { // IMPORTANT
          name: 'SECRET_FILE',
          value: '/vault/secrets/secrets.jsonnet',
        },
      ],
    },
  },
},

In our Tanka environment we can now have a config.jsonnet file that imports secrets.jsonnet, making the secrets always available at the top layer.

// config.jsonnet

{
  _config+:: {
    secrets: import 'secrets.jsonnet',
  },
}

This makes the secrets available in all Jsonnet files for that Tanka environment! Simply use, for example, $._config.secrets.aws.secret_key to fetch the AWS secret key.

A full example of this is deploying the Loki Helm chart, which requires AWS S3 access. We would use the following values for the Helm chart:

values={
  config: {
    storage_config: {
      aws: {
        s3: 's3://%s:%s@eu-west-1/loki-chunks' % [$._config.secrets.aws.access_key, $._config.secrets.aws.secret_key],
      },
      boltdb_shipper: {
        active_index_directory: '/data/loki/index',
        cache_location: '/data/loki/boltdb-cache',
        shared_store: 'aws',
      },
    },
  },
},

Local Development

Tanka will now always expect a secrets.jsonnet file at the top level of a Tanka environment, so when testing local changes (e.g. comparing diffs) the secrets.jsonnet file needs to exist. This is easily achieved with the following command: vault kv get -field=data -format=JSON kv/argo-cd > secrets.jsonnet. You just need to set the variables VAULT_ADDR and VAULT_TOKEN to access the secrets in Vault.
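
Putting it together, a minimal local workflow sketch; the Vault address is a placeholder and the environment path assumes the default environment used above:

export VAULT_ADDR=https://vault.example.com  # placeholder: your Vault address
export VAULT_TOKEN=...                       # a token that can read kv/argo-cd
vault kv get -field=data -format=JSON kv/argo-cd > environments/default/secrets.jsonnet
tk diff environments/default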

Summary

The first part covered how to use ArgoCD with Jsonnet and Tanka; this blog post covered how to manage secrets in that setup. Together they make the Tanka/Jsonnet workflow work end to end, with both CI/CD and secret management, and we can inject any secret values directly into Kubernetes resources. This setup is complex and has many moving pieces, but it is my preferred way of defining infrastructure as code, especially when using Kubernetes.

