GitOps with ArgoCD and Tanka


GitOps is becoming the standard for continuous delivery: define your desired state in Git, and the state is updated automatically when pull requests are merged. Within the Kubernetes ecosystem two tools have become very popular for this - FluxCD and ArgoCD. They differ in their own ways, but both share the same goal - managing your continuous delivery life cycle.

Tanka is a configuration management tool for Kubernetes using the Jsonnet language. Jsonnet is an extension of JSON which provides an easy way to manipulate JSON data. It works very well for configuring Kubernetes thanks to the k8s-alpha Jsonnet library, which is generated for each Kubernetes version and removes much of the boilerplate that comes with configuring Kubernetes resources. My thoughts on the benefits of Jsonnet and Tanka are:

  • A good but getting even better OSS community around Jsonnet with projects/libraries as k8s-alpha, kube-prometheus, Grafana Jsonnet library, kube-thanos amongst others.
  • Easy manipulation of JSON objects with Jsonnet, check out an introduction at Tanka.
  • Monitoring mixins: reuse OSS Grafana dashboards and Prometheus rules. The maintainers of a project set guidelines on what to monitor and alert on, and you can easily extend and adjust them to your needs using Jsonnet. Check out the monitoring-mixins website for examples of mixins.

  • An awesome CLI which allows easy checking of diffs using tk diff. It also has resource targeting which functions similarly to terraform -target, example tk diff -t Deployment/example.

  • Tanka provides documentation and guidelines on how to work with Jsonnet, how to use the Jsonnet-bundler and how to set up your folder structure, amongst other things. One downside I find with Jsonnet is that the language is hard to get started with and lacks documentation; Tanka solves some of these problems.
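To illustrate the kind of JSON manipulation mentioned above, here is a minimal, hypothetical example of extending a base object with an override - the same pattern mixins rely on (the field names are made up for illustration):

```jsonnet
// A base config as plain JSON-like data.
local base = {
  replicas: 1,
  labels: { app: 'example' },
};

// Extend the base: `+:` merges the nested object instead of replacing it.
base {
  replicas: 3,
  labels+: { env: 'production' },
}
// Evaluates to: { labels: { app: "example", env: "production" }, replicas: 3 }
```

The `+:` operator is what makes extending upstream mixins painless: you only declare the fields you want to change.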

ArgoCD has a plugin feature which, in theory, provides a way to use any configuration management tool with ArgoCD; FluxCD does not offer the same flexibility. ArgoCD allows me to:

  • Not have to generate YAML files and push them to Git just so the CD tool can sync the YAML manifests. Just as with e.g. JS/SCSS, we want to avoid pushing generated files to Git.
  • Create pull requests where the only diff is the Jsonnet code that has been added, not hundreds of lines of YAML. This makes pull requests much more readable.
  • Apply my Tanka workflow automatically and mirror what I do locally. The Kubernetes resources are automatically applied to the Kubernetes cluster.

If you're not familiar with ArgoCD, Tanka or Jsonnet feel free to check them out before proceeding.

Update 13/02/2023

This blog post has been adjusted to ArgoCD's v2.6 deprecation of plugin usage within the ArgoCD repo server container, which was done for security reasons. Instead, plugins should run in a sidecar that has all the tools needed to generate manifests. The repository has been updated accordingly. If you would like to see the previous configuration that does not use the sidecar plugin setup, check the GitHub repository commit history.

Note: A full demo of the code can be found at adinhodovic/tanka-argocd-demo.

Initializing Tanka

Create a directory for your Tanka configs and run tk init. Tanka will create the following directory structure:

.
├── environments
│  └── default
│     ├── main.jsonnet
│     └── spec.json
├── jsonnetfile.json
├── jsonnetfile.lock.json
├── lib
│  └── k.libsonnet
└── vendor
   ├── github.com
   │  ├── grafana
   │  │  └── jsonnet-libs
   │  │     └── ksonnet-util
   │  │        └── kausal.libsonnet
   │  └── ksonnet
   │     └── ksonnet-lib
   │        └── ksonnet.beta.4
   │           ├── k.libsonnet
   │           └── k8s.libsonnet
   ├── ksonnet-util
   └── ksonnet.beta.4

Out of the box you'll have an environment called default, which is sufficient for this demo. A couple of Jsonnet libraries for creating and managing Kubernetes resources will also be added. The vendor directory holds all the vendored dependencies, and the lib directory holds a k.libsonnet file which imports the Kubernetes Jsonnet library and any extensions you want. Now that Tanka is initialized, we'll create our first ArgoCD project so that any future resources are automatically added using GitOps.
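For reference, the generated environments/default/spec.json looks roughly like the following; the apiServer value depends on your cluster, and is assumed here to be a local Minikube:

```json
{
  "apiVersion": "tanka.dev/v1alpha1",
  "kind": "Environment",
  "metadata": {
    "name": "environments/default"
  },
  "spec": {
    "apiServer": "https://127.0.0.1:6443",
    "namespace": "default"
  }
}
```

Tanka uses spec.apiServer as a safety check so you never apply an environment to the wrong cluster, and spec.namespace as the default namespace for resources that don't set one.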

Deploying ArgoCD with Tanka and Jsonnet-bundler support

We'll deploy ArgoCD using Helm: initially with the Helm CLI, then we'll let ArgoCD's Helm integration take over. First, install ArgoCD in the default namespace with the command:

helm repo add argo https://argoproj.github.io/argo-helm
helm upgrade --install argo-cd argo/argo-cd --set 'configs.params.server\.insecure=true' --set 'configs.params.server\.disable\.auth=true'

Preferably this installation happens in a local Minikube cluster so that it's easy to test. We disable auth and TLS since this is just a demo - please don't use these values in production.

Let's create our Helm release for ArgoCD in a new file called argo-cd.jsonnet. We'll start off with the chart specification and metadata:

{
  // Plugin specific configs
  local tankaVersion = 'v0.20.0',
  local jsonnetBundlerVersion = 'v0.5.1',
  local pluginDir = '/home/argocd/cmp-server/plugins',

  argoCdChart: {
    helmApplication: {
      apiVersion: 'argoproj.io/v1alpha1',
      kind: 'Application',
      metadata: {
        name: 'argo-cd',
        namespace: 'default',
      },
      spec: {
        project: 'default',
        destination: {
          namespace: 'default',
          server: 'https://kubernetes.default.svc',
        },
        source: {
          chart: 'argo-cd',
          repoURL: 'https://argoproj.github.io/argo-helm',
          targetRevision: '5.20.0',
          helm: {
            releaseName: 'argo-cd',
            values: |||
        ...
        ...

The above snippet is just a regular helm install argo-cd argo/argo-cd defined declaratively using ArgoCD's Application CRD. We'll extend it with values for the Helm chart. ArgoCD has multiple services, and we'll need to tweak two of them -- the API server and the repository server. The API server handles the application life cycle, and the repository server manages the Git repository - generating and applying manifests. We'll add the following values to our API server specification:

  • We'll disable auth for this demo.
  • We'll disable TLS for this demo.
            values: |||
              %s
            ||| % std.manifestYamlDoc(
              {
                configs: {
                  params: {
                    'server.insecure': true,
                    'server.disable.auth': true,
                  },
                },

The ArgoCD repository server requires a service account with RBAC privileges to manage your Kubernetes resources. We'll use the prepackaged admin RBAC configuration defined in the Helm chart. Note: the setting below gives ArgoCD full API access; add security constraints according to your needs!

                repoServer: {
                  clusterAdminAccess: {
                    enabled: true,
                  },

We'll also adjust the repoServer by adding an extra container which installs the Tanka binary and the Jsonnet-bundler, used to fetch all dependencies before manifest generation. The container acts as a sidecar that generates all manifests for the repo server. We use curl to install all the binaries and finally run the argocd-cmp-server command that communicates with the ArgoCD repo server. We'll also mount a ConfigMap containing plugin.yaml, defined below, which holds the YAML config for the Tanka ArgoCD plugin.

                  extraContainers: [
                    {
                      name: 'cmp',
                      image: 'curlimages/curl',

                      local jsonnetBundlerCurlCommand = 'curl -Lo %s/jb https://github.com/jsonnet-bundler/jsonnet-bundler/releases/download/%s/jb-linux-amd64' % [pluginDir, jsonnetBundlerVersion],
                      local tankaCurlCommand = 'curl -Lo %s/tk https://github.com/grafana/tanka/releases/download/%s/tk-linux-amd64' % [pluginDir, tankaVersion],
                      local chmodCommands = 'chmod +x %s/jb && chmod +x %s/tk' % [pluginDir, pluginDir],
                      command: [
                        'sh',
                        '-c',
                        '%s && %s && %s && /var/run/argocd/argocd-cmp-server' % [jsonnetBundlerCurlCommand, tankaCurlCommand, chmodCommands],
                      ],
                      securityContext: {
                        runAsNonRoot: true,
                        runAsUser: 999,
                      },
                      volumeMounts: [
                        {
                          mountPath: '/var/run/argocd',
                          name: 'var-files',
                        },
                        {
                          mountPath: pluginDir,
                          name: 'plugins',
                        },
                        {
                          mountPath: '/home/argocd/cmp-server/config/plugin.yaml',
                          subPath: 'plugin.yaml',
                          name: 'cmp-plugin',
                        },
                      ],
                    },
                  ],
                  volumes: [
                    {
                      configMap: {
                        name: 'cmp-plugin',
                      },
                      name: 'cmp-plugin',
                    },
                    {
                      emptyDir: {},
                      name: 'cmp-tmp',
                    },
                  ],

ArgoCD should now have all the dependencies needed to run Tanka and the Jsonnet-bundler.

Lastly, we'll define the plugin specification that tells ArgoCD how to invoke Tanka. The plugin is stored in a ConfigMap and mounted by the sidecar, as seen above. The plugin specification uses a Kubernetes-native format, but it is not a custom resource definition. On initialization the plugin installs all dependencies with the Jsonnet-bundler, and to generate manifests it runs tk show. Since we want the plugin to be used for all ArgoCD applications (assuming the setup is centered around Tanka), we need it to discover every application; therefore, we set fileName to *. Here's an example:

  argoCdPlugin: {
    apiVersion: 'v1',
    kind: 'ConfigMap',
    metadata: {
      name: 'cmp-plugin',
      namespace: 'default',
    },
    data: {
      'plugin.yaml': |||
        %s
      ||| % std.manifestYamlDoc({
        apiVersion: 'argoproj.io/v1alpha1',
        kind: 'ConfigManagementPlugin',
        metadata: {
          name: 'tanka',
          namespace: 'default',
        },
        spec: {
          version: tankaVersion,
          init: {
            command: [
              'sh',
              '-c',
              '%s/jb install' % pluginDir,
            ],
          },
          generate: {
            command: [
              'sh',
              '-c',
              '%s/tk show environments/${ARGOCD_ENV_TK_ENV} --dangerous-allow-redirect' % pluginDir,
            ],
          },
          discover: {
            fileName: '*',
          },
        },
      }),
    },
  },
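For clarity, the ConfigMap above renders a plugin.yaml roughly like this - a sketch of the std.manifestYamlDoc output, assuming the tankaVersion and pluginDir values from the top of the file:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ConfigManagementPlugin
metadata:
  name: tanka
  namespace: default
spec:
  version: v0.20.0
  init:
    command:
      - sh
      - -c
      - /home/argocd/cmp-server/plugins/jb install
  generate:
    command:
      - sh
      - -c
      - /home/argocd/cmp-server/plugins/tk show environments/${ARGOCD_ENV_TK_ENV} --dangerous-allow-redirect
  discover:
    fileName: "*"
```

The sidecar reads this file from /home/argocd/cmp-server/config/plugin.yaml, which is why the ConfigMap is mounted at exactly that path.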

Creating your first ArgoCD Application and AppProject

An AppProject is a logical grouping of ArgoCD Applications which controls where and which Kubernetes resources can be deployed. An Application is a Kubernetes resource that deploys manifests from a Git source to a destination, which is a Kubernetes cluster.

The ArgoCD documentation explains Applications here and AppProjects here.

We now need to add our first AppProject. We'll keep the naming consistent with the Tanka environment, which is called default. The snippet below is very lenient, allowing any resources, destinations and sources; tweak this to whatever security demands you want to impose.

  defaultProject: {
    apiVersion: 'argoproj.io/v1alpha1',
    kind: 'AppProject',
    metadata: {
      name: 'default',
      namespace: 'default',
      finalizers: [
        'resources-finalizer.argocd.argoproj.io',
      ],
    },
    spec: {
      description: 'MyOrg Default AppProject',
      sourceRepos: [
        '*',
      ],
      clusterResourceWhitelist: [
        {
          group: '*',
          kind: '*',
        },
      ],
      destinations: [
        {
          namespace: '*',
          server: '*',
        },
      ],
    },
  },

Lastly, we need our actual ArgoCD Application. The Application will refer to the Git repository for this blog post, and it will also refer to the AppProject above. The Tanka plugin will by default be used for any application; we only need to pass the environment default via the variable TK_ENV. The changes will be deployed in the cluster and we will allow ArgoCD to prune resources, meaning that if you delete something in Git, ArgoCD will delete it in your cluster. We'll also enable self-healing: if any Kubernetes resource is changed and deviates from our definition in Git, it will automatically be restored to the resource definition in Git.

  defaultApplication: {
    apiVersion: 'argoproj.io/v1alpha1',
    kind: 'Application',
    metadata: {
      name: 'default',
    },
    spec: {
      project: 'default',
      source: {
        repoURL: 'https://github.com/adinhodovic/tanka-argocd-demo',
        path: 'tanka',
        targetRevision: 'HEAD',
        plugin: {
          env: [
            {
              name: 'TK_ENV', // prefixed in the Plugin with `ARGOCD_ENV_`
              value: 'default',
            },
          ],
        },
      },
      destination: {
        server: 'https://kubernetes.default.svc',
      },
      syncPolicy: {
        automated: {
          prune: true,
          selfHeal: true,
        },
      },
    },
  },

Let's now apply our changes and deploy ArgoCD by running tk apply .. We should now have ArgoCD deployed, as well as an AppProject and an Application. We can port-forward the ArgoCD server to ensure that it is functioning as it should. Use kubectl to port-forward the server service: kubectl port-forward -n default svc/argo-cd-argocd-server 8080:80. You should now be able to access the server UI at localhost:8080. On the root page of the ArgoCD server you should see the default Application we deployed, and you can view details of the application such as the Git repository and the environment variables.

You can now push your repository to Git and ArgoCD will sync it. Since we use a single Tanka environment and an ArgoCD Application, ArgoCD will manage itself using GitOps, so any changes to your ArgoCD Helm deployment will be automatically synced. However, if the sync fails you'll need to fix it manually so that ArgoCD works and can sync again. You can work around this by excluding ArgoCD from the Tanka environment or by using different environments for different applications.
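One sketch of that workaround, assuming you split environments per concern, is a layout like the following (directory names are hypothetical):

```
environments/
├── argo-cd/       # ArgoCD itself, synced by its own Application
│  ├── main.jsonnet
│  └── spec.json
└── default/       # everything else, fully automated
   ├── main.jsonnet
   └── spec.json
```

Each ArgoCD Application would then set TK_ENV to its own environment name, so a broken sync in one environment doesn't take down ArgoCD's ability to manage itself.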

Now that we have ArgoCD running, and it can apply manifests using Tanka we can start creating our first application using Jsonnet and Tanka.

Creating your first deployment using Tanka & ArgoCD

Our first application will be a simple echo container. We will be using the Kubernetes Jsonnet library, which provides functions that remove excessive YAML boilerplate. We'll start off by creating our container in a new file called echo.jsonnet:

local container = $.core.v1.container,
local containerPort = $.core.v1.containerPort,
echo_container::
  container.new('echo', 'k8s.gcr.io/echoserver:1.4') +
  container.withPorts(containerPort.new('http', 8080)),

The container uses the echoserver image and exposes the port 8080. Now we'll add a deployment for the container and a service for the deployment. The service will be created using serviceFor that comes from Grafana's ksonnet-util library. To use the ksonnet-util library you can install it with the Jsonnet-bundler using the following command jb install github.com/grafana/jsonnet-libs/ksonnet-util. You can then import it in your main Jsonnet file.

  local deployment = $.apps.v1.deployment,
  echo_deployment:
    deployment.new('echo', 1, [self.echo_container]),

  echo_service:
    $.util.serviceFor(self.echo_deployment),
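For intuition, serviceFor derives the Service from the deployment's labels and container ports; the result is roughly equivalent to this plain manifest (a sketch - the exact port names and labels depend on the library version):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: echo
  labels:
    name: echo
spec:
  ports:
    - name: echo-http    # derived from the container and port names
      port: 8080
      targetPort: 8080
  selector:
    name: echo
```

This is exactly the boilerplate the library saves you from writing by hand for every deployment.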

Now we will import our echo.jsonnet file in our main Jsonnet file.

(import 'ksonnet-util/kausal.libsonnet') +
(import 'argo-cd.jsonnet') +
(import 'echo.jsonnet')

We can compare diffs between the cluster and our added Jsonnet code. We'll use Tanka's diff command with the target flag to only show the diff for the deployment: tk diff . -t deployment/echo. Tanka will display the diff, and you can inspect it to ensure that you are happy with the changes. However, we do not want to apply the changes ourselves; we want to push them to Git and let ArgoCD take over from there. Push them and head over to localhost:8080 where the ArgoCD server is port-forwarded. ArgoCD polls the repository every 3 minutes by default, so it might take a minute or two for your changes to be synced. You can add webhooks or lower the sync interval to remove the delay. When ArgoCD has synced your changes, a DAG should be generated for the Application displaying the echo Service, Deployment, ReplicaSet and Pod.
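If you want a shorter sync interval, one option (assuming the argo-cd chart exposes the argocd-cm settings under configs.cm, as recent chart versions do) is to lower timeout.reconciliation in the Helm values we defined earlier:

```jsonnet
// Merged into the Helm values for the argo-cd chart:
// poll the repository every 60 seconds instead of the default.
{
  configs+: {
    cm+: {
      'timeout.reconciliation': '60s',
    },
  },
}
```

Keep in mind that aggressive polling adds load on both ArgoCD and your Git provider; webhooks are the cleaner solution for instant syncs.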


You've now applied the GitOps workflow using the continuous delivery tool ArgoCD, which synced our changes from Git. On top of that, we did it with Tanka and Jsonnet without generating any YAML templates.

A full demo of the code can be found at adinhodovic/tanka-argocd-demo.

Next up is secret injection when using Tanka and ArgoCD with Hashicorp's Vault.

