Blog Posts


GKE on a Budget: Disabling Expensive Defaults for Leaner Clusters

A while back, I wrote a blog post on creating a low-cost managed Kubernetes cluster. The solution centers around Google Kubernetes Engine's (GKE) free zonal cluster and preemptible node pools, which allow for a very low-cost Kubernetes cluster that is useful for learning purposes or small workloads. I still use the same setup today; however, over time the GKE cluster has become bloated. Google has enabled logging, monitoring, and other features by default, which is great for production workloads, but if you are looking to cut costs, many of these features don't make sense.

Kubernetes Events Monitoring with Loki, Alloy, and Grafana

Kubernetes events offer valuable insights into the activities within your cluster, providing a comprehensive view of each resource's status. While they're useful for debugging individual resources, they suffer from a lack of aggregation: events are garbage collected after a short retention period, so they must be viewed promptly; they are difficult to filter and search; and they are not easily accessible to other systems. This blog post explores configuring Loki with Alloy to efficiently scrape Kubernetes events and visualize them in Grafana.
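To give a flavour of the setup, here is a minimal sketch of an Alloy pipeline that watches the Kubernetes API for events and forwards them to Loki, embedded in the grafana/alloy Helm chart's values. The Loki URL and component names here are illustrative assumptions, not taken from the post.

```yaml
# values.yaml for the grafana/alloy Helm chart (a minimal sketch; verify
# the value keys against your chart version).
alloy:
  configMap:
    content: |
      // Watch the Kubernetes API for events and forward them as log lines.
      loki.source.kubernetes_events "events" {
        log_format = "json"
        forward_to = [loki.write.default.receiver]
      }

      // Push the collected events to Loki.
      loki.write "default" {
        endpoint {
          url = "http://loki-gateway.loki.svc/loki/api/v1/push"
        }
      }
```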

Configuring Kube-prometheus-stack Dashboards and Alerts for K3s Compatibility

The kube-prometheus-stack Helm chart, which deploys the kubernetes-mixin, is designed for standard Kubernetes setups and is often pre-configured for specific cloud environments. However, these configurations are not directly compatible with k3s, a lightweight Kubernetes distribution. Since k3s lacks many of the default cloud integrations, issues arise such as missing metrics, broken graphs, and unavailable endpoints. This blog post will guide you through adapting the kube-prometheus-stack Helm chart and the kubernetes-mixin to work seamlessly in k3s environments, ensuring functional dashboards and alerts tailored to k3s.
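As a taste of the approach: k3s runs the entire control plane in a single process, so the chart's per-component scrape targets have nothing to hit. One common adaptation is to disable them in the chart's values, as sketched below; the post's full configuration goes further and rewires the mixin itself.

```yaml
# kube-prometheus-stack values.yaml (a sketch of the general idea; the
# exact configuration depends on your k3s setup).
kubeControllerManager:
  enabled: false
kubeScheduler:
  enabled: false
kubeProxy:
  enabled: false
kubeEtcd:
  enabled: false   # k3s uses SQLite or embedded etcd; no separate etcd endpoint
```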

Comprehensive Kubernetes Autoscaling Monitoring with Prometheus and Grafana

The kubernetes-mixin is a popular resource providing excellent dashboards and alerts for monitoring Kubernetes clusters. However, it lacks comprehensive support for autoscaling components such as Pod Disruption Budgets (PDBs), Horizontal and Vertical Pod Autoscalers (HPA, VPA), Karpenter, and the Cluster Autoscaler. These essential components are commonly deployed in Kubernetes environments but do not have standardized, open-source monitoring solutions. This blog post aims to solve that by introducing the kubernetes-autoscaling-mixin: a set of Prometheus rules and Grafana dashboards for Kubernetes autoscaling.
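For a sense of what such a mixin contains, here is an illustrative PrometheusRule built on kube-state-metrics' PDB metrics. It is a hand-written example of the kind of rule involved, not copied from the kubernetes-autoscaling-mixin itself.

```yaml
# Illustrative only: alert when a PDB permits no voluntary disruptions.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: pdb-alerts
spec:
  groups:
    - name: poddisruptionbudgets
      rules:
        - alert: PodDisruptionBudgetAtZero
          expr: kube_poddisruptionbudget_status_pod_disruptions_allowed == 0
          for: 15m
          labels:
            severity: warning
          annotations:
            summary: >-
              PDB {{ $labels.namespace }}/{{ $labels.poddisruptionbudget }}
              allows no voluntary disruptions.
```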

Configuring VPA to Use Historical Metrics for Recommendations and Expose Them in Kube-state-metrics

The Vertical Pod Autoscaler (VPA) can both manage your pods' resource requests and recommend what the limits and requests for a pod should be. Recently, the kube-state-metrics project removed built-in support for VPA recommendation metrics, so the VPA now requires additional configuration for those metrics to be available. This blog post will cover how to configure the VPA to expose the recommendation metrics and how to visualize them in Grafana.
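The general recipe is kube-state-metrics' CustomResourceState feature: point it at the VPA objects and map the recommendation fields to gauges. Below is a minimal sketch as Helm values for the kube-state-metrics chart; the key names reflect my reading of the chart and should be verified against your chart version. Memory is exposed the same way with valueFrom [target, memory].

```yaml
# values.yaml for the kube-state-metrics Helm chart (a sketch, not the
# post's exact configuration).
rbac:
  extraRules:
    # Allow kube-state-metrics to watch VPA objects.
    - apiGroups: ["autoscaling.k8s.io"]
      resources: ["verticalpodautoscalers"]
      verbs: ["list", "watch"]
customResourceState:
  enabled: true
  config:
    kind: CustomResourceStateMetrics
    spec:
      resources:
        - groupVersionKind:
            group: autoscaling.k8s.io
            version: v1
            kind: VerticalPodAutoscaler
          labelsFromPath:
            namespace: [metadata, namespace]
            verticalpodautoscaler: [metadata, name]
          metrics:
            - name: vpa_containerrecommendations_target_cpu
              help: VPA recommended CPU target per container.
              each:
                type: Gauge
                gauge:
                  # One series per container recommendation in the VPA status.
                  path: [status, recommendation, containerRecommendations]
                  labelsFromPath:
                    container: [containerName]
                  valueFrom: [target, cpu]
```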


IRSA and Workload Identity with Terraform

When pods require permissions to access cloud services in Kubernetes, the go-to practice is to use service accounts. The clouds offering managed Kubernetes solutions have different implementations of the same concept: EKS has IRSA (IAM Roles for Service Accounts) and GKE has Workload Identity. The service accounts that your containers use are granted the permissions required to impersonate cloud IAM roles (AWS) or service accounts (GCP) so that they can access cloud resources. There are alternatives, such as AWS instance roles, but they are not fine-grained enough for containerized workloads: every container has access to whatever resources the node is allowed to access. It might seem complex coming from a non-Kubernetes background, but preexisting Terraform modules simplify the creation of the resources required to allow Kubernetes service accounts to impersonate and access cloud resources.
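On the Kubernetes side, both schemes boil down to an annotation on the service account naming the cloud identity to impersonate; the cloud-side role or service account is what the Terraform modules create. A minimal sketch, with placeholder account, role, and project names:

```yaml
# EKS / IRSA: the annotation points at the IAM role to assume.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/my-app
---
# GKE / Workload Identity: the annotation points at the GCP service account.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  namespace: default
  annotations:
    iam.gke.io/gcp-service-account: my-app@my-project.iam.gserviceaccount.com
```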


Private EKS API Endpoint behind OpenVPN

AWS offers a managed Kubernetes solution called Elastic Kubernetes Service (EKS). When an EKS cluster is spun up, the Kubernetes API is publicly accessible by default. However, your company might not approve of this for security reasons and may want to limit Kubernetes API access to private networks only. In that case, you can bring up a service such as OpenVPN and route private traffic through it, which allows you to access the Kubernetes API through a private endpoint. In this blog post we'll use Terraform to provision the infrastructure required for a private EKS cluster, with OpenVPN Access Server as our VPN solution.

Migrating Kubernetes PersistentVolumes across Regions and AZs on AWS

Persistent volumes in AWS are tied to a single Availability Zone (AZ), so if you create a cluster in an AZ that the volume does not reside in, you will not be able to use it; you will need to migrate the volume to one of the zones your cluster runs in. Similarly, if a Kubernetes cluster moves across AWS regions, you will need to create a snapshot and copy it to the target region before creating a volume from it.
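The final step looks roughly like this: after restoring an EBS volume from the copied snapshot, you statically bind it to a PersistentVolume pinned to the right AZ. A sketch with placeholder IDs and zone:

```yaml
# A PersistentVolume statically bound to the EBS volume created from the
# copied snapshot (volume ID, size, and zone are placeholders).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: migrated-data
spec:
  capacity:
    storage: 20Gi
  accessModes: [ReadWriteOnce]
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: ebs.csi.aws.com
    volumeHandle: vol-0123456789abcdef0   # volume restored from the snapshot
    fsType: ext4
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values: [eu-west-1a]        # an AZ your cluster's nodes run in
```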

Creating a Low Cost Managed Kubernetes Cluster for Personal Development using Terraform

Kubernetes is an open-source system, popular amongst developers, that automatically deploys, scales, and manages containerized applications. Yet for those working outside of the traditional startup or corporate setting, Kubernetes can be CPU- and memory-intensive, disincentivizing developers from using a strategically beneficial tool. We'll highlight some of the constraints posed by Kubernetes and offer a low-cost, practical solution: Google Kubernetes Engine deployed and managed with Terraform.

Monitoring Kubernetes InitContainers with Prometheus

Kubernetes InitContainers are a neat way to run arbitrary code before your container starts, ensuring that certain preconditions are met before your app is up and running. For example, they allow you to (as sketched below):

  • run database migrations with Django or Rails before your app starts
  • ensure a microservice or API you depend on is running
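A minimal sketch of the pattern, with placeholder image and command: an initContainer runs Django migrations to completion before the app container starts.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  initContainers:
    # Runs to completion before the app container is started.
    - name: migrate
      image: registry.example.com/web:1.0.0
      command: ["python", "manage.py", "migrate"]
  containers:
    - name: app
      image: registry.example.com/web:1.0.0
      ports:
        - containerPort: 8000
```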