Deploying a Django Channels ASGI Server to Kubernetes


Django Channels is great for adding support for various network protocols such as WebSockets, chat protocols, IoT protocols, and more. It's a great way to add real-time functionality to your Django application. Using Channels in production, however, requires you to deploy an ASGI server, which is a bit more complex, and there are multiple flavors of ASGI servers to choose from. In this blog post I'll show you how to deploy Django Channels with Daphne and Nginx in a Kubernetes environment.

The go-to ASGI server for Django Channels is Daphne; it's what the channels package documentation recommends. Daphne is an HTTP, HTTP2, and WebSocket protocol server for ASGI and ASGI-HTTP, developed as part of the Django Channels project. Daphne handles both regular HTTP and WebSocket traffic, so you could deploy it as a replacement for your traditional Django HTTP server (e.g. Gunicorn).

However, that is not what I prefer. I prefer to have a dedicated HTTP server for handling HTTP traffic and a dedicated ASGI server for handling ASGI traffic. I found this discussion that I agreed with, and the conclusion I drew was that it's better to run a dedicated ASGI server alongside a dedicated HTTP server. We know how Gunicorn and Django work together, so there is no reason to change that. Instead, deploy a separate ASGI server and use Nginx to split traffic between the ASGI and WSGI servers based on request paths.

Deploying two different servers and splitting traffic between them might be a trivial task for senior engineers, but for junior engineers and beginners it can be a bit more involved. Let's take a look at the resources needed for this.

Adding ASGI Support to the Django Application

Let's first add ASGI support to our Django application. Install channels and daphne.

pip install 'channels[daphne]'

Add the following to your Django settings file:

# settings.py

INSTALLED_APPS = [
    "daphne",  # needs to be first so runserver uses Daphne's ASGI server
    # ...
]

ASGI_APPLICATION = "config.asgi.application"
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels.layers.InMemoryChannelLayer",

        # If you have Redis deployed:
        # "BACKEND": "channels_redis.core.RedisChannelLayer",
        # "CONFIG": {
        #     "hosts": [env("REDIS_URL")],
        # },
    },
}
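
If you opt for the Redis channel layer, you'll also need the channels_redis package; the in-memory layer above only works within a single process:

pip install channels_redis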

Create a file called asgi.py; in my case the path is <root>/config/asgi.py:

import os

from channels.auth import AuthMiddlewareStack
from channels.routing import ProtocolTypeRouter, URLRouter
from channels.security.websocket import AllowedHostsOriginValidator
from django.core.asgi import get_asgi_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings.production")
# Initialize Django ASGI application early to ensure the AppRegistry
# is populated before importing code that may import ORM models.
django_asgi_app = get_asgi_application()

application = ProtocolTypeRouter(
    {
        "http": django_asgi_app,
        "websocket": AllowedHostsOriginValidator(
            AuthMiddlewareStack(URLRouter([])) # We'll add the routing later
        ),
    }
)
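
Note that AllowedHostsOriginValidator checks the WebSocket Origin header against Django's ALLOWED_HOSTS, so make sure the domains you'll connect from are listed there. The hosts below are just examples:

# settings.py
# WebSocket origin validation uses the same list as regular HTTP requests.
ALLOWED_HOSTS = ["django.wtf", "localhost"]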

Next, we'll need to create a simple WebSocket consumer that accepts the connection if the user is authenticated and echoes back any message it receives. Create a file called consumers.py:

from channels.generic.websocket import WebsocketConsumer
from django.conf import settings


class ReturnConsumer(WebsocketConsumer):

    def connect(self):
        # Simple bypass for testing
        if settings.BYPASS_WEBSOCKET_AUTH:
            self.accept()
            return

        if not "user" in self.scope:
            self.close(4401)
            return

        self.user = self.scope["user"]

        if not self.user.is_authenticated:
            self.close(4401)
            return

        self.accept()

    def receive(self, text_data=None, bytes_data=None):
        self.send(text_data=text_data)
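
BYPASS_WEBSOCKET_AUTH is a custom setting, so it needs to be defined in settings.py. A minimal sketch, assuming django-environ (the same env used for REDIS_URL above), defaulting to False so authentication is enforced unless explicitly disabled:

# settings.py
# Custom flag read by ReturnConsumer.connect(); keep it off outside of testing.
BYPASS_WEBSOCKET_AUTH = env.bool("BYPASS_WEBSOCKET_AUTH", default=False)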

Lastly, we need to add a routing file. Create a file called urls.py in the app that holds the consumer (in my case example/urls.py):

from django.urls import re_path

from . import consumers


websocket_urlpatterns = [
    re_path(
        r"ws/return/(?P<return_session>\w+)/$", consumers.ReturnConsumer.as_asgi()
    ),
]
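
The route captures a return_session kwarg that the consumer above doesn't use. If you need it, Channels exposes captured URL arguments through the consumer's scope:

# Inside a consumer method, e.g. connect():
session_id = self.scope["url_route"]["kwargs"]["return_session"]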

Now we'll adjust our asgi.py file to include the routing:

...
# pylint: disable=unused-import,wrong-import-position
from example.urls import websocket_urlpatterns

application = ProtocolTypeRouter(
    {
        "http": django_asgi_app,
        "websocket": AllowedHostsOriginValidator(
            AuthMiddlewareStack(URLRouter(websocket_urlpatterns))
        ),
    }
)

Now a couple of things have happened here: we have new URL routes for our WebSocket consumers, and we have added a new consumer called ReturnConsumer. Adding daphne to INSTALLED_APPS replaces the default runserver with Daphne's ASGI server, which serves both regular HTTP and WebSocket traffic. This works great for development, but for production we'll want to deploy a dedicated ASGI server. Furthermore, in development you might want to run a separate ASGI server and a separate HTTP server to mimic the production environment. There are other reasons for this as well; for example, django-extensions' runserver_plus does not work with Daphne. To do so, we'll replace daphne with channels in INSTALLED_APPS:

INSTALLED_APPS = [
    "channels",
    # ...
]

Now runserver and runserver_plus will work as usual, but locally we'll need to run a separate ASGI server. You can do so with the following command:

daphne config.asgi:application -p 8001

Notice that we run it on port 8001, as it would otherwise collide with port 8000, which the default runserver uses. Now we have a separate ASGI server running on port 8001 and a separate HTTP server running on port 8000. We can test our WebSocket consumer by connecting to ws://localhost:8001/ws/return/123/ and sending a message. The message should be returned by the consumer.
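
A quick way to do that from the command line is with a small script. This is just a sketch, assuming the websockets package is installed (pip install websockets) and BYPASS_WEBSOCKET_AUTH is enabled so the unauthenticated connection is accepted:

# ws_check.py
import asyncio

import websockets


async def main():
    # Connect to the local Daphne instance and echo a message.
    async with websockets.connect("ws://localhost:8001/ws/return/123/") as ws:
        await ws.send("hello")
        print(await ws.recv())  # prints "hello"


asyncio.run(main())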

Deploying the ASGI Server

Note: this guide showcases a deployment to Kubernetes, but the same principles apply to other container orchestration systems. I'll show how I deploy Django.wtf, a Django application I maintain, to Kubernetes.

From here on I'll assume there is already a WSGI server deployed, for example with Gunicorn. Now we'll need to deploy our ASGI server using Daphne. First we'll create a Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/instance: django-wtf-asgi
    app.kubernetes.io/name: django-wtf-asgi
  name: django-wtf-asgi
  namespace: django-wtf
spec:
  selector:
    matchLabels:
      app.kubernetes.io/instance: django-wtf-asgi
      app.kubernetes.io/name: django-wtf-asgi
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: django-wtf-asgi
        app.kubernetes.io/name: django-wtf-asgi
    spec:
      containers:
      - command:
        - daphne
        - config.asgi:application
        - -b
        - 0.0.0.0
        - -p
        - "80"
        image: europe-west1-docker.pkg.dev/honeylogic/default/django-wtf:3fa39fadafdee238963ff733da4954aa181f1f47
        name: django-wtf-asgi
        ports:
        - containerPort: 80
          name: asgi
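
One thing the manifest above doesn't include is a readiness probe, which ensures Kubernetes only routes traffic to pods where Daphne is actually listening. A minimal sketch of what you could add under the container spec (my addition, not part of the original manifest):

        # Mark the pod ready once Daphne accepts TCP connections on port 80.
        readinessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10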

Note the command the container runs in the Deployment: it starts Daphne with the ASGI config, bound to 0.0.0.0 on port 80. We'll also need to create a Service for this Deployment:

apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/instance: django-wtf-asgi
    app.kubernetes.io/name: django-wtf-asgi
  name: django-wtf-asgi
  namespace: django-wtf
spec:
  ports:
  - name: django-wtf-asgi-asgi
    port: 80
    targetPort: 80
  selector:
    app.kubernetes.io/instance: django-wtf-asgi
    app.kubernetes.io/name: django-wtf-asgi
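
Once both the Deployment and the Service are applied, you can verify that the Service has picked up the Daphne pod by checking its endpoints:

kubectl -n django-wtf get endpoints django-wtf-asgi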

The last thing we need to add is the Ingress resources, which split traffic between the ASGI server and the WSGI server. I use ingress-nginx, but you can use any Ingress controller you like. We'll have two Ingress resources. The first one routes all traffic to the default backend, which is our WSGI server:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: django-wtf
  namespace: django-wtf
spec:
  ingressClassName: default
  rules:
  - host: django.wtf
    http:
      paths:
      - backend:
          service:
            name: django-wtf
            port:
              number: 80
        path: /
        pathType: ImplementationSpecific

The second one routes all traffic under /ws to our ASGI server. The important bit is the pathType, which is set to Prefix. The ingress controller matches requests against the declared paths and routes them to the corresponding backend; when multiple rules match, the more specific path wins, so /ws takes precedence over /.

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: django-wtf-asgi
  namespace: django-wtf
spec:
  ingressClassName: default
  rules:
  - host: django.wtf
    http:
      paths:
      - backend:
          service:
            name: django-wtf-asgi
            port:
              number: 80
        path: /ws
        pathType: Prefix
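
One more thing to keep in mind with long-lived WebSocket connections through ingress-nginx: the default proxy read/send timeout is 60 seconds, after which idle connections are dropped. A sketch of annotations you may want to add under the ASGI Ingress metadata to extend it (my addition, adjust the values to your needs):

  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"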

Summary

In this blog post we've added ASGI support to our Django application and deployed a dedicated ASGI server using Daphne. We've also added an Ingress resource to split traffic between the ASGI server and the WSGI server. All of this is showcased using Kubernetes resources, as that's how I deploy my Django applications. This is a great way to add real-time functionality to your Django application and deploy it in a Kubernetes environment. I hope you found this blog post helpful and that you now have a better understanding of how to deploy Django channels with Daphne and Nginx in a Kubernetes environment.

