
Django Development and Production Logging


Django comes with many great built-in features, and logging is one of them: pre-configured logging setups that vary between development (debug mode) and production environments, easy integration to send you emails on errors, and out-of-the-box support for settings such as log levels and log formats. But how do you set up colorized logging, JSON logging, and a unified logging format between Celery and Django, for example? Examples were hard to find and the documentation was tricky to understand, so this blog post walks through an opinionated setup for Django development and production logging using Structlog.

There is a full configuration example at the bottom of this post.

Structlog

The logging setup uses the Structlog package, which also has a Django integration called Django-Structlog. Structlog provides easy setup for both JSON logs and colorized development console logs. Before getting started with Structlog, let's take a look at three concepts from the base Python logging package that we'll use in our configuration: logger naming, propagation and log levels.

Logger Naming

All configuration in this blog post instantiates loggers by passing the argument __name__, which holds the module's dotted path. For example, structlog.get_logger(__name__) inside the file myproject/package/views.py produces a logger named myproject.package.views, and the configuration defined for the loggers myproject, myproject.package and myproject.package.views will all apply to it when it is instantiated. As an example, take a look at the configuration below.

LOGGING = {
    "version": 1,
    "root": {
        "handlers": ["console"],
        "level": "WARNING",
    },
    "loggers": {
        "myproject": {
            # ... settings
        },
        "myproject.package": {
            # ... settings
        },
        "myproject.package.views": {
            # ... settings
        },
    },
}

All options in the three blocks of the loggers section are considered when the logger is instantiated, with child settings taking precedence over parent settings.
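This naming hierarchy comes straight from the standard library and can be verified directly (the myproject names here are illustrative):

```python
import logging

# Dotted logger names form a hierarchy that mirrors the module path.
parent = logging.getLogger("myproject.package")
child = logging.getLogger("myproject.package.views")

print(child.parent is parent)                 # → True
print(parent.parent is logging.getLogger())   # → True (the root logger)
```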

Propagation and Root logger

Propagation allows us to propagate logs from a child logger to a parent logger. As an example, take a look at the usage of a child logger with the default root logger.

LOGGING = {
    "version": 1,
    "root": {
        "handlers": ["console"],
        "level": "WARNING",
    },
    "loggers": {
        "django_structlog": {
            "level": DJANGO_LOG_LEVEL,
        },
        "my_project": {"level": DJANGO_LOG_LEVEL},
    },
}

The root logger catches any logs that are propagated upwards. We have defined a handler called console for it, with the log level WARNING. In this case, any logs emitted within the django_structlog and my_project modules will be propagated upwards to the root logger. No handlers need to be defined for the child loggers, since the root logger has one. Defining handlers for the my_project and django_structlog loggers would cause logs to be emitted multiple times: once by the handlers at the child level and once more by the handler at the root level. Therefore, a common practice is to only define handlers at the root level and let all child loggers propagate. This is also documented as a recommended approach in the Python documentation on logging:

Provided that their propagate setting is left set to True. A common scenario is to attach handlers only to the root logger, and to let propagation take care of the rest.
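The duplication problem is easy to demonstrate with the standard library alone; a minimal sketch, using only the stdlib:

```python
import io
import logging

stream = io.StringIO()
handler = logging.StreamHandler(stream)

# Handler on the root logger, as recommended.
root = logging.getLogger()
root.setLevel(logging.WARNING)
root.addHandler(handler)

# Attaching a handler to the child as well causes duplicates.
child = logging.getLogger("myproject.package")
child.addHandler(handler)

child.warning("disk almost full")

# The record is emitted once by the child's handler and once more
# after propagating up to the root logger's handler.
print(stream.getvalue().count("disk almost full"))  # → 2
```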

Log Level

Log levels are useful both to separate logs by severity and to drop logs that are noisy. Python logging provides a way to configure levels per child logger. Let's take a look at an example.

LOGGING = {
    "version": 1,
    "loggers": {
        "myproject.package": {
            "level": "WARNING",
        },
        "myproject.package.noisy": {"level": "CRITICAL"},
    },
}

The noisy module will only log records at the CRITICAL level, while all other modules within the package will log records at WARNING and above. The level can be viewed as inherited by child loggers when not defined at the child level; for example, the module myproject.package.normal will log records at the WARNING level. However, levels do not propagate the same way handlers do, e.g. logs won't be duplicated if a child and a parent define different levels. If a level is set on a child, that is the level used for that logger; if not, the lookup is delegated to its parents until a level is found. The Python documentation on setLevel says the following:

The term ‘delegation to the parent’ means that if a logger has a level of NOTSET, its chain of ancestor loggers is traversed until either an ancestor with a level other than NOTSET is found, or the root is reached.

Python has in-depth documentation on logging configuration.
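The delegation rule above can be checked with a few lines of stdlib code (the myproject names are illustrative):

```python
import logging

logging.getLogger("myproject.package").setLevel(logging.WARNING)
logging.getLogger("myproject.package.noisy").setLevel(logging.CRITICAL)

# "normal" sets no level of its own (NOTSET), so it delegates upward
# until it finds the WARNING level set on "myproject.package".
normal = logging.getLogger("myproject.package.normal")
noisy = logging.getLogger("myproject.package.noisy")

print(logging.getLevelName(normal.getEffectiveLevel()))  # → WARNING
print(logging.getLevelName(noisy.getEffectiveLevel()))   # → CRITICAL
```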

Now that we have the logging basics, we can continue with Structlog.

Installing Django-structlog

We'll install the Django-structlog package for Django using pip/poetry:

poetry add django-structlog

Add it to installed applications:

INSTALLED_APPS = [
    ...
    "django_structlog",
    ...
]

Lastly, we'll configure Structlog with our preferred built-in processors:

base_structlog_processors = [
    structlog.contextvars.merge_contextvars,
    structlog.stdlib.add_logger_name,
    structlog.stdlib.add_log_level,
    structlog.stdlib.filter_by_level,
    # Perform %-style formatting.
    structlog.stdlib.PositionalArgumentsFormatter(),
    # Add a timestamp in ISO 8601 format.
    structlog.processors.TimeStamper(fmt="iso"),
    structlog.processors.StackInfoRenderer(),
    # If some value is in bytes, decode it to a unicode str.
    structlog.processors.UnicodeDecoder(),
    # Add callsite parameters.
    structlog.processors.CallsiteParameterAdder(
        {
            structlog.processors.CallsiteParameter.FILENAME,
            structlog.processors.CallsiteParameter.FUNC_NAME,
            structlog.processors.CallsiteParameter.LINENO,
        }
    ),
]

base_structlog_formatter = [structlog.stdlib.ProcessorFormatter.wrap_for_formatter]

structlog.configure(
    processors=base_structlog_processors + base_structlog_formatter,  # type: ignore
    logger_factory=structlog.stdlib.LoggerFactory(),
    cache_logger_on_first_use=True,
)

We use the variables base_structlog_processors and base_structlog_formatter so the lists can be extended in the production environment, as showcased in the full configuration. We also define the formatter as a separate list, since the formatter wrapper must be the final processor in the chain.

Here's the functionality of some of the processors:

  • add_logger_name - Adds the logger's name, usually set during instantiation via the __name__ variable.
  • add_log_level - Adds the log level based on the method name used, e.g. logger.error produces the level error.
  • filter_by_level - Drops logs below the level configured for each logger.
  • CallsiteParameterAdder - Adds the filename, function name and line number to the logs.
  • TimeStamper - Adds a timestamp to the log; we use ISO 8601 by passing fmt="iso".
  • StackInfoRenderer - Adds stack information to the log.

We can now use Structlog:

import structlog

logger = structlog.get_logger(__name__)
logger.info("Hello World")

Using Django-structlog's Request Middleware

Django by default logs server requests using the django.request and django.server loggers. I found Django-structlog's request middleware to have a better log format and carry more information, such as attaching a request_id and user_id to the logs. You can also extend the request metadata using signals provided by Django-structlog. The middleware can be added with the snippet below.

MIDDLEWARE = [
    ...
    "django_structlog.middlewares.RequestMiddleware",
    ...
]

Which will produce logs for each request as in the image below.

[Image: django-request-logs]

Then I prefer to disable the runserver (development) and request (production) logs using a NullHandler; otherwise requests will be logged multiple times, with a different format for each.

    "handlers": {
        ...
        "null": {
            "class": "logging.NullHandler",
        },
    },
    "root": {
        "handlers": ["console"],
        "level": "WARNING",
    },
    "loggers": {
        ...
        # Use structlog middleware
        "django.server": {
            "handlers": ["null"],
            "propagate": False,
        },
        # Use structlog middleware
        "django.request": {
            "handlers": ["null"],
            "propagate": False,
        },
        # Use structlog middleware
        "django.channels.server": {
            "handlers": ["null"],
            "propagate": False,
        },
        # Use structlog middleware
        "werkzeug": {
            "handlers": ["null"],
            "propagate": False,
        },
    },

The above is optional; the Django-structlog middleware does not have to replace the built-in Django loggers or the Werkzeug runserver_plus logs.

Development Logging

Development logging is different from production logging: locally I find it easier to read logs in a console when they are non-JSON formatted, colorized and have nicer exception output, amongst other things. The Structlog package provides a console renderer with a colorization feature and also integrates with Rich and better-exceptions for nicer exception formatting.

Colorized Output

Colorized output is nice to have locally as it separates the logs by severity level. Error and warning logs are easier to spot; the image below shows an example of that.

[Image: django-logs-colorized]

Structlog provides a console formatter that has colorization, and you can add it using the following snippet.

LOGGING = {
    "formatters": {
        "colored_console": {
            "()": structlog.stdlib.ProcessorFormatter,
            "processor": structlog.dev.ConsoleRenderer(colors=True),
        },
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "formatter": "colored_console",
        },
    },
}

We use the structlog.dev.ConsoleRenderer processor with the parameter colors=True, and the console handler uses the colored_console formatter. Now, our root logger can use the console handler.

LOGGING = {
    "root": {
        "handlers": ["console"],
        "level": "WARNING",
    },
}

Beautifully Formatted Exceptions

Exceptions are useful but tricky to read, usually presented in a console as a wall of text. As mentioned previously, libraries such as Rich solve this by enhancing the output of exceptions. Structlog integrates with both Rich and Better Exceptions; you just need to install one of them.

poetry add rich --group dev

Now the exceptions locally will have a much nicer output.

[Image: django-beautiful-exceptions]

Database Logs

The django.db.backends logger handles database logs and by default logs all SQL statements at the DEBUG level; additionally, Django must run in DEBUG mode for the logs to reach the logging handler. The database logs can be useful, for example, when analyzing performance bottlenecks. You can use the following snippet to enable detailed database logging.

    "loggers": {
        # DB logs
        "django.db.backends": {
            "level": "DEBUG",
        },
    },

And logs for each database interaction will stream to the console as in the image below.

[Image: django-database-logs]

Note: The Django debug toolbar middleware "debug_toolbar.middleware.DebugToolbarMiddleware" breaks django.db.backends logging (I still see this happening even though the issue is marked as resolved). However, if you are using the debug toolbar you might not need database logging, since the toolbar presents SQL queries even better.

Production Logging

JSON Logs

In a production environment, where you usually have a logging system in place, it makes more sense to use a structured format such as JSON than colored console output. It should be easy to parse logs and extract any field you want; it also allows for aggregations and dashboards summarizing logs over time. Let's add the JSON formatter and handler.

LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        ...
        "json_formatter": {
            "()": structlog.stdlib.ProcessorFormatter,
            "processor": structlog.processors.JSONRenderer(),
        },
    },
    "handlers": {
        ...
        "json": {
            "class": "logging.StreamHandler",
            "formatter": "json_formatter",
        },
    },
}

Now we can configure the root logger to use the JSON handler.

LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "root": {
        "handlers": ["json"],
        "level": "WARNING",
    },
}

All logs streamed will have the JSON format as in the image below.

[Image: django-json-logs]

Tracing Requests

If you are using OpenTelemetry and want to correlate logs with traces, so that each log message carries both a trace_id and a span_id, you can use the following snippet to add trace information to the log event:

from opentelemetry import trace

def add_open_telemetry_spans(_, __, event_dict):
    span = trace.get_current_span()
    if not span.is_recording():
        return event_dict

    ctx = span.get_span_context()
    parent = getattr(span, "parent", None)

    event_dict["span_id"] = f"{ctx.span_id:x}"
    event_dict["trace_id"] = f"{ctx.trace_id:x}"
    if parent:
        event_dict["parent_span_id"] = f"{parent.span_id:x}"

    return event_dict

Now we'll add the processor.

base_structlog_processors = [
    structlog.contextvars.merge_contextvars,
    structlog.stdlib.add_logger_name,
    ...
    add_open_telemetry_spans,
    ...
]

Then in your logging system you can jump from logs to traces using the trace ID. There's an image below that showcases this using Grafana for visualization and Tempo as the tracing backend.

[Image: django-trace-tempo-link]

You can also find the trace just by searching for the trace_id. This is optional but can be very useful.

Celery Integration

Now that Django has its logging configured, we want to use the same logging setup between Celery and Django. We can do so by using the config_loggers signal and passing our logging config from the Django settings:

from logging.config import dictConfig

from celery.signals import setup_logging
from django.conf import settings

@setup_logging.connect
def config_loggers(*args, **kwargs):
    dictConfig(settings.LOGGING)

Now Celery should use the same logging configuration. We'll also add the Django-structlog Celery integration, which logs the lifecycle of a Celery task. Just set the flag to True, as in the snippet below.

DJANGO_STRUCTLOG_CELERY_ENABLED = True

And also add the DjangoStructLogInitStep:

from celery import Celery
from django_structlog.celery.steps import DjangoStructLogInitStep

app = Celery("my_app")

app.steps["worker"].add(DjangoStructLogInitStep)

The Structlog Celery receiver will log task events such as started/completed/failed alongside all other Celery events, and the logs will use the common format we defined for Django. The image below shows both.

[Image: celery-logging]

If you want to customize which Celery receiver logs are emitted, for example to suppress info logs such as the task_received and task_started events, you can raise the level on the django_structlog.celery logger.

    "loggers": {
        "django_structlog": {
            "level": DJANGO_LOG_LEVEL,
        },
        # Django Structlog Celery receivers
        "django_structlog.celery": {
            "level": DJANGO_CELERY_LOG_LEVEL,
        },
    },

Full Configuration Example

Settings are separated into base.py and production.py, similar to the Django-cookiecutter setup. We also use environment variables to configure log levels for the default logger, the Django-structlog request middleware, the Django-structlog Celery receiver and database logs.

We choose the level WARNING for the root logger, which matches the default of the Python logging package. This keeps things less noisy when logs are propagated from third-party packages to the root logger; override the log level on a per-child/module basis where needed.

base.py

import os

import structlog
from opentelemetry import trace

DJANGO_LOG_LEVEL = os.getenv("DJANGO_LOG_LEVEL", "DEBUG")
DJANGO_REQUEST_LOG_LEVEL = os.getenv("DJANGO_REQUEST_LOG_LEVEL", "INFO")
DJANGO_CELERY_LOG_LEVEL = os.getenv("DJANGO_CELERY_LOG_LEVEL", "INFO")
DJANGO_DATABASE_LOG_LEVEL = os.getenv("DJANGO_DATABASE_LOG_LEVEL", "CRITICAL")


LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "colored_console": {
            "()": structlog.stdlib.ProcessorFormatter,
            "processor": structlog.dev.ConsoleRenderer(colors=True),
        },
        "json_formatter": {
            "()": structlog.stdlib.ProcessorFormatter,
            "processor": structlog.processors.JSONRenderer(),
        },
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "formatter": "colored_console",
        },
        "json": {
            "class": "logging.StreamHandler",
            "formatter": "json_formatter",
        },
        "null": {
            "class": "logging.NullHandler",
        },
    },
    "root": {
        "handlers": ["console"],
        "level": "WARNING",
    },
    "loggers": {
        "django_structlog": {
            "level": DJANGO_LOG_LEVEL,
        },
        # Django Structlog request middlewares
        "django_structlog.middlewares": {
            "level": DJANGO_REQUEST_LOG_LEVEL,
        },
        # Django Structlog Celery receivers
        "django_structlog.celery": {
            "level": DJANGO_CELERY_LOG_LEVEL,
        },
        "jobs_aggregator": {
            "level": DJANGO_LOG_LEVEL,
        },
        # DB logs
        "django.db.backends": {
            "level": DJANGO_DATABASE_LOG_LEVEL,
        },
        # Use structlog middleware
        "django.server": {
            "handlers": ["null"],
            "propagate": False,
        },
        # Use structlog middleware
        "django.request": {
            "handlers": ["null"],
            "propagate": False,
        },
        # Use structlog middleware
        "django.channels.server": {
            "handlers": ["null"],
            "propagate": False,
        },
        # Use structlog middleware
        "werkzeug": {
            "handlers": ["null"],
            "propagate": False,
        },
    }
}


def add_open_telemetry_spans(_, __, event_dict):
    span = trace.get_current_span()
    if not span.is_recording():
        return event_dict

    ctx = span.get_span_context()
    parent = getattr(span, "parent", None)

    event_dict["span_id"] = f"{ctx.span_id:x}"
    event_dict["trace_id"] = f"{ctx.trace_id:x}"
    if parent:
        event_dict["parent_span_id"] = f"{parent.span_id:x}"

    return event_dict


base_structlog_processors = [
    structlog.contextvars.merge_contextvars,
    structlog.stdlib.add_logger_name,
    structlog.stdlib.add_log_level,
    structlog.stdlib.filter_by_level,
    add_open_telemetry_spans,
    # Perform %-style formatting.
    structlog.stdlib.PositionalArgumentsFormatter(),
    # Add a timestamp in ISO 8601 format.
    structlog.processors.TimeStamper(fmt="iso"),
    structlog.processors.StackInfoRenderer(),
    # If some value is in bytes, decode it to a unicode str.
    structlog.processors.UnicodeDecoder(),
    # Add callsite parameters.
    structlog.processors.CallsiteParameterAdder(
        {
            structlog.processors.CallsiteParameter.FILENAME,
            structlog.processors.CallsiteParameter.FUNC_NAME,
            structlog.processors.CallsiteParameter.LINENO,
        }
    ),
]

base_structlog_formatter = [structlog.stdlib.ProcessorFormatter.wrap_for_formatter]

structlog.configure(
    processors=base_structlog_processors + base_structlog_formatter,  # type: ignore
    logger_factory=structlog.stdlib.LoggerFactory(),
    cache_logger_on_first_use=True,
)

DJANGO_STRUCTLOG_CELERY_ENABLED = True

In production, we want to replace the console handler with the json handler and convert tracebacks to Python dicts/JSON. I also prefer to only log requests at WARNING and above, i.e. those with a status code of 400 or higher. We do so by changing the default value of the DJANGO_REQUEST_LOG_LEVEL variable.

production.py

import os

import structlog

from .base import *  # noqa: F401,F403 - brings in LOGGING and the base processor lists

DJANGO_REQUEST_LOG_LEVEL = os.getenv("DJANGO_REQUEST_LOG_LEVEL", "WARNING")

LOGGING["root"]["handlers"] = ["json"]  # type: ignore

production_structlog_processors = [
    structlog.processors.dict_tracebacks,
]

structlog.configure(
    processors=base_structlog_processors
    + production_structlog_processors
    + base_structlog_formatter,  # type: ignore
    logger_factory=structlog.stdlib.LoggerFactory(),
    cache_logger_on_first_use=True,
)

Lastly, our Celery configuration for logging.

from logging.config import dictConfig

from celery import Celery
from celery.signals import setup_logging
from django.conf import settings
from django_structlog.celery.steps import DjangoStructLogInitStep

app = Celery("my_app")

app.steps["worker"].add(DjangoStructLogInitStep)

@setup_logging.connect
def config_loggers(*args, **kwargs):
    dictConfig(settings.LOGGING)

Summary

The Structlog package is an excellent way to implement structured logging in your Python project. It is feature-rich out of the box, with a simple API that lets you extend the logging setup to suit your needs, and it solves many common requirements: JSON logging, adding metadata to logs and colorized development logs, amongst other things.

The Django-structlog package enhances the Structlog experience for Django by providing a way to easily install Structlog and integrate it into your Django project. It also provides both a Django request middleware which logs the lifecycle events of requests and a Celery receiver which logs lifecycle events of tasks.

Both projects are great, and I use them to configure logging for my Django projects. The configuration presented in this blog post is opinionated, but I think it serves as a good basis for the common needs of readable development logs and JSON-formatted production logs. If you choose to use and extend it, feel free to share any additional metadata or processors you added to your configuration!

