
Django Error Tracking and Performance Monitoring with Sentry



As good a framework as Django is, its default approach of sending an email for every error leaves much to be desired. Database or cache acting flaky? Expect thousands of emails describing that error, usually without enough detail for you to understand the issue. Luckily, Sentry, an error tracking and monitoring platform, provides out-of-the-box integrations for Django and Celery. Sentry documents its Django integration well, but there are multiple advanced use cases worth going into, for example:

  • How to disable transaction sampling for specific views such as health checks or Prometheus metrics?
  • How to correlate versions and environments with errors?
  • How to integrate Sentry into your CI to create releases?
  • How to add spans for transactions?

All of these things will be covered in this blog post.

Setup Sentry

The setup of Sentry is well covered in their documentation, but I'll include it here for an end-to-end example. First, install the Sentry SDK (with pip or poetry):

poetry add sentry-sdk

Ensure that you've created a project on the Sentry homepage that you will use for your application. Now, let's instantiate Sentry:

import logging

import sentry_sdk
from sentry_sdk.integrations.django import DjangoIntegration
from sentry_sdk.integrations.logging import LoggingIntegration

# `env` is an environment-variable helper, e.g. django-environ's Env
SENTRY_DSN = env.str("SENTRY_DSN")  # DSN of your Sentry project
SENTRY_LOG_LEVEL = env.int("DJANGO_SENTRY_LOG_LEVEL", logging.INFO)

sentry_logging = LoggingIntegration(
    level=SENTRY_LOG_LEVEL,  # Capture info and above as breadcrumbs
    event_level=logging.ERROR,  # Send errors as events
)
integrations = [
    sentry_logging,
    DjangoIntegration(),
]
sentry_sdk.init(
    dsn=SENTRY_DSN,
    integrations=integrations,
    environment=env.str("SENTRY_ENVIRONMENT", default=None),
    sample_rate=env.float("SENTRY_SAMPLE_RATE", default=1.0),
    release=env.str("SENTRY_RELEASE", default=None),
)

The above depends on a couple of optional environment variables:

  • SENTRY_ENVIRONMENT - the environment this application is running in, for example staging or production.
  • SENTRY_RELEASE - if you version your artifacts, set this variable to the version, for example 0.0.1 or a commit SHA like 23k42of4vc.
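The snippets in this post read configuration through an `env` helper. Any env-reading utility works; the sketch below is a minimal standard-library stand-in (the real code presumably uses something like django-environ), just to make the semantics of `env.str`/`env.float` with defaults concrete:

```python
import os

class Env:
    """Minimal stand-in for an env helper such as django-environ's Env."""

    def str(self, name, default=None):
        return os.environ.get(name, default)

    def float(self, name, default=None):
        raw = os.environ.get(name)
        return float(raw) if raw is not None else default

env = Env()
os.environ["SENTRY_ENVIRONMENT"] = "staging"

print(env.str("SENTRY_ENVIRONMENT"))                 # staging
print(env.float("SENTRY_SAMPLE_RATE", default=1.0))  # 1.0 (variable unset)
```

With defaults in place, a missing optional variable like SENTRY_RELEASE simply resolves to None instead of raising at startup.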

Lastly, we'll need to adjust the loggers to add Sentry for handling errors:

LOGGING = {
    "version": 1,
    "disable_existing_loggers": True,
    "formatters": {
        "verbose": {
            "format": "%(levelname)s %(asctime)s %(module)s "
            "%(process)d %(thread)d %(message)s"
        }
    },
    "handlers": {
        "console": {
            "level": "DEBUG",
            "class": "logging.StreamHandler",
            "formatter": "verbose",
        }
    },

    "loggers": {
        "sentry_sdk": {"level": "ERROR", "handlers": ["console"], "propagate": False}, # Add this line
    }
}
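To make the breadcrumb/event split concrete, here's a self-contained sketch (no Sentry required) that applies the same thresholds LoggingIntegration is configured with above: records at INFO and above become breadcrumbs, records at ERROR and above become events:

```python
import logging

records = []

class CollectHandler(logging.Handler):
    """Collects log records so we can classify them the way Sentry would."""
    def emit(self, record):
        records.append(record)

logger = logging.getLogger("myapp.sync")
logger.setLevel(logging.DEBUG)
logger.propagate = False
logger.addHandler(CollectHandler())

logger.debug("cache miss for user 42")  # below INFO: ignored by Sentry
logger.info("sync started")             # kept as a breadcrumb
logger.error("sync failed")             # sent to Sentry as an event

# Mirror LoggingIntegration(level=INFO, event_level=ERROR):
breadcrumbs = [r for r in records if logging.INFO <= r.levelno < logging.ERROR]
events = [r for r in records if r.levelno >= logging.ERROR]
print(len(breadcrumbs), len(events))  # 1 1
```

The breadcrumbs are attached to the event when it is sent, so the INFO line shows up as context on the "sync failed" issue in Sentry.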

Adding Celery and Redis Integrations

Sentry has native integrations for Celery and Redis, so adding them is very straightforward:

from sentry_sdk.integrations.celery import CeleryIntegration
from sentry_sdk.integrations.django import DjangoIntegration
from sentry_sdk.integrations.logging import LoggingIntegration
from sentry_sdk.integrations.redis import RedisIntegration

integrations = [
    sentry_logging,
    DjangoIntegration(),
    CeleryIntegration(),
    RedisIntegration(),
]
...

Extend the integrations in the previous example as shown above.

Creating Sentry Releases

Sentry releases are versions of your application that also map to environments in Sentry. They are great for correlating when an issue was introduced, the frequency of that issue across releases, and the code changes responsible for it, among other things. Creating a Sentry release whenever you deploy an application artifact works great, and it should be integrated into your CI/CD workflow. We are using GitHub Actions, and Sentry provides an open-source action for creating releases. First, you'll need to set up the required prerequisites by creating a Sentry integration. Then we can create a Sentry release with the snippet below:

- name: Create Sentry Release
  uses: getsentry/action-release@v1
  if: github.event_name == 'push' && github.ref == 'refs/heads/main' # On pushes to main release to staging
  env:
    SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}
    SENTRY_ORG: <my-org>
    SENTRY_PROJECT: ${{ github.event.repository.name }}
  with:
    environment: staging
    version: ${{ github.sha }}

Now you'll start seeing releases as in the below image:

[Image: Sentry production releases]

Adding a Custom Traces Sampler

Performance monitoring has a concept called the sampling rate, which decides how many transactions (views/tasks) are sent to Sentry. Initially you might send 100% of transactions: every HTTP request and Celery task is sent as a transaction to Sentry. However, once you start receiving more traffic this quickly becomes unsustainable and expensive. You'll end up lowering the sampling rate, but you may want the sampling to be dynamic, where important views and tasks have a much higher sampling rate than the default. You might also have noisy transactions such as health checks (for both Django and Celery) or metric scraping. For these use cases Sentry allows us to define a traces sampler function, which can provide dynamic sampling rates based on whatever logic you add to it.

In my use case I need Sentry to sample core tasks at a 100% rate and to mute specific endpoints (health checks and Prometheus metrics). I'll provide an example of this below.

First let's define the default transaction sampling rate at 10%:

DEFAULT_SAMPLING_RATE = env.float("SENTRY_TRACES_SAMPLE_RATE", default=0.10)

Now we'll define our custom traces sampler function which has the capability of providing custom sampling rates for HTTP views and Celery tasks:

def traces_sampler(sampling_context):
    """
    Sampling function for Sentry.
    See https://docs.sentry.io/platforms/python/guides/django/configuration/sampling/ for more details.
    """

    # Django Views
    # Use `wsgi_environ` from the Django integration as it contains the parsed URL data.
    if sampling_context.get("wsgi_environ"):
        request_route = sampling_context["wsgi_environ"]["PATH_INFO"]
        if request_route in SENTRY_ROUTE_RATES:
            return SENTRY_ROUTE_RATES[request_route]

    # Celery Tasks
    transaction_name = sampling_context["transaction_context"]["name"]
    if transaction_name in SENTRY_TASK_RATES:
        return SENTRY_TASK_RATES[transaction_name]

    return DEFAULT_SAMPLING_RATE

The above example has two conditional blocks: the first checks whether the request path exists in SENTRY_ROUTE_RATES, and the second checks whether the transaction name exists in SENTRY_TASK_RATES (the Celery integration sets the transaction name to the Celery task name). In both cases we return the custom sampling rate.

We can now use the traces_sampler function when instantiating our sentry_sdk:

sentry_sdk.init(
    dsn=SENTRY_DSN,
    integrations=integrations,
    environment=env.str("SENTRY_ENVIRONMENT", default=None),
    traces_sampler=traces_sampler,  # Our custom traces_sampler
    release=env.str("SENTRY_RELEASE", default=None),
)

Muting Specific Views and Tasks

Now that we have our custom traces sampler, let's set the variables that define the sampling rates for tasks and for routes/paths:

SENTRY_ROUTE_RATES = {
    "/health/": 0.0, # Health checks
    "/prometheus/metrics": 0.0, # Prometheus metrics
    "/my/important/view": 1.0,
}
SENTRY_TASK_RATES = {
    "my.core.task": 1.0,  # Core task for the system
    "celery.backend_cleanup": 0.0, # Backend cleanups
}

The above provides a good default base configuration that I use across projects.
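Since traces_sampler is a plain function, you can sanity-check the routing logic directly with hand-built sampling contexts. The dict shapes below are simplified versions of what the SDK passes (assumed from the sampling docs), and the rates are restated so the example is self-contained:

```python
DEFAULT_SAMPLING_RATE = 0.10
SENTRY_ROUTE_RATES = {"/health/": 0.0, "/my/important/view": 1.0}
SENTRY_TASK_RATES = {"my.core.task": 1.0}

def traces_sampler(sampling_context):
    # HTTP requests: route-based rates take priority
    if sampling_context.get("wsgi_environ"):
        route = sampling_context["wsgi_environ"]["PATH_INFO"]
        if route in SENTRY_ROUTE_RATES:
            return SENTRY_ROUTE_RATES[route]
    # Celery tasks: the transaction name is the task name
    name = sampling_context["transaction_context"]["name"]
    if name in SENTRY_TASK_RATES:
        return SENTRY_TASK_RATES[name]
    return DEFAULT_SAMPLING_RATE

# Health checks are muted, core tasks fully sampled, everything else at 10%:
print(traces_sampler({"wsgi_environ": {"PATH_INFO": "/health/"},
                      "transaction_context": {"name": "GET /health/"}}))  # 0.0
print(traces_sampler({"transaction_context": {"name": "my.core.task"}}))  # 1.0
print(traces_sampler({"transaction_context": {"name": "other.task"}}))    # 0.1
```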

Adding Spans to Transactions

Although the out-of-the-box integrations Sentry provides for Python, Redis, Django, etc. are great, sometimes transaction spans are missing instrumentation, or you may want to add custom instrumentation spans for specific blocks of your code. This is also easy to do with Sentry; here's an example:

from sentry_sdk import start_span

def my_complex_function():
    with start_span(
        op="complex_function_1",
        description="Running function 1",
    ) as span:
        span.set_data("my_app.important_metadata_key_1", value)
        span.set_data("my_app.important_metadata_key_2", value)
        xyz.function()
        abc.function()

    with start_span(
        op="complex_function_2",
        description="Running function 2",
    ) as span:
        span.set_data("my_app.important_metadata_key_1", value)
        span.set_data("my_app.important_metadata_key_2", value)
        xyz.function()
        abc.function()

Now each transaction that contains my_complex_function will have two spans, one called complex_function_1 and the other complex_function_2. Each will carry all the metadata of the request or Celery task, plus any additional metadata you attach. Alongside that, you'll be able to see how long each span took and whether it raised any errors!
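If you wrap many functions in spans like this, a small decorator keeps the call sites tidy. The traced helper below is hypothetical (not part of the SDK); it takes the span factory as a parameter, so in real code you'd pass sentry_sdk.start_span, while this sketch uses a stand-in factory so it runs without Sentry installed:

```python
import functools
from contextlib import contextmanager

def traced(op, span_factory):
    """Wrap a function in a span from `span_factory`
    (pass sentry_sdk.start_span in real code)."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            with span_factory(op=op, description=func.__name__):
                return func(*args, **kwargs)
        return wrapper
    return decorator

# Stand-in span factory, used here so the sketch runs without Sentry:
calls = []

@contextmanager
def fake_span(op, description):
    calls.append((op, description))
    yield

@traced(op="my_app.sync", span_factory=fake_span)
def sync_accounts():
    return "done"

print(sync_accounts())  # done
print(calls)            # [('my_app.sync', 'sync_accounts')]
```

Injecting the factory also makes the decorator trivial to unit-test, as shown with fake_span above.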

Summary

Sentry is a great replacement for Django's error emails, and for Django error handling in general. On top of that, it provides great performance insights for both Celery and Django. The insights become even more detailed when you add Sentry release creation to your CI/CD pipeline, custom transaction spans with custom metadata, versioning, and environments. All of this makes it possible to pinpoint which code change caused an issue, when it was deployed, to which environment, and how it performed. On top of that, the granularity of a custom traces sampler function lets us sample a high percentage of, for example, important long-running Celery tasks, making it possible to track every error and transaction for those tasks!

