Getting distributed tracing working in Django usually means stitching together OpenTelemetry SDK setup, Django instrumentation, OTLP export, database and Redis hooks, outbound HTTP tracing, and Celery propagation. This post walks through a general Django tracing setup with OpenTelemetry, what to instrument, how to propagate traces into Celery, and how to add custom span data. If you also want the logs and metrics side, see Django Development and Production Logging, Django Monitoring with Prometheus and Grafana, and Celery Monitoring with Prometheus and Grafana. I cover django-o11y near the end as the packaged version of this setup.
What Django tracing should capture
The useful part of tracing is not the top-level request span by itself. You want the whole request path in one trace:
- the incoming Django request
- database queries
- cache reads and writes
- outbound HTTP calls
- background work kicked off through Celery
When that is in place, a slow request stops being a black box. You can see whether the time went into SQL, Redis, an external API, or a worker task.
Install OpenTelemetry packages
Start with the OpenTelemetry SDK, OTLP exporter, and Django instrumentation:
```shell
pip install \
  opentelemetry-sdk \
  opentelemetry-exporter-otlp \
  opentelemetry-instrumentation-django
```
Then add instrumentors for the dependencies your Django app actually uses:
```shell
pip install \
  opentelemetry-instrumentation-psycopg2 \
  opentelemetry-instrumentation-psycopg \
  opentelemetry-instrumentation-redis \
  opentelemetry-instrumentation-requests \
  opentelemetry-instrumentation-httpx \
  opentelemetry-instrumentation-celery
```
You do not need every package here. Install the instrumentors that match your stack.
Configure tracing in Django
Set up the tracer provider and OTLP exporter in settings.py.
```python
# settings.py
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.instrumentation.django import DjangoInstrumentor
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

resource = Resource.create(
    {
        "service.name": "my-django-app",
        "deployment.environment": "production",
    }
)

trace_provider = TracerProvider(resource=resource)
trace_provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True)
    )
)
trace.set_tracer_provider(trace_provider)

DjangoInstrumentor().instrument()
```
service.name matters because it is how traces are grouped in backends like Tempo, Grafana Cloud, Honeycomb, or Datadog.
If you prefer environment variables, OpenTelemetry also supports standard settings like OTEL_SERVICE_NAME and OTEL_EXPORTER_OTLP_ENDPOINT.
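For example, the same configuration as above expressed through the standard SDK environment variables (a sketch, assuming a local OTLP endpoint on port 4317):

```shell
export OTEL_SERVICE_NAME="my-django-app"
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"
export OTEL_RESOURCE_ATTRIBUTES="deployment.environment=production"
```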
Instrument Django requests
With that in place, Django creates a server span for each request.
The span includes the request method, route, and response status code.
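Roughly, those attributes follow the OpenTelemetry HTTP semantic conventions. This sketch shows the shape of a request span's attribute set with illustrative values; the exact keys depend on the instrumentation and semantic-convention version in use:

```python
# Illustrative attributes for a Django server span, following the
# OpenTelemetry HTTP semantic conventions (exact keys vary by version).
request_span_attributes = {
    "http.method": "GET",
    "http.route": "orders/<int:order_id>/",
    "http.status_code": 200,
    "http.target": "/orders/42/",
}

# The route (not the concrete URL) is what makes spans groupable:
# every request to /orders/<id>/ shares the same http.route value.
assert request_span_attributes["http.route"] == "orders/<int:order_id>/"
```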
Instrument databases, Redis, and HTTP clients
Request spans are only the entry point. The child spans are what make traces useful.
For PostgreSQL and Redis:
```python
from opentelemetry.instrumentation.psycopg import PsycopgInstrumentor
from opentelemetry.instrumentation.psycopg2 import Psycopg2Instrumentor
from opentelemetry.instrumentation.redis import RedisInstrumentor

Psycopg2Instrumentor().instrument(enable_commenter=True)
PsycopgInstrumentor().instrument(skip_dep_check=True, enable_commenter=True)
RedisInstrumentor().instrument()
```
For outbound HTTP:
```python
from opentelemetry.instrumentation.httpx import HTTPXClientInstrumentor
from opentelemetry.instrumentation.requests import RequestsInstrumentor

RequestsInstrumentor().instrument()
HTTPXClientInstrumentor().instrument()
```
At this point a single request trace can show SQL queries, Redis calls, and external API latency in the same span tree.
Propagate traces into Celery
Celery is where many Django tracing setups stop short. The HTTP span exists, but the task execution becomes disconnected from the original request.
Install the Celery instrumentation and enable it in both the process that publishes tasks and the worker process: the publisher side injects the trace context into the task message, and the worker side extracts it.
```python
from opentelemetry.instrumentation.celery import CeleryInstrumentor

CeleryInstrumentor().instrument()
```
A task triggered by a request can then continue the trace, because the instrumentation carries the trace context through the broker in the message headers.
```python
from celery import shared_task

@shared_task
def generate_invoice(order_id: int) -> None:
    invoice = build_invoice(order_id)
    invoice.send()
```
If the trace is wired correctly, the worker span appears under the originating request trace instead of as an unrelated root span.
This part is worth testing early. It is common to think tracing is configured correctly because HTTP spans show up, while Celery tasks are still detached.
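Under the hood, the context travels as a W3C traceparent value: trace ID, parent span ID, and flags packed into one string. A stdlib-only sketch of the mechanism (the real implementation lives in OpenTelemetry's propagators; these function names are illustrative):

```python
# Minimal sketch of W3C TraceContext propagation, the mechanism the
# Celery instrumentation uses to carry trace context through the broker.

def build_traceparent(trace_id: int, span_id: int, sampled: bool = True) -> str:
    """Serialize trace context into a W3C traceparent header value."""
    flags = 0x01 if sampled else 0x00
    return f"00-{trace_id:032x}-{span_id:016x}-{flags:02x}"

def parse_traceparent(header: str) -> dict:
    """Restore trace context from a traceparent header value."""
    version, trace_id, span_id, flags = header.split("-")
    return {
        "trace_id": int(trace_id, 16),
        "span_id": int(span_id, 16),
        "sampled": bool(int(flags, 16) & 0x01),
    }

# Producer side: the web process puts the header into message metadata.
message_headers = {"traceparent": build_traceparent(trace_id=0xABC123, span_id=0x42)}

# Consumer side: the worker parses it and starts its span as a child,
# which is why the task span lands under the originating request trace.
ctx = parse_traceparent(message_headers["traceparent"])
assert ctx["trace_id"] == 0xABC123
assert ctx["sampled"] is True
```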
Add custom span data
Automatic instrumentation covers framework and library behavior. It does not know your application concepts.
For business or domain context, add attributes to the current span:
```python
from opentelemetry import trace

def checkout(request):
    span = trace.get_current_span()
    if span.is_recording():
        span.set_attribute("tenant.id", request.tenant.slug)
        span.set_attribute("tenant.plan", request.tenant.plan)
        span.set_attribute("checkout.variant", "v2")
    return process_checkout(request)
```
This makes traces much easier to filter and compare later.
You can also create your own spans around blocks of work:
```python
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

def import_orders(batch_id: str) -> None:
    with tracer.start_as_current_span("orders.import") as span:
        span.set_attribute("batch.id", batch_id)
        sync_orders(batch_id)
```
Manual spans help when a single view or task contains several meaningful steps and you want more than one long catch-all span.
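The parent/child nesting comes from context variables: the "current span" is local state that nested blocks push on entry and restore on exit. A toy stdlib illustration of the idea, not OpenTelemetry's actual code:

```python
import contextlib
import contextvars

# Toy illustration of how "current span" nesting works: OpenTelemetry
# stores the active span in a context variable, and start_as_current_span
# pushes a child and restores the parent on exit. Names are illustrative.

_current_span = contextvars.ContextVar("current_span", default=None)
finished = []

@contextlib.contextmanager
def start_as_current_span(name: str):
    span = {"name": name, "parent": _current_span.get()}
    token = _current_span.set(span)
    try:
        yield span
    finally:
        _current_span.reset(token)
        finished.append(span)

with start_as_current_span("orders.import") as outer:
    with start_as_current_span("orders.fetch") as inner:
        pass

assert inner["parent"] is outer   # child knows its parent
assert outer["parent"] is None    # outer is the root of this subtree
```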
Send traces somewhere you can inspect them
The exporter needs a collector or backend on the other side. For local development, Tempo plus Grafana is a practical setup. Grafana Alloy or the OpenTelemetry Collector can receive OTLP traffic and forward it to Tempo.
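As a rough sketch, a minimal OpenTelemetry Collector configuration that receives OTLP over gRPC and forwards traces to Tempo might look like this (the `tempo:4317` endpoint is an assumption for a local docker-compose setup):

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  otlp/tempo:
    endpoint: tempo:4317
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp/tempo]
```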
Once that is running, generate a request and inspect the trace tree in Grafana Explore. You should see:
- one Django request span
- child spans for SQL and cache calls
- child spans for outbound HTTP calls
- a linked or nested span for the Celery task if the request triggered one
If you only see the top-level request span, the instrumentation coverage is incomplete.
The packaged version: django-o11y
This is the setup I got tired of rebuilding. django-o11y is the package I put together to bundle these tracing patterns, along with logs, metrics, and profiling, into one installable configuration.
It builds on the same patterns covered in Django Development and Production Logging, Django Monitoring with Prometheus and Grafana, and Celery Monitoring with Prometheus and Grafana.
Instead of wiring the SDK, exporter, middleware, and instrumentors by hand, you can do:
```shell
pip install django-o11y[postgres,redis,http,celery]
```
```python
from django_o11y.logging.setup import build_logging_dict

LOGGING = build_logging_dict()

INSTALLED_APPS = [
    "django_o11y",
    "django_prometheus",
    # ...
]

MIDDLEWARE = [
    "django_prometheus.middleware.PrometheusBeforeMiddleware",
    "django.middleware.security.SecurityMiddleware",
    "django.contrib.sessions.middleware.SessionMiddleware",
    "django.middleware.common.CommonMiddleware",
    "django.middleware.csrf.CsrfViewMiddleware",
    "django.contrib.auth.middleware.AuthenticationMiddleware",
    "django_o11y.tracing.middleware.TracingMiddleware",
    "django_o11y.logging.middleware.LoggingMiddleware",
    "django.contrib.messages.middleware.MessageMiddleware",
    "django.middleware.clickjacking.XFrameOptionsMiddleware",
    "django_prometheus.middleware.PrometheusAfterMiddleware",
]

DJANGO_O11Y = {
    "SERVICE_NAME": "my-django-app",
    "RESOURCE_ATTRIBUTES": {
        "deployment.environment": "production",
        "service.namespace": "web",
    },
    "TRACING": {
        "ENABLED": True,
        "OTLP_ENDPOINT": "http://localhost:4317",
        "SAMPLE_RATE": 0.1,
    },
    "CELERY": {
        "ENABLED": True,
    },
}
```
That setup gives you:
- Django request spans
- PostgreSQL, MySQL, SQLite, and Redis spans when the matching extras are installed
- outbound HTTP spans for requests, urllib3, urllib, and httpx
- Celery trace propagation via W3C TraceContext
- helper functions for custom tags and manual spans
- a local stack command with Grafana, Tempo, Loki, Prometheus, Alloy, and Pyroscope
The docs are at adinhodovic.github.io/django-o11y, and the repo is at github.com/adinhodovic/django-o11y.