Context Propagation: Correlating Logs, Traces, and Metrics
Context propagation is the mechanism that connects all telemetry signals generated while processing a request, no matter how many services the request traverses. Without context propagation, traces, logs, and metrics remain isolated signals; with it, they become a coherent narrative telling the complete story of every request.
In this article, we will analyze context propagation mechanisms in OpenTelemetry, from trace context to baggage, from log-trace correlation to patterns ensuring no signal is lost across service boundaries.
What You Will Learn in This Article
- How trace context propagation works between services
- W3C Baggage: transporting custom metadata between services
- Log-trace correlation: linking structured logs to traces
- Patterns for complete cross-service correlation
- Handling propagation in asynchronous scenarios (queues, events)
- Troubleshooting propagation issues
Trace Context Propagation
Trace context propagation is the fundamental mechanism connecting spans from different services into the same trace. When one service calls another, the trace context (trace_id, span_id, trace_flags) is serialized into request headers and deserialized by the receiving service to create a child span.
OpenTelemetry uses Propagators to manage context serialization and deserialization. The default propagator is W3C TraceContext, but OTel also supports B3 (Zipkin), Jaeger, and custom formats.
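To make the wire format concrete, here is a minimal, illustrative parser for the W3C `traceparent` header. The field layout follows the W3C Trace Context specification; the helper name `parse_traceparent` is our own, not part of the OTel API, which handles all of this for you via `inject`/`extract`.

```python
# Illustrative sketch of the W3C traceparent wire format:
# version "00" - 16-byte trace-id (hex) - 8-byte parent-id (hex) - flags
def parse_traceparent(header: str) -> dict:
    """Split a traceparent header into its four fields, with basic validation."""
    parts = header.split("-")
    if len(parts) != 4:
        raise ValueError("traceparent must have exactly 4 fields")
    version, trace_id, parent_id, flags = parts
    if len(trace_id) != 32 or len(parent_id) != 16:
        raise ValueError("malformed trace-id or parent-id")
    if trace_id == "0" * 32 or parent_id == "0" * 16:
        raise ValueError("all-zero ids are invalid")
    return {
        "version": version,
        "trace_id": trace_id,
        "parent_id": parent_id,
        "sampled": int(flags, 16) & 0x01 == 0x01,  # bit 0 = sampled flag
    }

tp = parse_traceparent("00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01")
```

Seeing the four fields laid out this way makes it easier to read the header values that appear in the examples below.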
```python
from opentelemetry import trace
from opentelemetry.propagate import inject, extract
import requests

tracer = trace.get_tracer("order-service")

# --- CLIENT SIDE: inject context into request ---
def call_payment_service(order):
    with tracer.start_as_current_span("call-payment-service") as span:
        span.set_attribute("order.id", order.id)

        # Inject the current trace context into a headers dictionary
        headers = {}
        inject(headers)
        # headers now contains:
        # { "traceparent": "00-4bf92f35...-00f067aa...-01",
        #   "tracestate": "" }

        # Send the request with the context headers attached
        response = requests.post(
            "http://payment-service/api/charge",
            json={"order_id": order.id, "amount": order.total},
            headers=headers
        )
        return response.json()

# --- SERVER SIDE: extract context from request ---
from flask import Flask, request as flask_request

app = Flask(__name__)

@app.route("/api/charge", methods=["POST"])
def handle_charge():
    # Extract the trace context from the incoming request headers
    ctx = extract(flask_request.headers)

    # Create a child span within the extracted context
    with tracer.start_as_current_span(
        "process-charge",
        context=ctx,
        kind=trace.SpanKind.SERVER,
        attributes={
            "payment.amount": flask_request.json["amount"]
        }
    ) as span:
        result = process_payment(flask_request.json)
        span.set_attribute("payment.status", result.status)
        return result.to_json()
```
W3C Baggage: Metadata Between Services
W3C Baggage is a mechanism for transporting custom key-value pairs across service boundaries, along with the trace context. Unlike span attributes (which are local), baggage is propagated to all downstream services.
Baggage is useful for transporting context information that does not belong to the span but is needed by multiple services: customer tier, A/B test variant, request priority, origin region.
```python
from opentelemetry import baggage, context, trace
from opentelemetry.propagate import inject, extract

tracer = trace.get_tracer("order-service")

# --- Set baggage in the origin service ---
def handle_api_request(request):
    # Set baggage values on a new context
    ctx = baggage.set_baggage("customer.tier", "premium")
    ctx = baggage.set_baggage("ab.test.variant", "checkout-v2", context=ctx)
    ctx = baggage.set_baggage("request.priority", "high", context=ctx)

    # Attach the context so it becomes current
    token = context.attach(ctx)
    try:
        with tracer.start_as_current_span("handle-request") as span:
            # Baggage is propagated automatically
            # to all downstream services
            headers = {}
            inject(headers)
            # headers now also contains:
            # "baggage": "customer.tier=premium,ab.test.variant=checkout-v2,request.priority=high"
            call_downstream_service(headers)
    finally:
        context.detach(token)

# --- Read baggage in a downstream service ---
def handle_downstream_request(request):
    ctx = extract(request.headers)
    token = context.attach(ctx)
    try:
        # Read baggage values from the current context
        customer_tier = baggage.get_baggage("customer.tier")
        ab_variant = baggage.get_baggage("ab.test.variant")
        priority = baggage.get_baggage("request.priority")

        with tracer.start_as_current_span("downstream-operation") as span:
            # Promote baggage values to span attributes
            span.set_attribute("customer.tier", customer_tier or "standard")
            span.set_attribute("ab.test.variant", ab_variant or "control")

            # Differentiate behavior based on baggage
            if priority == "high":
                process_with_priority(request)
            else:
                process_normal(request)
    finally:
        context.detach(token)
```
Baggage: When to Use and When to Avoid
| Scenario | Recommendation | Reason |
|---|---|---|
| Customer tier for routing | Yes, use baggage | Needed by multiple services for routing decisions |
| A/B test variant | Yes, use baggage | Every service needs to know which variant to serve |
| Sensitive data (PII, tokens) | No, never baggage | Baggage travels in plain text in HTTP headers |
| Large data (payload, JSON) | No, avoid | Baggage increases the size of every request |
| Request ID / correlation ID | Use trace_id instead | trace_id is already propagated by trace context |
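Since the table warns that baggage travels in plain text, it helps to see exactly what the `baggage` header looks like on the wire. Below is a minimal, stdlib-only sketch of how a W3C `baggage` header value is structured; OTel's propagator does this for you, and `parse_baggage` is an illustrative name of our own that drops per-entry properties rather than handling them fully.

```python
from urllib.parse import unquote

def parse_baggage(header: str) -> dict:
    """Parse a W3C baggage header value into a dict of key -> value.

    Entries are comma-separated key=value pairs; values are percent-encoded,
    and each entry may carry optional ';'-separated properties (dropped here).
    """
    entries = {}
    for member in header.split(","):
        entry = member.split(";")[0].strip()  # drop properties like ;metadata
        if "=" not in entry:
            continue
        key, _, value = entry.partition("=")
        entries[key.strip()] = unquote(value.strip())
    return entries

bag = parse_baggage("customer.tier=premium,ab.test.variant=checkout-v2;prop=1")
```

The plain-text format is also a reminder of the table's security point: anything placed in baggage is readable by every hop between services.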
Log-Trace Correlation
Log-trace correlation is the pattern that links logs generated during request processing to the corresponding trace. It is achieved by injecting trace_id and span_id into every log line, allowing filtering all logs associated with a specific trace.
```python
import logging
import json
from opentelemetry import trace

class TraceContextFilter(logging.Filter):
    """Filter that adds trace_id and span_id to every log record"""
    def filter(self, record):
        span = trace.get_current_span()
        if span and span.is_recording():
            ctx = span.get_span_context()
            record.trace_id = format(ctx.trace_id, "032x")
            record.span_id = format(ctx.span_id, "016x")
            record.trace_flags = format(ctx.trace_flags, "02x")
        else:
            # No active span: emit all-zero ids so the fields are always present
            record.trace_id = "0" * 32
            record.span_id = "0" * 16
            record.trace_flags = "00"
        record.service_name = "order-service"
        return True

# Configure the logger with the filter
logger = logging.getLogger("order-service")
logger.addFilter(TraceContextFilter())

# JSON formatter that includes the trace context fields
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter(json.dumps({
    "timestamp": "%(asctime)s",
    "level": "%(levelname)s",
    "message": "%(message)s",
    "service": "%(service_name)s",
    "trace_id": "%(trace_id)s",
    "span_id": "%(span_id)s",
    "trace_flags": "%(trace_flags)s",
    "logger": "%(name)s"
})))
logger.addHandler(handler)

# Usage: logs automatically carry trace_id and span_id
tracer = trace.get_tracer("order-service")

def process_order(order):
    with tracer.start_as_current_span("process-order") as span:
        logger.info(f"Processing order {order.id}")
        # Output: {"trace_id": "4bf92f35...", "span_id": "00f067aa...",
        #          "message": "Processing order ORD-123", ...}
        validate_order(order)
        logger.info(f"Order {order.id} validated successfully")
        process_payment(order)
        logger.info(f"Payment processed for order {order.id}")
```
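Once every log line carries a trace_id, pulling all logs for one request is a simple filter. A stdlib-only sketch over newline-delimited JSON logs follows; the field names match the formatter above, while `grep_trace` is a hypothetical helper of our own, standing in for the equivalent query in a log backend.

```python
import json

def grep_trace(log_lines, trace_id):
    """Return the parsed log records belonging to a single trace."""
    matches = []
    for line in log_lines:
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip non-JSON lines
        if record.get("trace_id") == trace_id:
            matches.append(record)
    return matches

logs = [
    '{"trace_id": "abc", "message": "Processing order ORD-123"}',
    '{"trace_id": "def", "message": "Unrelated request"}',
    '{"trace_id": "abc", "message": "Payment processed"}',
]
order_logs = grep_trace(logs, "abc")
```

In practice the same filter is a single query in tools like Loki, Elasticsearch, or CloudWatch: `trace_id = "<id>"`.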
Propagation in Asynchronous Scenarios
Context propagation in asynchronous scenarios (message queues, events, scheduled tasks) requires special attention. Unlike synchronous HTTP calls where headers carry context, in asynchronous systems the context must be serialized into the message payload or its metadata.
```python
from opentelemetry import trace
from opentelemetry.propagate import inject, extract
import json

tracer = trace.get_tracer("order-service")

# --- PRODUCER: inject context into Kafka message ---
def publish_order_event(order, kafka_producer):
    with tracer.start_as_current_span(
        "publish-order-created",
        kind=trace.SpanKind.PRODUCER,
        attributes={
            "messaging.system": "kafka",
            "messaging.destination.name": "order-events",
            "messaging.operation.type": "publish",
            "order.id": order.id
        }
    ) as span:
        # Inject the trace context into message headers
        headers = {}
        inject(headers)

        # Convert headers to the Kafka tuple format
        kafka_headers = [(k, v.encode()) for k, v in headers.items()]

        kafka_producer.produce(
            topic="order-events",
            key=order.id.encode(),
            value=json.dumps(order.to_dict()).encode(),
            headers=kafka_headers
        )

# --- CONSUMER: extract context from Kafka message ---
def consume_order_events(kafka_consumer):
    for message in kafka_consumer:
        # Convert Kafka headers back to a dictionary
        headers = {k: v.decode() for k, v in message.headers()}

        # Extract the trace context
        ctx = extract(headers)

        # Create a span in the extracted context
        with tracer.start_as_current_span(
            "process-order-event",
            context=ctx,
            kind=trace.SpanKind.CONSUMER,
            attributes={
                "messaging.system": "kafka",
                "messaging.destination.name": "order-events",
                "messaging.operation.type": "process"
            }
        ) as span:
            order_data = json.loads(message.value())
            span.set_attribute("order.id", order_data["id"])
            process_order_event(order_data)
```
Context Propagation Checklist
- Synchronous HTTP: context is propagated automatically by auto-instrumentation via HTTP headers
- gRPC: automatic propagation via gRPC metadata (auto-instrumentation)
- Message queue: manually inject/extract context in message headers
- Thread pool: capture and reattach context in the new thread with attach/detach
- Async/await: verify that the async framework maintains context (Python asyncio supports it natively)
- Scheduled tasks: create a new root span or link to the original trace
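The thread-pool item in the checklist deserves a concrete sketch. OpenTelemetry's Python context is built on the stdlib contextvars module, and context variables do not flow into pool threads on their own; snapshotting the caller's context with `contextvars.copy_context()` and running the task inside it carries the context across. Here a plain ContextVar stands in for the OTel context, so the example runs without the SDK installed.

```python
import contextvars
from concurrent.futures import ThreadPoolExecutor

# Stand-in for the OTel current context (trace_id, baggage, ...)
current_trace = contextvars.ContextVar("current_trace", default=None)

def worker():
    # Reads whatever context is active in the thread running this task
    return current_trace.get()

current_trace.set("trace-abc123")

with ThreadPoolExecutor(max_workers=1) as pool:
    # Without propagation: the pool thread sees only the default value
    lost = pool.submit(worker).result()

    # With propagation: snapshot the caller's context and run inside it
    ctx = contextvars.copy_context()
    kept = pool.submit(ctx.run, worker).result()
```

With the OTel API the same pattern uses `context.get_current()` in the caller and `context.attach`/`context.detach` inside the worker.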
Propagation Troubleshooting
Propagation issues are among the most common when adopting observability. When traces appear "broken" (isolated spans without parents), there is typically a context propagation problem. Here are the most frequent causes and solutions.
Common Propagation Problems
Broken traces: spans appear as separate traces instead of connected. Cause: an intermediate proxy or load balancer removes traceparent headers. Solution: configure the proxy to pass tracing headers through.

Context lost in threads: spans created in secondary threads have no parent. Cause: context is not propagated across thread boundaries. Solution: use context.get_current() and attach() in the new thread.

Propagator not configured: services use different formats (W3C vs B3). Solution: configure the CompositePropagator with all required formats.

Baggage not propagated: baggage values do not reach downstream services. Cause: BaggagePropagator is not registered. Solution: add W3CBaggagePropagator to the propagators configuration.
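When debugging these problems, a quick first step is to log which propagation headers actually arrive at each service. A small stdlib-only diagnostic sketch follows; the header names are the standard W3C and B3 ones, while `check_propagation_headers` is a hypothetical helper of our own.

```python
def check_propagation_headers(headers: dict) -> dict:
    """Report which propagation headers are present (keys compared case-insensitively)."""
    lowered = {k.lower(): v for k, v in headers.items()}
    report = {
        "w3c_tracecontext": "traceparent" in lowered,
        "w3c_baggage": "baggage" in lowered,
        "b3": any(h in lowered for h in ("b3", "x-b3-traceid")),
    }
    # A well-formed traceparent has exactly four '-'-separated fields
    tp = lowered.get("traceparent", "")
    report["traceparent_wellformed"] = len(tp.split("-")) == 4
    return report

report = check_propagation_headers({
    "Traceparent": "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01",
    "baggage": "customer.tier=premium",
})
```

Running such a check at the edge of each service quickly shows whether a proxy is stripping headers or whether two services disagree on the propagation format.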
Conclusions and Next Steps
Context propagation is the glue that holds the entire observability system together. Without it, traces, logs, and metrics remain isolated signals. With correct propagation, they become a unified narrative that allows following every request from entry point to response, across all involved services.
The three key mechanisms are: W3C TraceContext for trace_id propagation, W3C Baggage for custom metadata transport, and log-trace correlation for linking logs to corresponding traces.
In the next article, we will explore eBPF instrumentation, a revolutionary technique that provides kernel-level observability without modifying application code and without agents, using eBPF programs to intercept system calls.