Monitoring Python Applications with Horux

Horux Team

January 14, 2026

• 4 min read

Python Monitoring Made Simple

Python powers everything from simple scripts to massive AI pipelines and high-scale web APIs. But monitoring Python apps often involves juggling multiple libraries—one for logging, another for metrics, and a third for tracing.

We built the Horux Python SDK to change that. It's a single, lightweight library that handles metrics, logs, and distributed tracing with zero friction.

In this guide, we'll show you how to instrument your Python applications in minutes.

Installation

First, install the package via pip:

pip install horux-client

The Basics

Initializing the client is straightforward. You can configure it via environment variables (recommended for production) or directly in code.

from horux_client import HoruxClient, HoruxConfig

# Automatically loads HORUX_API_TOKEN and HORUX_SERVICE_ID from env
config = HoruxConfig.from_env()

with HoruxClient(config) as client:
    client.metrics.counter("script.execution", 1)
    client.logs.info("Script finished successfully")

Using the with statement ensures that any buffered metrics or logs are flushed before your script exits.
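
If you'd rather configure the client in code (say, for a quick local experiment), you can construct the config object directly. The keyword arguments below (api_token, service_id) are illustrative; check the SDK reference for the exact fields it accepts.

from horux_client import HoruxClient, HoruxConfig

# Assumed keyword arguments -- confirm the exact names in the SDK docs
config = HoruxConfig(
    api_token="hx_your_token_here",
    service_id="billing-service",
)

with HoruxClient(config) as client:
    client.logs.info("Configured in code")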

Instrumenting FastAPI

FastAPI is the modern standard for Python APIs. Here's how to add full monitoring to a FastAPI app using middleware.

import time
from fastapi import FastAPI, Request
from horux_client import HoruxClient, HoruxConfig

app = FastAPI()

# Initialize the client globally
client = HoruxClient(HoruxConfig.from_env())

@app.middleware("http")
async def monitor_requests(request: Request, call_next):
    start_time = time.time()
    
    # Process the request
    response = await call_next(request)
    
    # Calculate duration
    duration = time.time() - start_time
    
    # 1. Track request count
    client.metrics.counter("http_requests_total", 1, {
        "method": request.method,
        "path": request.url.path,
        "status": str(response.status_code)
    })
    
    # 2. Track latency
    # Note: a gauge keeps this example simple; for real latency
    # distributions, a histogram is usually the better fit
    client.metrics.gauge("http_request_duration_seconds", duration, {
        "method": request.method
    })
    
    # 3. Log the request with context
    client.logs.info(f"Request processed: {request.method} {request.url.path}", {
        "method": request.method,
        "path": request.url.path,
        "status_code": response.status_code,
        "duration_ms": duration * 1000,
        "user_agent": request.headers.get("user-agent")
    })
    
    return response

# Flush any buffered data before the process exits
# (on recent FastAPI versions, a lifespan handler is preferred over on_event)
@app.on_event("shutdown")
def shutdown_event():
    client.close()

With just this middleware, you get:

  • Traffic volume: Requests per second/minute
  • Error rates: 4xx and 5xx tracking
  • Latency: How long endpoints take to respond
  • Access logs: Structured logs for every request
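
One caveat: the middleware above only records a request once call_next returns a response, so an unhandled exception in a route handler never shows up in these numbers. Below is a minimal sketch of an exception-aware variant, reusing the app, client, and imports from the block above; the metric names match the earlier example, but treat the error-handling details as a starting point rather than a prescription.

@app.middleware("http")
async def monitor_requests(request: Request, call_next):
    start_time = time.time()
    try:
        response = await call_next(request)
    except Exception as exc:
        # Count and log the failure, then re-raise so FastAPI's own
        # error handling still returns a 500 to the caller
        client.metrics.counter("http_requests_total", 1, {
            "method": request.method,
            "path": request.url.path,
            "status": "500"
        })
        client.logs.error(f"Unhandled error: {request.method} {request.url.path}", {
            "error": str(exc),
            "duration_ms": (time.time() - start_time) * 1000
        })
        raise

    # ...record the success metrics and access log exactly as shown above...
    return response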

Working with Flask

If you're using Flask, the same pattern works with the before_request and after_request hooks:

from flask import Flask, request, g
from horux_client import HoruxClient, HoruxConfig
import time

app = Flask(__name__)
client = HoruxClient(HoruxConfig.from_env())

@app.before_request
def start_timer():
    g.start_time = time.time()

@app.after_request
def record_metrics(response):
    if hasattr(g, 'start_time'):
        duration = time.time() - g.start_time
        
        client.metrics.counter("http_requests_total", 1, {
            "method": request.method,
            "endpoint": request.endpoint or "unknown",
            "status": str(response.status_code)
        })
        
        client.logs.info(f"Handled {request.method} {request.path}", {
            "status": response.status_code,
            "duration_ms": duration * 1000
        })
        
    return response
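
Note that Flask skips after_request handlers when a view raises an unhandled exception, so those failures would be invisible to the hooks above. Here is a small sketch using teardown_request, which runs even on failure (behavior differs slightly in debug mode), continuing from the app and client defined above:

@app.teardown_request
def record_failures(error):
    # Only act when the request ended with an unhandled exception;
    # successful requests are already covered by after_request
    if error is not None:
        client.metrics.counter("http_requests_total", 1, {
            "method": request.method,
            "endpoint": request.endpoint or "unknown",
            "status": "500"
        })
        client.logs.error(f"Unhandled error in {request.method} {request.path}", {
            "error": str(error)
        })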

Background Tasks and Workers

Monitoring isn't just for HTTP APIs. Background workers (Celery tasks or plain worker loops) are critical to monitor since they often fail silently.

Here's how to monitor a simple worker process:

import time
import traceback

from horux_client import HoruxClient, HoruxConfig

def process_job(job_id):
    client = HoruxClient(HoruxConfig.from_env())
    
    start = time.time()
    try:
        # Simulate work
        print(f"Processing job {job_id}...")
        time.sleep(0.5)
        
        duration = time.time() - start
        
        # Track success
        client.metrics.counter("jobs.processed", 1, {"status": "success"})
        client.metrics.gauge("job.duration", duration)
        
    except Exception as e:
        # Track failure
        client.metrics.counter("jobs.processed", 1, {"status": "failure"})
        
        # Log the error with its stack trace
        client.logs.error(f"Job {job_id} failed", {
            "error": str(e),
            "traceback": traceback.format_exc(),
            "job_id": job_id
        })
        raise
    finally:
        # Important for short-lived scripts!
        client.close()
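
Creating and closing a client inside every job works, but it gets repetitive once you have more than a handful of job types. One option is a small decorator that owns the client lifecycle and the success/failure bookkeeping; the wrapper below is our own convenience pattern built on the calls shown above, not something the SDK ships.

import functools
import time

from horux_client import HoruxClient, HoruxConfig

def monitored_job(job_name):
    """Wrap a job function with duration, success, and failure tracking."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            client = HoruxClient(HoruxConfig.from_env())
            start = time.time()
            try:
                result = func(*args, **kwargs)
                client.metrics.counter("jobs.processed", 1,
                                       {"job": job_name, "status": "success"})
                return result
            except Exception as e:
                client.metrics.counter("jobs.processed", 1,
                                       {"job": job_name, "status": "failure"})
                client.logs.error(f"{job_name} failed", {"error": str(e)})
                raise
            finally:
                client.metrics.gauge("job.duration", time.time() - start,
                                     {"job": job_name})
                client.close()
        return wrapper
    return decorator

# Hypothetical job -- any function works the same way
@monitored_job("send_invoice")
def send_invoice(invoice_id):
    print(f"Sending invoice {invoice_id}...")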

Structured Logging

One of the best features of the SDK is structured logging. Instead of parsing text strings, you log JSON objects that are instantly searchable in Horux.

# Bad: Hard to parse programmatically
logging.info(f"User {user_id} purchased item {item_id} for ${price}")

# Good: Instantly queryable (e.g., 'price > 100')
client.logs.info("Purchase completed", {
    "user_id": user_id,
    "item_id": item_id,
    "price": price,
    "currency": "USD",
    "region": "us-east-1"
})
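
A small pattern that pays off quickly: keep a base context dict of fields you want on every log line and merge it into each call. This is plain Python, not an SDK feature, and the field values below are just placeholders.

# Fields you want attached to every log line
base_context = {
    "service": "checkout",
    "version": "1.4.2",
    "region": "us-east-1",
}

client.logs.info("Purchase completed", {
    **base_context,
    "user_id": user_id,
    "item_id": item_id,
    "price": price,
})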

Next Steps

The Python SDK is open source and available on PyPI. Check out the full documentation for advanced features like:

  • Configuring batch sizes and flush intervals
  • Adding global instance metadata (e.g., Kubernetes pod names)
  • Using the SDK with Django
  • Testing utilities

Ready to start monitoring?