Now in Private Beta
You built the agent.
Now make it operable.
Waxell Observe is a lightweight Python package that brings governance to any AI agent. Track every LLM call, enforce cost limits, and apply runtime policy — with two lines of code and zero changes to your agent.
What Waxell Observe Is
Waxell Observe is a Python package that auto-instruments your existing agents — OpenAI, Anthropic, LiteLLM, Groq, HuggingFace — and adds operational structure without changing how your agent works. Two lines at startup. No migration. No rewrite. No vendor lock-in.
It intercepts LLM calls at the provider level, which means it works regardless of the framework above it. Your agent keeps running the way you built it. Waxell makes it visible, bounded, and controllable.
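Provider-level interception of this kind is usually done by wrapping the client's completion method so every call passes through a tracking and policy layer. A minimal, self-contained sketch of the idea, with all names illustrative (this is not Waxell's actual internals):

```python
import functools

class FakeProviderClient:
    """Stand-in for an LLM provider client (illustrative only)."""
    def complete(self, prompt: str) -> dict:
        return {"text": f"echo: {prompt}", "cost_usd": 0.002}

calls = []               # captured call records
COST_LIMIT_USD = 0.005   # illustrative cost ceiling

def instrument(client):
    """Wrap the provider method so every call is tracked and cost-bounded."""
    original = client.complete

    @functools.wraps(original)
    def wrapper(prompt):
        spent = sum(c["cost_usd"] for c in calls)
        if spent >= COST_LIMIT_USD:
            # Runtime policy: refuse the call once the budget is exhausted.
            raise RuntimeError("cost limit exceeded")
        result = original(prompt)
        calls.append({"prompt": prompt, "cost_usd": result["cost_usd"]})
        return result

    client.complete = wrapper
    return client

client = instrument(FakeProviderClient())
client.complete("hello")
client.complete("world")
```

Because the wrapping happens at the client boundary rather than in agent code, any framework built on top of the client sees the same behavior, which is the point of the framework-agnostic claim above.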

How It Works
Two lines to start. One decorator to go deeper.
01 Install the package
Add Waxell Observe to your existing Python project. Nothing else changes.
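Assuming the package is published under the name shown below (the exact PyPI name may differ; check the beta documentation):

```shell
pip install waxell-observe  # package name assumed, not confirmed by this page
```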
02 Initialize — two lines
Call init() once at startup. Every OpenAI and Anthropic call is now auto-instrumented. Your agent code stays untouched.
03 Add structure with @observe
For function-level tracing, add the decorator. Get automatic run tracking, IO capture, and policy enforcement per function — no refactor required.
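The steps above can be sketched end to end. The decorator body below is a generic tracing decorator standing in for the real `@observe`; the run-record fields and names are assumptions, shown only to make the pattern concrete:

```python
import functools
import time
import uuid

RUNS = []  # captured run records (stand-in for Waxell's control plane)

def observe(fn):
    """Illustrative function-level tracing decorator (not Waxell's real code):
    records a run id, inputs, output, and duration for each call."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        run = {
            "run_id": str(uuid.uuid4()),
            "function": fn.__name__,
            "inputs": {"args": args, "kwargs": kwargs},
        }
        start = time.perf_counter()
        try:
            run["output"] = fn(*args, **kwargs)
            return run["output"]
        finally:
            run["duration_s"] = time.perf_counter() - start
            RUNS.append(run)
    return wrapper

@observe
def summarize(text: str) -> str:
    # The agent function itself stays unchanged apart from the decorator.
    return text[:10]

summarize("a long document body")
```

The decorator pattern is what lets tracing and IO capture attach per function with no refactor: the function body never changes, only the line above it.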
What You Get
Operational structure for autonomous systems.
Waxell Observe replaces ad-hoc logging and manual oversight with a real governance layer. Everything your agents do — tracked, scored, and governed from a single control plane.

