Now in Private Beta

You built the agent.

Now make it operable.

Waxell Observe is a lightweight Python package that brings governance to any AI agent. Track every LLM call, enforce cost limits, and apply runtime policy — with two lines of code and zero changes to your agent.

the problem

Agents are running. Nobody is watching.

You shipped the agent. It calls models, burns tokens, triggers actions, and touches production data. It runs continuously. And right now, you have no structured way to see what it is doing, what it is costing, or when it should stop.


Most agent frameworks optimize for building. Almost none optimize for operating. The moment your agent moves from experiment to production, you inherit a set of problems that no amount of prompt engineering will solve.

What Waxell Observe Is

A governance layer that attaches to your agent at runtime.

Waxell Observe is a Python package that auto-instruments your existing agents — OpenAI, Anthropic, LiteLLM, Groq, HuggingFace — and adds operational structure without changing how your agent works. Two lines at startup. No migration. No rewrite. No vendor lock-in.


It intercepts LLM calls at the provider level, which means it works regardless of the framework above it. Your agent keeps running the way you built it. Waxell makes it visible, bounded, and controllable.
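Provider-level interception generally means wrapping the SDK's call method so every request is recorded on its way through. The sketch below illustrates that pattern with a toy stand-in client; it is not Waxell's actual implementation, and `ChatCompletions`, `instrument`, and `TRACES` are illustrative names only.

```python
import time
from functools import wraps

# Toy stand-in for a provider SDK client (illustration only, not a real SDK).
class ChatCompletions:
    def create(self, model, messages):
        # Canned response standing in for a real provider call.
        return {"model": model, "content": "Hello!", "usage": {"total_tokens": 12}}

TRACES = []  # stand-in for a tracing backend

def instrument(cls, method_name):
    """Wrap a provider method so every call is recorded before passing through."""
    original = getattr(cls, method_name)

    @wraps(original)
    def wrapper(self, *args, **kwargs):
        start = time.perf_counter()
        response = original(self, *args, **kwargs)
        TRACES.append({
            "model": kwargs.get("model"),
            "tokens": response["usage"]["total_tokens"],
            "latency_s": time.perf_counter() - start,
        })
        return response

    setattr(cls, method_name, wrapper)

instrument(ChatCompletions, "create")

# Caller code is unchanged; the call is traced transparently.
client = ChatCompletions()
client.create(model="gpt-4o", messages=[{"role": "user", "content": "Hi"}])
```

Because the wrapper sits on the provider class itself, any framework calling through that class is traced without knowing the wrapper exists, which is why this approach is framework-agnostic.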

COMPATIBILITY

Bring your agent. However you built it.

Waxell Observe operates at the LLM provider level, beneath your framework. It does not care how your agent is structured, what orchestration layer you use, or how many abstractions sit above it.

LangChain

LlamaIndex

CrewAI

LiteLLM

Your Framework

How It Works

Two lines to start. One decorator to go deeper.

01 Install the package

Add Waxell Observe to your existing Python project. Nothing else changes.

$ pip install waxell-observe

02 Initialize — two lines

Call init() once at startup. Every OpenAI and Anthropic call is now auto-instrumented. Your agent code stays untouched.

import waxell_observe
waxell_observe.init(api_key="wax_sk_...", api_url="https://waxell.dev")

# Import LLM SDKs AFTER init(); they're now auto-instrumented
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)
# Automatically traced: model, tokens, cost, latency

03 Add structure with @observe

For function-level tracing, add the decorator. Get automatic run tracking, IO capture, and policy enforcement per function — no refactor required.

from waxell_observe import observe

@observe(agent_name="support-bot")
async def handle_ticket(query: str) -> str:
    response = await call_llm(query)
    return response

# Every call now creates a tracked execution run
# with inputs, outputs, status, and policy enforcement
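A run-tracking decorator like this typically records inputs, outputs, status, and duration around each call. The sketch below shows the general shape of that technique; `observe_sketch` and the `RUNS` list are hypothetical stand-ins, not Waxell's real decorator or backend.

```python
import asyncio
import functools
import time
import uuid

RUNS = []  # stand-in for a run-tracking backend (illustration only)

def observe_sketch(agent_name):
    """Hypothetical sketch of a run-tracking decorator: records inputs,
    outputs, status, and duration for each call of the wrapped coroutine."""
    def decorator(fn):
        @functools.wraps(fn)
        async def wrapper(*args, **kwargs):
            run = {
                "id": str(uuid.uuid4()),
                "agent": agent_name,
                "inputs": {"args": args, "kwargs": kwargs},
                "status": "running",
            }
            start = time.perf_counter()
            try:
                result = await fn(*args, **kwargs)
                run.update(status="completed", output=result)
                return result
            except Exception as exc:
                run.update(status="failed", error=repr(exc))
                raise
            finally:
                run["duration_s"] = time.perf_counter() - start
                RUNS.append(run)  # record the run whether it succeeded or failed
        return wrapper
    return decorator

@observe_sketch(agent_name="support-bot")
async def handle_ticket(query: str) -> str:
    return f"resolved: {query}"

asyncio.run(handle_ticket("password reset"))
```

The `try/finally` shape is what lets a decorator like this capture failed runs as well as successful ones, which is what makes the resulting run log useful for debugging.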

what you get

Operational structure for autonomous systems.

Waxell Observe replaces ad-hoc logging and manual oversight with a real governance layer. Everything your agents do — tracked, scored, and governed from a single control plane.

LLM CALL TRACKING

Every LLM interaction recorded — model, token counts, cost estimates, latency, and prompt/response previews. Browse, filter, and inspect in the dashboard.

SESSIONS & USER TRACKING

Group related runs by session. Track per-user identity, usage patterns, and cost attribution. Conversation-level analytics out of the box.
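Grouping runs by session and attributing usage per user reduces to aggregating run records over two keys. This is a minimal sketch of that aggregation with made-up run data; the field names (`session_id`, `user_id`, `tokens`) are assumptions for illustration.

```python
from collections import defaultdict

# Hypothetical run records: session_id groups related runs, user_id attributes usage.
runs = [
    {"session_id": "s1", "user_id": "u1", "tokens": 120},
    {"session_id": "s1", "user_id": "u1", "tokens": 340},
    {"session_id": "s2", "user_id": "u2", "tokens": 90},
]

tokens_per_session = defaultdict(int)
tokens_per_user = defaultdict(int)
for run in runs:
    tokens_per_session[run["session_id"]] += run["tokens"]
    tokens_per_user[run["user_id"]] += run["tokens"]

print(dict(tokens_per_session))  # {'s1': 460, 's2': 90}
```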


PROMPT MANAGEMENT

Version-controlled prompts with labels, a playground for testing, and SDK retrieval for production. Manage prompts as infrastructure, not strings.
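The core idea behind label-based prompt versioning is that versions are immutable and labels are movable pointers to a version. A minimal sketch of that data model, assuming a hypothetical `PromptStore` (not Waxell's API):

```python
class PromptStore:
    """Sketch of label-based prompt versioning: versions are immutable,
    labels are movable pointers to a specific version."""

    def __init__(self):
        self.versions = {}  # (name, version) -> prompt text
        self.labels = {}    # (name, label) -> version number

    def push(self, name, text, label=None):
        # Next version number for this prompt name; versions are never overwritten.
        version = 1 + max((v for n, v in self.versions if n == name), default=0)
        self.versions[(name, version)] = text
        if label:
            self.labels[(name, label)] = version
        return version

    def get(self, name, label="production"):
        # Production code retrieves by label, so deploys are a label move, not a code change.
        return self.versions[(name, self.labels[(name, label)])]

store = PromptStore()
store.push("greet", "Hello {name}!", label="production")
store.push("greet", "Hi {name}, how can I help?")  # new draft; production label unchanged
print(store.get("greet"))  # Hello {name}!
```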


COST MANAGEMENT

Per-model usage breakdowns, per-user cost attribution, and custom pricing overrides. Know exactly what your agents are spending and where.
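Cost estimation from token counts is a straightforward lookup-and-multiply, with overrides layered on top of default pricing. The sketch below uses illustrative per-million-token prices, not any provider's actual rates, which vary and change over time.

```python
# Illustrative per-million-token prices in USD (made up for this example).
PRICES = {"gpt-4o": {"input": 2.50, "output": 10.00}}

def estimate_cost(model, input_tokens, output_tokens, overrides=None):
    """Estimate a call's cost in USD from token counts,
    honoring custom pricing overrides when present."""
    table = (overrides or {}).get(model) or PRICES[model]
    return (input_tokens * table["input"] + output_tokens * table["output"]) / 1_000_000

print(round(estimate_cost("gpt-4o", 1_000, 500), 6))  # 0.0075
```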


POLICY ENFORCEMENT

Pre-execution and mid-execution checks with allow, block, warn, and throttle actions. Constraints applied during runtime — not after the fact.
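A pre-execution check of this kind evaluates the run's context against a constraint and returns an action before the agent proceeds. Here is a minimal sketch using a budget constraint; the `Decision` shape and thresholds are assumptions for illustration, not Waxell's policy engine.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str       # one of: "allow", "block", "warn", "throttle"
    reason: str = ""

def check_policy(spend_today_usd, daily_budget_usd):
    """Hypothetical pre-execution check: compare accumulated spend
    against a daily budget before letting the next call through."""
    if spend_today_usd >= daily_budget_usd:
        return Decision("block", "daily budget exhausted")
    if spend_today_usd >= 0.8 * daily_budget_usd:
        return Decision("warn", "over 80% of daily budget")
    return Decision("allow")

decision = check_policy(spend_today_usd=8.50, daily_budget_usd=10.00)
print(decision.action)  # warn
```

Running the same check again mid-execution, after each call updates the spend counter, is what turns a static limit into a runtime constraint.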


SCORING & EVALUATION

Capture quality scores via SDK, UI annotations, or automated LLM-as-Judge evaluators. Build human review workflows with annotation queues.


Waxell Observe is in private beta.

We are onboarding teams that are running agents in production — or preparing to. If you have an agent and want to operate it with real governance, request access below.

Waxell

Waxell provides a governance and orchestration layer for building and operating autonomous agent systems in production.

Product

Company

Follow Us

© 2026 Waxell. All rights reserved.

Patent Pending.
