Try the Luthien Proxy Alpha
The Luthien Proxy is now available in alpha. It's an LLM proxy compatible with Claude Code and the OpenAI Codex CLI that brings Redwood-style AI Control to production agentic deployments.
Prerequisites: Docker, Python 3.13+, and the uv package manager
Get started in minutes:
# Clone and configure
git clone https://github.com/LuthienResearch/luthien-proxy
cd luthien-proxy
cp .env.example .env
# Add OPENAI_API_KEY and ANTHROPIC_API_KEY to .env
# Start the proxy (make sure Docker is running first)
./scripts/quick_start.sh
# Test it works
./scripts/test_gateway.sh
# Launch Claude Code or Codex with the proxy
./scripts/launch_claude_code.sh
# or
./scripts/launch_codex.sh
Verify Everything Is Working
Once the proxy is running, navigate to http://localhost:8000/activity/monitor to see the activity monitor. This real-time dashboard shows all requests flowing through the proxy, policy decisions, and detailed event streams. As you interact with Claude Code or Codex, you'll see requests, policy events, and responses appear in the monitor.
You can also visit http://localhost:8000/policy-config to switch between policies without restarting the proxy. Changes take effect immediately.
The proxy provides policy orchestration, decision logic, and full observability for your AI systems, giving you complete control over requests and responses in flight.
Policies
Policies let you monitor, modify, block, or enhance any aspect of LLM interactions. For most use cases, we recommend using SimpleJudgePolicy, which provides a declarative, rules-based interface with automatic LLM judge logic.
Switch Policies at Runtime
The easiest way to try different policies is through the web UI at localhost:8000/policy-config. Browse available policies, select one to activate, and changes take effect immediately—no restart required.
You can also use the Admin API to switch policies programmatically. This is especially useful for live-updating policies from within a Claude Code session:
# Get current active policy
curl http://localhost:8000/admin/policy/current \
  -H "Authorization: Bearer admin-dev-key"

# List available policy classes
curl http://localhost:8000/admin/policy/list \
  -H "Authorization: Bearer admin-dev-key"

# Create and activate a new policy instance
curl -X POST http://localhost:8000/admin/policy/create \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer admin-dev-key" \
  -d '{
    "name": "my-judge-policy",
    "policy_class_ref": "luthien_proxy.policies.tool_call_judge_v3:ToolCallJudgeV3Policy",
    "config": {
      "model": "openai/gpt-4o-mini",
      "probability_threshold": 0.99
    }
  }'

curl -X POST http://localhost:8000/admin/policy/activate \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer admin-dev-key" \
  -d '{"name": "my-judge-policy"}'
Watch the activity monitor at localhost:8000/activity/monitor to see policy events as requests flow through.
Implement Your Own Policy
Create your policy class in src/luthien_proxy/policies/my_policy.py:
from luthien_proxy.policies.simple_judge_policy import SimpleJudgePolicy

class MyPolicy(SimpleJudgePolicy):
    """Block dangerous commands before they execute."""

    RULES = [
        "Never allow 'rm -rf' commands",
        "Block requests to delete production data",
        "Prevent executing untrusted code",
    ]
Key concepts:
- SimpleJudgePolicy handles LLM judge logic automatically for requests, responses, and tool calls
- Define your safety rules declaratively in the RULES list
- The framework evaluates each rule against incoming requests and tool calls
- Dangerous operations are blocked before they can execute
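To load your policy, the create and activate calls shown earlier work unchanged; only the policy_class_ref differs. A sketch using the requests library, assuming the file location above and that an empty config is accepted (the instance name "my-policy" is arbitrary):

import requests

# Register and activate MyPolicy via the Admin API (same endpoints and
# dev key as the curl examples above).
HEADERS = {"Authorization": "Bearer admin-dev-key"}
requests.post(
    "http://localhost:8000/admin/policy/create",
    headers=HEADERS,
    json={
        "name": "my-policy",
        "policy_class_ref": "luthien_proxy.policies.my_policy:MyPolicy",
        "config": {},  # assumption: the rules-based policy needs no extra config here
    },
)
requests.post(
    "http://localhost:8000/admin/policy/activate",
    headers=HEADERS,
    json={"name": "my-policy"},
)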
See the available policies in src/luthien_proxy/policies/:
- noop.py — Pass-through (no modification)
- uppercase_nth_word.py — Transformation example
- tool_call_judge_v3.py — AI-based safety evaluation
Advanced: AI-Supervised Tool Calls
For production deployments requiring stronger safety guarantees, the tool_call_judge_v3.py policy demonstrates advanced streaming capabilities and local LLM review. It intercepts tool calls as they arrive, sends them to a local language model for safety evaluation, and blocks dangerous operations before they execute, combining the speed of streaming with AI-powered review for real-time protection without sacrificing responsiveness.
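The core loop has roughly this shape. This is an illustrative sketch, not the actual tool_call_judge_v3 implementation; judge_probability below is a toy stand-in for the policy's call to its configured judge model, and the threshold mirrors the probability_threshold from the config shown earlier:

def judge_probability(model: str, tool_call: str) -> float:
    """Toy stand-in: the real policy queries the configured judge model
    for a harm probability in [0, 1]."""
    return 1.0 if "rm -rf" in tool_call else 0.0

def review_tool_call(tool_call: str, model: str = "openai/gpt-4o-mini",
                     probability_threshold: float = 0.99) -> bool:
    """Return True to allow the tool call, False to block it before it
    executes; runs as each tool call is parsed out of the stream."""
    p_harmful = judge_probability(model, tool_call)
    return p_harmful < probability_threshold

# A destructive command is intercepted and blocked; a benign one passes.
print(review_tool_call('{"command": "rm -rf /"}'))  # False -> blocked
print(review_tool_call('{"command": "ls -la"}'))    # True  -> allowed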
Early Development
Luthien Proxy is in active development and subject to rapid change. We encourage you to star the repository to follow updates and open issues for bugs, feature requests, or questions. Your feedback helps shape the future of production AI Control.