What is Vela?
Vela is an observability platform for AI agents. It traces every LLM call your agents make — capturing latency, cost, inputs, outputs, and errors — so you can debug, optimize, and govern your AI pipelines in production.
Platform Overview
Everything Vela does — from ingestion to governance — and what's coming next.
CLI — The Fastest Way to Start
Zero code changes required. The Vela CLI wraps your existing agent script and traces every LLM call automatically.
Install via Homebrew (Mac)
brew tap bowensam155/vela
brew install vela
Login and trace
vela login
vela wrap python your_agent.py
vela wrap instruments your agent automatically — no code changes required. Works with any Python script that makes LLM calls.
vela init
Configure your API key interactively. Stores the key in ~/.vela/config.json.
$ vela init
Enter your API key: vela_••••••••
✓ Key verified. Config saved to ~/.vela/config.json
vela trace
Send a trace from the command line:
$ vela trace --agent "my-agent" --model "gpt-4o" \
    --input "Hello" --output "Hi there"
✓ Trace sent · a3f2bc91
vela status
Check your agent fleet health at a glance:
$ vela status
Agent Fleet Status
──────────────────
Healthy:   12
Degraded:   2
Failing:    0
──────────────────
Health Score: 94.2%
vela guard
View active guard policies:
$ vela guard
Active Policies
─────────────────────────────
✓ PII Detection      (flag)
✓ Prompt Injection   (block)
✓ Jailbreak Blocking (block)
Quick Start
Get your first agent traced in under 5 minutes.
Prerequisites
Python 3.8+ or Node.js 16+. No other dependencies required.
# Check if Python is installed
python3 --version

# If not installed, install via Homebrew:
brew install python3

# Verify pip is available
pip3 --version
1. Install
pip3 install vela-sdk

Alternative: install via Homebrew (recommended for Mac):
brew tap bowensam155/vela
brew install vela
Or for JavaScript/TypeScript:
npm install vela-sdk

Requires Node.js 16+. Install from nodejs.org if needed. Or with yarn:

yarn add vela-sdk
2. Initialize
Get your API key from Dashboard → Settings or during onboarding.
import vela

vela.init(api_key="YOUR_API_KEY")
3. Send your first trace
Complete working example (no LLM API key needed):
import vela
import time

vela.init(api_key="YOUR_API_KEY")

@vela.trace(model="gpt-4o-mini")
def my_agent(query):
    # Simulated agent — replace with your real LLM call
    time.sleep(0.5)
    return f"Response to: {query}"

vela.new_session(label="test-pipeline")
result = my_agent("Test query")
print(result)
vela.end_session()
print("Check your dashboard at vela.wtf/dashboard")
4. Verify
Open your dashboard — your trace should appear within seconds.
Python SDK
Installation
pip3 install vela-sdk

Requires Python 3.8+. On fresh Macs, use pip3 (not pip).
Initialization
import vela

vela.init(
    api_key="YOUR_API_KEY",       # Required
    environment="production",     # Optional: tag traces
    base_url="https://vela.wtf",  # Optional: self-hosted
    debug=False,                  # Optional: verbose logging
)
Basic tracing
Wrap any function with @vela.trace to capture inputs, outputs, latency, and cost automatically.
import openai

@vela.trace(model="gpt-4o")
def research_agent(query: str) -> str:
    response = openai.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": query}],
    )
    return response.choices[0].message.content
Manual tracing
Use vela.trace() directly when you need full control:
vela.trace(
    agent="summarizer",
    model="gpt-4o",
    input="Summarize this document...",
    output="The document covers...",
    latency_ms=420,
    cost=0.0034,
    metadata={"doc_id": "abc123", "version": 2},
)
Session grouping
Group related traces into a session for end-to-end visibility:
with vela.session(label="customer-support-pipeline") as session:
    research = research_agent("How to reset API key?")
    response = writer_agent(research)
    # All traces within this block share the same session ID
Custom metadata
Attach arbitrary key-value pairs to any trace:
@vela.trace(model="gpt-4o", metadata={"customer_id": "usr_123", "tier": "enterprise"})
def support_agent(query):
    ...
Async support
import asyncio

async def main():
    await vela.trace_async(
        agent="async-agent",
        model="gpt-4o",
        input="Hello",
        output="Hi there",
    )

asyncio.run(main())
Error handling
Traces are sent in the background and never raise exceptions in your application code. If a trace fails to send, it is retried up to 3 times with exponential backoff. Failed traces are logged to stderr when debug=True.
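The retry behavior described above can be sketched like this. This is an illustrative sketch, not the SDK's actual source; `send_with_retry`, `base_delay`, and the delay schedule are hypothetical names and values.

```python
import time

def send_with_retry(send, payload, max_retries=3, base_delay=0.5):
    """Attempt delivery, retrying up to max_retries times with
    exponential backoff between attempts (base_delay, 2x, 4x, ...)."""
    for attempt in range(max_retries + 1):
        try:
            return send(payload)
        except Exception:
            if attempt == max_retries:
                # Out of retries; a real client would log this
                # to stderr when debug=True instead of raising
                raise
            time.sleep(base_delay * (2 ** attempt))
```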
Environment variables
You can set these instead of passing arguments to vela.init():
export VELA_API_KEY="your_api_key"
export VELA_ENVIRONMENT="production"
JavaScript / TypeScript SDK
Installation
npm install vela-sdk
# or with yarn
yarn add vela-sdk

Requires Node.js 16+. Install from nodejs.org if needed. Works with Bun and Deno too. Ships as ESM and CommonJS.
ESM import
import { Vela } from 'vela-sdk';

const vela = new Vela({ apiKey: 'YOUR_API_KEY' });
CommonJS import
const { Vela } = require('vela-sdk');

const vela = new Vela({ apiKey: 'YOUR_API_KEY' });
Configuration options
const vela = new Vela({
  apiKey: 'YOUR_API_KEY',       // Required
  environment: 'production',    // Optional
  baseUrl: 'https://vela.wtf',  // Optional: self-hosted
  debug: false,                 // Optional
});
Basic tracing
await vela.trace({
  agent: 'support-bot',
  model: 'gpt-4o',
  input: 'How do I reset my password?',
  output: 'Go to Settings > Security > Reset Password.',
  latencyMs: 340,
  cost: 0.003,
  metadata: { userId: 'usr_123' },
});
Session grouping
const session = vela.session({ label: 'data-pipeline' });

await session.trace({ agent: 'fetcher', model: 'gpt-4o', input: '...', output: '...' });
await session.trace({ agent: 'parser', model: 'gpt-4o', input: '...', output: '...' });

await session.end();
TypeScript types
All types are included. Key interfaces:
import type { TraceInput, VelaConfig, Session } from 'vela-sdk';

Express middleware
import express from 'express';
import { Vela } from 'vela-sdk';

const app = express();
const vela = new Vela({ apiKey: 'YOUR_API_KEY' });

app.use(vela.middleware()); // Auto-traces all LLM calls in request scope
Next.js API route wrapper
import { Vela } from 'vela-sdk';

const vela = new Vela({ apiKey: 'YOUR_API_KEY' });

export async function POST(req: Request) {
  return vela.wrap(async () => {
    // All LLM calls in this scope are automatically traced
    const result = await callMyAgent(await req.json());
    return Response.json(result);
  });
}
Environment variables
VELA_API_KEY=your_api_key
VELA_ENVIRONMENT=production
Vela Guard
Vela Guard is a real-time security layer that sits between your agents and the outside world. It inspects every trace for threats, PII, and policy violations — and can block, flag, or redirect them.
Enabling policies
Enable guard policies via the SDK or the dashboard:
# Python
vela.guard.enable("pii_detection")
vela.guard.enable("prompt_injection")
vela.guard.enable("jailbreak_blocking")
// JavaScript
await vela.guard.enable('pii_detection');
await vela.guard.enable('prompt_injection');
await vela.guard.enable('jailbreak_blocking');
Guard modes
Each policy can be set to one of three modes: block, flag, or redirect.
Custom guard policies
vela.guard.create_policy(
    name="Block competitor mentions",
    policy_type="content_filter",
    action="block",
    config={"keywords": ["competitor-name", "switch to"]},
)
Handling guard responses
When a trace is blocked, your agent receives a structured response:
{
  "blocked": true,
  "reason": "prompt_injection",
  "message": "Request blocked. Prompt injection attempt logged.",
  "policy_id": "pol_abc123"
}
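One minimal way to branch on this response in application code is shown below. This is an illustrative helper, not part of the SDK; `handle_agent_response` is a hypothetical name, and the field names come from the example above.

```python
def handle_agent_response(payload):
    """Route a guard response: return a safe fallback message when the
    trace was blocked, otherwise pass the agent's output through."""
    if payload.get("blocked"):
        return f"Request refused ({payload['reason']}); see policy {payload['policy_id']}"
    return payload["output"]
```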
PII entity types
Vela Guard detects and can redact the following PII types:
Vela Govern
Governance and compliance for AI agents. Every agent gets an Agent Passport — a living compliance profile that tracks risk, audit history, and framework adherence.
Agent Passports
An Agent Passport is automatically generated for every agent that sends traces. It includes risk tier, compliance status, and audit trail.
Risk tiers
Compliance frameworks
Vela maps your agent behavior to these compliance frameworks:
Generating reports
Navigate to Govern → Reports in the dashboard to generate compliance reports on demand. Reports can be exported as PDF.
API access
# Get compliance status for an agent
GET /api/govern/passport?agent=my-agent

# Response
{
  "agent": "my-agent",
  "risk_tier": "medium",
  "compliance": {
    "eu_ai_act": "compliant",
    "soc2": "compliant",
    "hipaa": "review_required"
  }
}
Fleet Dashboard
The Fleet Dashboard shows the health and status of every agent in your organization at a glance. Access it from the sidebar at /fleet.
Agent statuses
Filtering and sorting
Filter by status, model, environment, or label. Sort by latency, cost, error rate, or last active time. Click any agent row to see its full trace history and passport.
Mesh
Mesh is Vela's topology mapping feature. It visualizes how your agents call each other, detects circular dependencies, and measures blast radius.
How dependencies are detected
Vela infers agent-to-agent relationships from trace data. When agent A produces output that becomes agent B's input within the same session, Vela records the dependency edge in a graph database (Neo4j).
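The matching rule above can be sketched in a few lines. This is a simplified illustration using exact-substring matching; the function name and trace fields are assumptions for the sketch, not Vela's actual implementation.

```python
from collections import defaultdict

def infer_edges(traces):
    """Infer agent-to-agent dependency edges: within a session, if an
    earlier trace's output text appears inside a later trace's input,
    record an edge from the producer to the consumer."""
    by_session = defaultdict(list)
    for t in traces:
        by_session[t["session_id"]].append(t)
    edges = set()
    for session in by_session.values():
        session.sort(key=lambda t: t["ts"])  # chronological order
        for i, producer in enumerate(session):
            for consumer in session[i + 1:]:
                if producer["output"] and producer["output"] in consumer["input"]:
                    edges.add((producer["agent"], consumer["agent"]))
    return edges
```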
Mesh sync
Mesh topology is recomputed periodically and on demand. Trigger a manual sync from the Mesh dashboard or via API:
POST /api/mesh/sync
Authorization: Bearer your_api_key
Blast radius analysis
Blast radius analysis answers the question: "If this agent goes down, what else breaks?" Vela traverses the dependency graph to calculate impact scores for every agent.
GET /api/mesh/blast-radius?agent=my-agent

{
  "agent": "my-agent",
  "blast_radius": 4,
  "affected": ["writer", "editor", "reviewer", "publisher"],
  "total_agents": 12
}
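The traversal behind this endpoint can be sketched as a breadth-first search over the dependency graph. This is an illustrative sketch, not Vela's implementation; `graph` here maps each agent to its direct dependents.

```python
from collections import deque

def blast_radius(graph, agent):
    """BFS downstream from `agent` to collect every agent that would
    be affected (directly or transitively) by its failure."""
    affected, queue = set(), deque([agent])
    while queue:
        for dependent in graph.get(queue.popleft(), []):
            if dependent not in affected:
                affected.add(dependent)
                queue.append(dependent)
    affected.discard(agent)  # an agent is not in its own blast radius
    return affected
```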
Circular dependency detection
Vela flags circular dependencies (A → B → C → A) automatically. These appear as warnings in the Mesh dashboard and in fleet health scores.
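Cycle detection of this kind can be sketched with a depth-first search. Illustrative only, not Vela's graph-database query; fine for small graphs, though a production version would deduplicate rotations of the same cycle.

```python
def find_cycles(graph):
    """Find circular dependencies (e.g. A -> B -> C -> A) in an agent
    dependency graph. Returns each cycle as a list of agents ending
    where it began."""
    cycles = []

    def dfs(node, path, visiting):
        for nxt in graph.get(node, []):
            if nxt in visiting:
                # Back edge: the cycle is the path segment from nxt onward
                cycles.append(path[path.index(nxt):] + [nxt])
            else:
                dfs(nxt, path + [nxt], visiting | {nxt})

    for start in graph:
        dfs(start, [start], {start})
    return cycles
```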
Semantic Search
Search across all your traces using natural language. Powered by pgvector embeddings and Gemini.
How it works
When a trace is ingested, Vela generates an embedding vector from the input and output text using Google's Gemini embedding model. These vectors are stored in a pgvector column and indexed for fast similarity search.
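The similarity ranking can be illustrated with plain cosine similarity, the measure behind pgvector's cosine-distance operator (distance = 1 - similarity). A toy sketch: real embeddings have hundreds of dimensions and the search runs inside Postgres, not in Python.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = identical
    direction, 0.0 = orthogonal)."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms

def search(query_vec, traces, limit=10):
    """Rank stored traces by similarity of their embedding to the query."""
    scored = [(cosine_similarity(query_vec, t["embedding"]), t) for t in traces]
    scored.sort(key=lambda s: s[0], reverse=True)
    return [t for _, t in scored[:limit]]
```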
Using search
Use the search bar in the dashboard or press ⌘K to open the command palette. Type a natural language query like "customer asking about refunds" or "agent that processes invoices".
Search API
GET /api/search?q=customer+refund+request&limit=10
Authorization: Bearer your_api_key

{
  "results": [
    {
      "trace_id": "trc_abc123",
      "agent": "support-bot",
      "input": "Can I get a refund?",
      "similarity": 0.94
    }
  ]
}
Cost Forecasting
Vela projects your AI spend based on historical usage patterns to help you budget and catch anomalies.
How projections work
Vela uses a rolling window of your last 7–30 days of trace cost data to calculate a weighted trend. The forecast accounts for day-of-week patterns and organic growth, projecting costs 30, 60, and 90 days out.
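The weighted-trend idea can be sketched as follows. A simplified illustration, not the production model: the real forecast also accounts for day-of-week patterns, and `project_spend` is a hypothetical name.

```python
def project_spend(daily_costs, horizon_days=30):
    """Project future spend from a rolling window of daily costs,
    weighting recent days more heavily so an upward trend raises
    the projection above a naive average."""
    window = daily_costs[-30:]                     # rolling window, up to 30 days
    weights = list(range(1, len(window) + 1))      # newest day gets the largest weight
    weighted_daily = sum(c * w for c, w in zip(window, weights)) / sum(weights)
    return weighted_daily * horizon_days
```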
Setting budget caps
Set a monthly budget cap in Dashboard → Settings. When projected spend exceeds your cap, Vela alerts you. Optionally, enable auto-pause to halt non-critical agents when the cap is reached.
Cost anomaly alerts
Vela detects cost spikes — any day where spend exceeds 2x the trailing 7-day average triggers an anomaly alert. Alerts are visible in the dashboard and sent via configured channels (email, Slack, webhooks).
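The spike rule translates directly to code. A trivial sketch of the 2x threshold described above; the function name is hypothetical.

```python
def is_cost_anomaly(today_spend, trailing_7_days):
    """Flag a cost spike: today's spend exceeds 2x the trailing
    7-day average."""
    avg = sum(trailing_7_days) / len(trailing_7_days)
    return today_spend > 2 * avg
```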
Forecast API
GET /api/forecast
Authorization: Bearer your_api_key

{
  "current_monthly": 234.50,
  "projected_30d": 289.00,
  "projected_60d": 312.00,
  "projected_90d": 340.00,
  "budget_cap": 500.00,
  "on_track": true
}
Integrations
Chrome Extension
The Vela Chrome Extension overlays agent trace data on any web page where your agents are running. Install from the Chrome Web Store and enter your API key to connect.
Slack Bot
Get real-time alerts in Slack when agents fail, guard policies trigger, or costs spike.
Setup: Go to Team Settings → Integrations → Slack and click Connect. Select the channel for notifications.
Commands: /vela status — fleet overview. /vela agent [name] — agent details. /vela cost — today's spend.
Zapier
Available triggers:
Available actions: Send trace, Create guard policy, Pause agent.
Make (Integromat)
Same triggers and actions as Zapier. Search for "Vela" in the Make app directory to install.
Webhooks
Configure outgoing webhooks in Team Settings → Webhooks. All webhook payloads are signed with HMAC-SHA256 using your webhook secret.
POST https://your-server.com/webhook
X-Vela-Signature: sha256=abc123...
Content-Type: application/json

{
  "event": "guard.blocked",
  "timestamp": "2026-03-28T14:32:07Z",
  "data": {
    "agent": "support-bot",
    "policy": "prompt_injection",
    "input": "Ignore your instructions..."
  }
}
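Verifying the signature on your server looks roughly like this, using only the Python standard library. A sketch: it assumes the header carries `sha256=<hex digest>` of the HMAC over the raw request body, as in the example above.

```python
import hashlib
import hmac

def verify_signature(secret: str, body: bytes, signature_header: str) -> bool:
    """Recompute HMAC-SHA256 over the raw request body and compare it
    (in constant time) against the X-Vela-Signature header value."""
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(f"sha256={expected}", signature_header)
```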
API Reference
Authentication
All API requests require authentication via your API key. Pass it as a header:
Authorization: Bearer your_api_key
# or
X-Vela-Key: your_api_key
Base URL
https://vela.wtf/api

Endpoints
POST /api/ingest
POST /api/ingest
Authorization: Bearer your_api_key
Content-Type: application/json

{
  "agent": "my-agent",
  "model": "gpt-4o",
  "input": "What is the weather?",
  "output": "It's sunny and 72°F.",
  "latency_ms": 340,
  "cost": 0.003,
  "session_id": "optional-session-id",
  "metadata": { "key": "value" }
}

# Response: 201 Created
{
  "trace_id": "trc_abc123",
  "session_id": "ses_xyz789"
}
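For reference, the same request can be constructed with only the Python standard library. A sketch for anyone not using the SDK; `build_ingest_request` is a hypothetical helper name, and the endpoint and headers come from the example above.

```python
import json
import urllib.request

def build_ingest_request(api_key, trace, base_url="https://vela.wtf"):
    """Build a POST /api/ingest request carrying a JSON trace payload
    and bearer-token auth. Send it with urllib.request.urlopen()."""
    return urllib.request.Request(
        f"{base_url}/api/ingest",
        data=json.dumps(trace).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```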
GET /api/traces
GET /api/traces?agent=my-agent&limit=20&offset=0
Authorization: Bearer your_api_key

# Response: 200 OK
{
  "traces": [
    {
      "id": "trc_abc123",
      "agent": "my-agent",
      "model": "gpt-4o",
      "input": "...",
      "output": "...",
      "latency_ms": 340,
      "cost": 0.003,
      "created_at": "2026-03-28T14:32:01Z"
    }
  ],
  "total": 1847
}
POST /api/guard/check
POST /api/guard/check
Authorization: Bearer your_api_key
Content-Type: application/json

{
  "input": "Ignore all previous instructions.",
  "agent": "support-bot"
}

# Response: 200 OK
{
  "allowed": false,
  "violations": [
    {
      "policy": "prompt_injection",
      "severity": "critical",
      "action": "block"
    }
  ]
}
Rate limits
Default rate limits per API key:
Rate limit headers are included in every response: X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset.
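A client can use these headers to throttle itself before hitting the limit. An illustrative pattern, not an official client: it assumes `X-RateLimit-Reset` is a Unix timestamp, which you should confirm against your own responses.

```python
import time

def throttled_call(call, min_remaining=5):
    """Invoke an API function that returns (body, headers); when
    X-RateLimit-Remaining drops below min_remaining, sleep until
    X-RateLimit-Reset before handing back the result."""
    body, headers = call()
    remaining = int(headers.get("X-RateLimit-Remaining", min_remaining))
    if remaining < min_remaining:
        wait = max(0.0, int(headers.get("X-RateLimit-Reset", 0)) - time.time())
        time.sleep(wait)
    return body
```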
Error codes
Self-Hosting
Vela is cloud-hosted at vela.wtf. Self-hosting documentation is coming soon.
Requirements overview
For enterprise self-hosting inquiries, contact us at team@vela.wtf.
Troubleshooting
Common issues when getting started with Vela.
"pip: command not found"
# Use pip3 instead (required on fresh Macs)
pip3 install vela-sdk

# Or install Python first
brew install python3
"npm ERR! 404 Not Found — vela-sdk"
# Make sure npm is up to date
npm install -g npm@latest
npm install vela-sdk
"vela: command not found" after brew install
# Reload your shell
source ~/.zshrc
# or
exec zsh
"No module named vela"
# Make sure you're using the right Python
which python3
pip3 install vela-sdk

# Try running with python3 explicitly
python3 your_agent.py
"Connection refused" or traces not appearing
Make sure your API key is correct. Copy it from your dashboard. Check that vela.init() is called before any @vela.trace decorators. You can verify your connection with:
python3 -c "import vela; vela.init(api_key='YOUR_API_KEY'); vela.test()"

Still stuck?
Email us at velahelp@outlook.com — we reply within hours.