TraceLM acts as a transparent proxy for the OpenAI API while adding observability features. All OpenAI endpoints are supported with the same request/response format.

Base URL

https://api.tracelm.ai
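
Because TraceLM mirrors the OpenAI request/response format, an existing OpenAI client can usually be pointed at this base URL directly. A minimal sketch in Python, assuming the official openai package (1.x) and that TraceLM serves the OpenAI-style paths under /v1, as the proxy endpoint table below suggests:

```python
import os

from openai import OpenAI

# Route the official OpenAI client through TraceLM by overriding the base URL.
# Assumption: TraceLM exposes the OpenAI-compatible paths under /v1.
client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],  # sent as Authorization: Bearer sk-...
    base_url="https://api.tracelm.ai/v1",
    default_headers={
        # TraceLM project key (lt_...); required on every request.
        "X-API-Key": os.environ["TRACELM_API_KEY"],
    },
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

The environment variable names are placeholders; load the keys from whatever secret store you use.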

Integration Methods

TraceLM SDK

Use our official SDKs for Python or TypeScript with built-in task tracking and detection.

Direct API

Make HTTP requests directly to TraceLM endpoints using any HTTP client.

Required Headers

All API requests require the following headers:
| Header | Description |
| --- | --- |
| Authorization | Your OpenAI API key in Bearer format: Bearer sk-... |
| X-API-Key | Your TraceLM project API key (starts with lt_) |
| Content-Type | Must be application/json |

Agent Observability Headers

These optional headers enable advanced agent monitoring features:
| Header | Type | Description |
| --- | --- | --- |
| X-Task-ID | string | Unique identifier for the current agent task. Groups related LLM calls together. |
| X-Conversation-ID | string | Unique identifier for the user conversation/session. Groups multiple tasks together. |
| X-User-ID | string | Your application’s user identifier for tracking user-specific traces. |

Example request with agent headers:
curl -X POST "https://api.tracelm.ai/v1/chat/completions" \
  -H "Authorization: Bearer sk-your-openai-key" \
  -H "X-API-Key: lt_your-tracelm-key" \
  -H "X-Task-ID: task_abc123" \
  -H "X-Conversation-ID: conv_xyz789" \
  -H "X-User-ID: user_456" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
When using agent headers, TraceLM automatically detects execution loops, tool failures, and context issues.

API Endpoints

LLM Proxy

| Method | Endpoint | Description |
| --- | --- | --- |
| POST | /v1/chat/completions | Create a chat completion (OpenAI compatible) |

Tasks

| Method | Endpoint | Description |
| --- | --- | --- |
| GET | /api/v1/tasks | List tasks with filters |
| GET | /api/v1/tasks/{id} | Get task details |
| POST | /api/v1/tasks/{id}/detect | Run detection analysis |
| PUT | /api/v1/tasks/{id}/complete | Mark task as completed |
| PUT | /api/v1/tasks/{id}/fail | Mark task as failed |
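
As an illustration, the Tasks endpoints can be driven with any HTTP client. Here is a hypothetical sketch using Python's requests library; only the paths and methods come from the table above, while the status/limit query filters and the response shapes are assumptions, not documented guarantees:

```python
import os

import requests

BASE = "https://api.tracelm.ai"
HEADERS = {
    "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",  # required on all requests
    "X-API-Key": os.environ["TRACELM_API_KEY"],
}

# List tasks; the "status" and "limit" filters are illustrative assumptions.
resp = requests.get(f"{BASE}/api/v1/tasks", headers=HEADERS,
                    params={"status": "running", "limit": 10})
resp.raise_for_status()
print(resp.json())

# Run detection analysis on a task, then mark it completed.
task_id = "task_abc123"
requests.post(f"{BASE}/api/v1/tasks/{task_id}/detect", headers=HEADERS).raise_for_status()
requests.put(f"{BASE}/api/v1/tasks/{task_id}/complete", headers=HEADERS).raise_for_status()
```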

Conversations

| Method | Endpoint | Description |
| --- | --- | --- |
| GET | /api/v1/conversations | List conversations |
| GET | /api/v1/conversations/{id} | Get conversation details |
| GET | /api/v1/conversations/{id}/context-failures | Get context failures |
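
The same pattern applies to conversations. A short sketch fetching a conversation's context failures; the conversation ID is the placeholder from the header example above, and the response shape is an assumption:

```python
import os

import requests

BASE = "https://api.tracelm.ai"
HEADERS = {
    "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
    "X-API-Key": os.environ["TRACELM_API_KEY"],
}

conv_id = "conv_xyz789"  # placeholder conversation ID

# Any context failures TraceLM has recorded for this conversation.
failures = requests.get(f"{BASE}/api/v1/conversations/{conv_id}/context-failures",
                        headers=HEADERS)
failures.raise_for_status()
print(failures.json())
```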

Verification

| Method | Endpoint | Description |
| --- | --- | --- |
| POST | /api/v1/verification/verify/{trace_id} | Trigger verification |
| GET | /api/v1/verification/results/{trace_id} | Get verification results |
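
The endpoint split suggests an asynchronous flow: trigger verification, then read results. A sketch of one way to poll; the trace ID, the polling interval, and the assumption that a non-200 status means "not ready yet" are all illustrative:

```python
import os
import time

import requests

BASE = "https://api.tracelm.ai"
HEADERS = {
    "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
    "X-API-Key": os.environ["TRACELM_API_KEY"],
}

trace_id = "trace_abc123"  # hypothetical trace identifier

# Trigger verification for the trace.
requests.post(f"{BASE}/api/v1/verification/verify/{trace_id}",
              headers=HEADERS).raise_for_status()

# Poll for results; treating non-200 as "not ready yet" is an assumption.
for _ in range(10):
    resp = requests.get(f"{BASE}/api/v1/verification/results/{trace_id}",
                        headers=HEADERS)
    if resp.status_code == 200:
        print(resp.json())
        break
    time.sleep(2)
```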

Error Codes

| Code | Description | Resolution |
| --- | --- | --- |
| 400 | Bad request: invalid JSON or parameters | Check your request body and parameters |
| 401 | Invalid or missing API key | Verify your Authorization and X-API-Key headers |
| 403 | API key doesn’t have access | Check project permissions and API key scope |
| 404 | Resource not found | Verify the resource ID exists |
| 429 | Rate limit exceeded | Implement exponential backoff (see the sketch after this table) |
| 500 | Internal server error | Retry with backoff; contact support if persistent |
| 502/503 | Service temporarily unavailable | Retry after a short delay |
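
For the 429 and 5xx rows above, generic exponential backoff with jitter is usually enough. A sketch; the retry budget and delays are arbitrary choices, not TraceLM requirements:

```python
import random
import time

import requests

# Status codes worth retrying, per the error table above.
RETRYABLE = {429, 500, 502, 503}

def request_with_backoff(method, url, max_attempts=5, base_delay=1.0, **kwargs):
    """Retry 429/5xx responses with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        resp = requests.request(method, url, **kwargs)
        if resp.status_code not in RETRYABLE:
            return resp
        # Sleep 1s, 2s, 4s, ... plus up to 1s of jitter, then retry.
        time.sleep(base_delay * (2 ** attempt) + random.random())
    return resp  # give up and return the last response
```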

Rate Limits

TraceLM applies rate limits to ensure fair usage. Rate limit information is returned in response headers:
| Header | Description |
| --- | --- |
| X-RateLimit-Limit | Maximum requests allowed per window |
| X-RateLimit-Remaining | Requests remaining in current window |
| X-RateLimit-Reset | Unix timestamp when the rate limit resets |
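
A client can use these headers to throttle itself rather than run into 429s. A minimal sketch, assuming the headers are present on every response:

```python
import time

def wait_if_rate_limited(resp):
    """Sleep until the window resets once X-RateLimit-Remaining hits zero."""
    remaining = int(resp.headers.get("X-RateLimit-Remaining", 1))
    if remaining == 0:
        reset_at = int(resp.headers.get("X-RateLimit-Reset", 0))
        # X-RateLimit-Reset is a Unix timestamp; sleep until then.
        time.sleep(max(0.0, reset_at - time.time()))
```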