TraceLM acts as a transparent proxy for the OpenAI API while adding observability features. All OpenAI endpoints are supported with the same request/response format.
Base URL
All endpoints are served from the TraceLM base URL:
https://api.tracelm.ai
Integration Methods
Required Headers
All API requests require the following headers:
| Header | Description |
|---|---|
| Authorization | Your OpenAI API key in Bearer format: Bearer sk-... |
| X-API-Key | Your TraceLM project API key (starts with lt_) |
| Content-Type | Must be application/json |
Optional Agent Headers
These optional headers enable advanced agent monitoring features:
| Header | Type | Description |
|---|---|---|
| X-Task-ID | string | Unique identifier for the current agent task. Groups related LLM calls together. |
| X-Conversation-ID | string | Unique identifier for the user conversation/session. Groups multiple tasks together. |
| X-User-ID | string | Your application's user identifier for tracking user-specific traces. |
Example request with all agent headers set:

```bash
curl -X POST "https://api.tracelm.ai/v1/chat/completions" \
  -H "Authorization: Bearer sk-your-openai-key" \
  -H "X-API-Key: lt_your-tracelm-key" \
  -H "X-Task-ID: task_abc123" \
  -H "X-Conversation-ID: conv_xyz789" \
  -H "X-User-ID: user_456" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```
When using agent headers, TraceLM automatically detects execution loops, tool failures, and context issues.
API Endpoints
LLM Proxy
| Method | Endpoint | Description |
|---|---|---|
| POST | /v1/chat/completions | Create a chat completion (OpenAI compatible) |
Tasks
| Method | Endpoint | Description |
|---|---|---|
| GET | /api/v1/tasks | List tasks with filters |
| GET | /api/v1/tasks/{id} | Get task details |
| POST | /api/v1/tasks/{id}/detect | Run detection analysis |
| PUT | /api/v1/tasks/{id}/complete | Mark task as completed |
| PUT | /api/v1/tasks/{id}/fail | Mark task as failed |
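The task endpoints above can be driven with small URL helpers like the following sketch. The filter parameter names (status, limit) are illustrative assumptions, not documented TraceLM filters:

```python
import urllib.parse

BASE = "https://api.tracelm.ai"

def tasks_url(filters=None):
    """URL for listing tasks; pass filters as a dict of query params.

    The filter names used by callers (e.g. status, limit) are
    assumptions for illustration."""
    qs = urllib.parse.urlencode(filters or {})
    return f"{BASE}/api/v1/tasks" + (f"?{qs}" if qs else "")

def task_action(task_id, action):
    """Return (HTTP method, URL) for a task lifecycle endpoint,
    per the table above: detect is POST, complete/fail are PUT."""
    methods = {"detect": "POST", "complete": "PUT", "fail": "PUT"}
    return methods[action], f"{BASE}/api/v1/tasks/{task_id}/{action}"
```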
Conversations
| Method | Endpoint | Description |
|---|---|---|
| GET | /api/v1/conversations | List conversations |
| GET | /api/v1/conversations/{id} | Get conversation details |
| GET | /api/v1/conversations/{id}/context-failures | Get context failures |
Verification
| Method | Endpoint | Description |
|---|---|---|
| POST | /api/v1/verification/verify/{trace_id} | Trigger verification |
| GET | /api/v1/verification/results/{trace_id} | Get verification results |
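A typical flow is to trigger verification and then poll for results. This sketch assumes results are not available immediately and that the results endpoint returns a non-200 status until they are; the injected `request_fn` keeps the HTTP layer (and auth headers) out of the example:

```python
import time

def verify_and_wait(trace_id, request_fn, attempts=5, delay=1.0):
    """Trigger verification for a trace, then poll for results.

    request_fn(method, path) -> (status_code, body) is supplied by
    the caller; how long results take to appear is an assumption."""
    request_fn("POST", f"/api/v1/verification/verify/{trace_id}")
    for _ in range(attempts):
        status, body = request_fn(
            "GET", f"/api/v1/verification/results/{trace_id}")
        if status == 200:
            return body
        time.sleep(delay)  # results not ready yet; wait and retry
    raise TimeoutError(f"no verification results for {trace_id}")
```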
Error Codes
| Code | Description | Resolution |
|---|---|---|
| 400 | Bad request - invalid JSON or parameters | Check your request body and parameters |
| 401 | Invalid or missing API key | Verify your Authorization and X-API-Key headers |
| 403 | API key doesn’t have access | Check project permissions and API key scope |
| 404 | Resource not found | Verify the resource ID exists |
| 429 | Rate limit exceeded | Implement exponential backoff |
| 500 | Internal server error | Retry with backoff, contact support if persistent |
| 502/503 | Service temporarily unavailable | Retry after a short delay |
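The retry guidance in the table (backoff for 429 and 5xx) can be wrapped in a small helper. This is a generic exponential-backoff sketch, not TraceLM-specific; the retryable status set and jitter scheme are common conventions:

```python
import random
import time

RETRYABLE = {429, 500, 502, 503}

def with_backoff(send, max_retries=5, base=0.5):
    """Call send() -> (status, body), retrying retryable statuses with
    exponential backoff (base * 2**attempt, with jitter)."""
    for attempt in range(max_retries):
        status, body = send()
        if status not in RETRYABLE:
            return status, body
        # Jittered exponential delay before the next attempt.
        time.sleep(base * (2 ** attempt) * (1 + random.random()))
    return send()  # final attempt; return whatever comes back
```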
Rate Limits
TraceLM applies rate limits to ensure fair usage. Rate limit information is returned in response headers:
| Header | Description |
|---|---|
| X-RateLimit-Limit | Maximum requests allowed per window |
| X-RateLimit-Remaining | Requests remaining in the current window |
| X-RateLimit-Reset | Unix timestamp when the rate limit resets |
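Clients can read these headers to pause before the window resets rather than waiting to be throttled. A minimal sketch, assuming the header values arrive as strings:

```python
import time

def seconds_until_reset(headers, now=None):
    """Return how long to wait before the next request: 0 if requests
    remain in the window, otherwise the seconds until the Unix
    timestamp in X-RateLimit-Reset."""
    if int(headers.get("X-RateLimit-Remaining", 1)) > 0:
        return 0.0
    reset = int(headers.get("X-RateLimit-Reset", 0))
    return max(0.0, reset - (now if now is not None else time.time()))
```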