POST https://api.tracelm.ai/v1/chat/completions
Chat Completions
curl --request POST \
  --url https://api.tracelm.ai/v1/chat/completions \
  --header 'Authorization: Bearer <openai-api-key>' \
  --header 'X-API-Key: <tracelm-api-key>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "<string>",
  "messages": [
    {}
  ],
  "temperature": 1,
  "max_tokens": 123,
  "stream": true,
  "tools": [
    {}
  ]
}
'
{
  "id": "<string>",
  "object": "<string>",
  "created": 123,
  "model": "<string>",
  "choices": [
    {}
  ],
  "usage": {}
}
Creates a chat completion. This endpoint is fully compatible with the OpenAI Chat Completions API, with automatic tracing added.

Request

model
string
required
The model to use for the completion (e.g., gpt-4o-mini, gpt-4o, gpt-4-turbo)
messages
array
required
An array of messages in the conversation. Each message has:
  • role: system, user, assistant, or tool
  • content: The message content
temperature
number
Sampling temperature (0-2). Higher values make output more random.
max_tokens
integer
Maximum number of tokens to generate.
stream
boolean
Whether to stream the response. Defaults to false.
tools
array
A list of tools the model may call. See Working with Tool Calls.
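Assembled in code, a request body built from the parameters above might look like the following Python sketch (the `build_request` helper is illustrative, not part of any SDK):

```python
def build_request(model, messages, temperature=None, max_tokens=None,
                  stream=False, tools=None):
    """Assemble a Chat Completions request body from the documented parameters."""
    if temperature is not None and not 0 <= temperature <= 2:
        raise ValueError("temperature must be between 0 and 2")
    body = {"model": model, "messages": messages}
    # Optional fields are omitted entirely when unset, rather than sent as null.
    if temperature is not None:
        body["temperature"] = temperature
    if max_tokens is not None:
        body["max_tokens"] = max_tokens
    if stream:
        body["stream"] = True
    if tools:
        body["tools"] = tools
    return body

body = build_request(
    "gpt-4o-mini",
    [{"role": "user", "content": "Hello"}],
    temperature=0.7,
)
```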

Agent Headers (Optional)

X-Task-ID
string
Unique identifier for the current agent task.
X-Conversation-ID
string
Unique identifier for the user conversation/session.
X-User-ID
string
Your application’s user identifier.
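Combining the standard auth headers from the curl examples with these optional agent headers can be sketched in Python as follows (`build_headers` is an illustrative helper; the environment variable names follow the curl examples):

```python
import os

def build_headers(task_id=None, conversation_id=None, user_id=None):
    """Standard auth headers plus the optional agent headers."""
    headers = {
        "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
        "X-API-Key": os.environ.get("TRACELM_API_KEY", ""),
        "Content-Type": "application/json",
    }
    # Agent headers are only sent when a value is supplied.
    if task_id:
        headers["X-Task-ID"] = task_id
    if conversation_id:
        headers["X-Conversation-ID"] = conversation_id
    if user_id:
        headers["X-User-ID"] = user_id
    return headers

headers = build_headers(task_id="task_abc123", user_id="user_456")
```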

Response

id
string
Unique identifier for the completion.
object
string
Always chat.completion (streamed chunks use chat.completion.chunk).
created
integer
Unix timestamp of when the completion was created.
model
string
The model used for the completion.
choices
array
An array of completion choices. Each choice has:
  • index: The index of the choice
  • message: The generated message
  • finish_reason: Why the model stopped (stop, length, tool_calls)
usage
object
Token usage information:
  • prompt_tokens: Tokens in the prompt
  • completion_tokens: Tokens in the completion
  • total_tokens: Total tokens used
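Pulling the useful fields out of a non-streaming response is straightforward; a Python sketch (`summarize` is an illustrative helper, and the sample dict mirrors the example response on this page):

```python
def summarize(resp):
    """Extract the generated text, stop reason, and token usage from a response."""
    choice = resp["choices"][0]
    return {
        "content": choice["message"]["content"],
        "finish_reason": choice["finish_reason"],
        "total_tokens": resp["usage"]["total_tokens"],
    }

resp = {
    "id": "chatcmpl-abc123",
    "object": "chat.completion",
    "created": 1677858242,
    "model": "gpt-4o-mini",
    "choices": [{
        "index": 0,
        "message": {"role": "assistant",
                    "content": "The capital of France is Paris."},
        "finish_reason": "stop",
    }],
    "usage": {"prompt_tokens": 25, "completion_tokens": 8, "total_tokens": 33},
}
result = summarize(resp)
```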

Examples

Basic Request

curl -X POST "https://api.tracelm.ai/v1/chat/completions" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "X-API-Key: $TRACELM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "What is the capital of France?"}
    ],
    "temperature": 0.7
  }'

Response

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1677858242,
  "model": "gpt-4o-mini",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The capital of France is Paris."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 25,
    "completion_tokens": 8,
    "total_tokens": 33
  }
}

With Tool Calls

curl -X POST "https://api.tracelm.ai/v1/chat/completions" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "X-API-Key: $TRACELM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "What is the weather in NYC?"}],
    "tools": [
      {
        "type": "function",
        "function": {
          "name": "get_weather",
          "description": "Get weather for a location",
          "parameters": {
            "type": "object",
            "properties": {
              "location": {"type": "string"}
            },
            "required": ["location"]
          }
        }
      }
    ]
  }'
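When the model decides to call a tool, finish_reason is tool_calls and the assistant message carries a tool_calls array; your code runs the function and sends the result back as a tool message. A Python sketch of that loop's client side (`handle_tool_calls` and the sample response values are illustrative; the tool_calls structure follows the OpenAI format this endpoint is compatible with):

```python
import json

def handle_tool_calls(response, handlers):
    """Turn a tool_calls response into the follow-up messages for the next request."""
    choice = response["choices"][0]
    messages = [choice["message"]]  # echo the assistant message back first
    if choice["finish_reason"] == "tool_calls":
        for call in choice["message"]["tool_calls"]:
            fn = call["function"]
            # Arguments arrive as a JSON-encoded string.
            result = handlers[fn["name"]](**json.loads(fn["arguments"]))
            messages.append({
                "role": "tool",
                "tool_call_id": call["id"],
                "content": json.dumps(result),
            })
    return messages

resp = {  # hypothetical response shape for the get_weather example above
    "choices": [{
        "index": 0,
        "finish_reason": "tool_calls",
        "message": {
            "role": "assistant",
            "content": None,
            "tool_calls": [{
                "id": "call_1",
                "type": "function",
                "function": {"name": "get_weather",
                             "arguments": "{\"location\": \"NYC\"}"},
            }],
        },
    }]
}
msgs = handle_tool_calls(resp, {"get_weather": lambda location: {"temp_f": 65}})
```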

Streaming

curl -X POST "https://api.tracelm.ai/v1/chat/completions" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "X-API-Key: $TRACELM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Write a poem"}],
    "stream": true
  }'
Streaming responses are sent as Server-Sent Events (SSE):
data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","choices":[{"delta":{"content":"The"},"index":0}]}

data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","choices":[{"delta":{"content":" capital"},"index":0}]}

data: [DONE]
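Client-side, chunks like these can be reassembled by reading the data: lines, stopping at [DONE], and concatenating each delta's content. A minimal Python sketch (`parse_sse` is illustrative and skips error handling):

```python
import json

def parse_sse(lines):
    """Reassemble streamed content from chat.completion.chunk SSE lines."""
    content = []
    for line in lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        # Some chunks (e.g. the initial role chunk) carry no content.
        if "content" in delta:
            content.append(delta["content"])
    return "".join(content)

stream = [
    'data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","choices":[{"delta":{"content":"The"},"index":0}]}',
    'data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","choices":[{"delta":{"content":" capital"},"index":0}]}',
    'data: [DONE]',
]
```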

With Agent Headers

curl -X POST "https://api.tracelm.ai/v1/chat/completions" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "X-API-Key: $TRACELM_API_KEY" \
  -H "X-Task-ID: task_abc123" \
  -H "X-Conversation-ID: conv_xyz789" \
  -H "X-User-ID: user_456" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
When agent headers are provided, traces are automatically grouped by task, conversation, and user, and detection is enabled.