## Prerequisites
Before you begin, you’ll need:
- A TraceLM account (sign up here)
- An OpenAI API key
- Python 3.8+ or Node.js 16+
## Step 1: Get Your API Key
### Create a Project
After logging in, create a new project from the dashboard. Each project has its own API key and isolated traces.
### Generate an API Key
Navigate to your project settings and generate a TraceLM API key. Your key will start with `lt_`.
> **Warning:** Keep your API key secure. Never commit it to version control or expose it in client-side code.
## Step 2: Install the SDK
**Python**
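Assuming the package is published under the same name as the import used throughout this guide (`tracelm`):

```bash
pip install tracelm
```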
**TypeScript/JavaScript**
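Likewise, assuming the npm package name matches the import:

```bash
npm install tracelm
```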
## Step 3: Initialize the Client
**Python**

```python
from tracelm import TraceLM

tracelm = TraceLM(
    api_key="lt_your-tracelm-key",        # Your TraceLM API key
    openai_api_key="sk-your-openai-key",  # Your OpenAI API key
)
```
**TypeScript/JavaScript**

```typescript
import { TraceLM } from 'tracelm';

const tracelm = new TraceLM({
  apiKey: 'lt_your-tracelm-key',      // Your TraceLM API key
  openaiApiKey: 'sk-your-openai-key', // Your OpenAI API key
});
```
For production, use environment variables instead of hardcoding API keys:

- `TRACELM_API_KEY`
- `OPENAI_API_KEY`
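A minimal shell sketch; the no-argument constructor used in Step 4 assumes these variables are set:

```bash
export TRACELM_API_KEY="lt_your-tracelm-key"
export OPENAI_API_KEY="sk-your-openai-key"
```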
## Step 4: Send Your First Trace
**Python**

```python
from tracelm import TraceLM

# With no arguments, the client reads TRACELM_API_KEY and
# OPENAI_API_KEY from the environment
tracelm = TraceLM()

# Make a traced LLM call
response = tracelm.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello, world!"}]
)

print(response.choices[0].message.content)
```
**TypeScript/JavaScript**

```typescript
import { TraceLM } from 'tracelm';

// With no arguments, the client reads TRACELM_API_KEY and
// OPENAI_API_KEY from the environment
const tracelm = new TraceLM();

// Make a traced LLM call
const response = await tracelm.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Hello, world!' }],
});

console.log(response.choices[0].message.content);
```
## Step 5: View Your Traces
Head to the TraceLM Dashboard to see your traces in real time. You’ll see:
- All LLM calls with request/response content
- Latency and token usage metrics
- Quality signals and detection results
- Task and conversation groupings
## Using Tasks for Agent Observability
If you’re building an AI agent, use tasks to group related LLM calls:
**Python**

```python
from tracelm import TraceLM

tracelm = TraceLM()

# Create a task to group related LLM calls
with tracelm.task(name="booking_flow", user_id="user_123") as task:
    response1 = tracelm.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Find flights to NYC"}]
    )

    response2 = tracelm.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "user", "content": "Find flights to NYC"},
            {"role": "assistant", "content": response1.choices[0].message.content},
            {"role": "user", "content": "Book the cheapest one"}
        ]
    )

    # Complete and run detection
    result = task.complete()
    if result.loops.detected:
        print("Warning: Loop detected!")
    if result.failures.total > 0:
        print(f"Warning: {result.failures.total} tool failures")
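```

Inside the `with` block, both calls are grouped under the `booking_flow` task, and `task.complete()` runs detection on everything recorded so far.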
**TypeScript/JavaScript**

```typescript
import { TraceLM } from 'tracelm';

const tracelm = new TraceLM();

// Create a task to group related LLM calls
const task = tracelm.startTask({
  name: 'booking_flow',
  userId: 'user_123',
});

try {
  const response1 = await tracelm.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: 'Find flights to NYC' }],
  });

  const response2 = await tracelm.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [
      { role: 'user', content: 'Find flights to NYC' },
      { role: 'assistant', content: response1.choices[0].message.content! },
      { role: 'user', content: 'Book the cheapest one' },
    ],
  });

  // Complete and run detection
  const result = await task.complete();
  if (result?.loops.detected) {
    console.log('Warning: Loop detected!');
  }
  if ((result?.failures.total ?? 0) > 0) {
    console.log(`Warning: ${result?.failures.total} tool failures`);
  }
} finally {
  tracelm.endTask();
}
```
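The `try`/`finally` ensures `tracelm.endTask()` runs even if a call throws, so subsequent calls aren’t attributed to a task that should have ended.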
## Next Steps