AI Kit Observability

Track LLM usage, monitor costs, and set up alerts for your AI applications.

npm install @ainative/ai-kit-observability

Usage Tracking

Automatically track every LLM call:

import { createTracker } from '@ainative/ai-kit-observability';

const tracker = createTracker({
  projectId: 'my-project',
  apiKey: 'your-key',
});

// Wrap your LLM calls
const response = await tracker.track(async () => {
  return await client.chat.completions.create({
    model: 'meta-llama/llama-3.3-70b-instruct',
    messages: [{ role: 'user', content: 'Hello' }],
  });
});

// Access metrics
console.log(tracker.metrics);
// { totalCalls: 42, totalTokens: 15000, totalCost: 0.45, avgLatency: 320 }
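The totalCost figure is derived from per-model token pricing. As a rough sketch of that arithmetic (the rate table and estimateCost helper below are illustrative assumptions, not the library's internals or actual prices):

```typescript
// Hypothetical per-1M-token rates in USD; not the library's real price table.
const RATES: Record<string, { input: number; output: number }> = {
  'meta-llama/llama-3.3-70b-instruct': { input: 0.6, output: 0.8 },
  'meta-llama/llama-3.1-8b-instruct': { input: 0.05, output: 0.08 },
};

// Estimate the cost of a single call from its token usage.
function estimateCost(model: string, inputTokens: number, outputTokens: number): number {
  const rate = RATES[model];
  if (!rate) throw new Error(`Unknown model: ${model}`);
  return (inputTokens * rate.input + outputTokens * rate.output) / 1_000_000;
}

// e.g. 1000 input + 500 output tokens on the 70B model
const cost = estimateCost('meta-llama/llama-3.3-70b-instruct', 1000, 500);
```

Summing this per-call estimate across all tracked calls yields a running totalCost like the one shown above.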

Cost Alerts

Get notified when spending exceeds thresholds:

import { setCostAlert } from '@ainative/ai-kit-observability';

setCostAlert({
  dailyLimit: 10.00, // USD
  monthlyLimit: 200.00,
  onAlert: (alert) => {
    console.log(`Cost alert: ${alert.type} — $${alert.currentSpend}`);
    // Send notification, pause processing, etc.
  },
});
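Under the hood, an alert is just a threshold comparison against current spend. A minimal sketch of that decision logic (the checkThresholds helper and the exact alert shape are illustrative assumptions, not the library's implementation):

```typescript
type CostAlert = { type: 'daily' | 'monthly'; currentSpend: number; limit: number };

// Return the alerts that current spend levels trigger against the configured limits.
function checkThresholds(
  dailySpend: number,
  monthlySpend: number,
  limits: { dailyLimit: number; monthlyLimit: number },
): CostAlert[] {
  const alerts: CostAlert[] = [];
  if (dailySpend >= limits.dailyLimit) {
    alerts.push({ type: 'daily', currentSpend: dailySpend, limit: limits.dailyLimit });
  }
  if (monthlySpend >= limits.monthlyLimit) {
    alerts.push({ type: 'monthly', currentSpend: monthlySpend, limit: limits.monthlyLimit });
  }
  return alerts;
}

// $12 spent today against a $10 daily limit triggers a daily alert only.
const alerts = checkThresholds(12, 150, { dailyLimit: 10, monthlyLimit: 200 });
```

Each triggered alert would then be passed to the onAlert callback shown above.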

Dashboard Metrics

Export metrics for your own dashboards:

const summary = tracker.getSummary({ period: '7d' });

// {
//   totalCalls: 1542,
//   totalTokens: 425000,
//   totalCost: 12.45,
//   avgLatency: 280,
//   errorRate: 0.008,
//   byModel: {
//     'llama-3.3-70b-instruct': { calls: 1200, tokens: 380000, cost: 10.20 },
//     'llama-3.1-8b-instruct': { calls: 342, tokens: 45000, cost: 2.25 },
//   }
// }
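For a dashboard table, the byModel map can be flattened into sorted rows. A small sketch (the data mirrors the example summary above; toRows is an illustrative helper, not a library export):

```typescript
type ModelStats = { calls: number; tokens: number; cost: number };

// Flatten a byModel map into dashboard rows, most expensive model first.
function toRows(byModel: Record<string, ModelStats>): Array<{ model: string } & ModelStats> {
  return Object.entries(byModel)
    .map(([model, stats]) => ({ model, ...stats }))
    .sort((a, b) => b.cost - a.cost);
}

const rows = toRows({
  'llama-3.3-70b-instruct': { calls: 1200, tokens: 380000, cost: 10.2 },
  'llama-3.1-8b-instruct': { calls: 342, tokens: 45000, cost: 2.25 },
});
```

Each row can then feed a chart or table component directly, since the model name travels with its stats.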

Logging Integration

Connect to your existing logging infrastructure:

const tracker = createTracker({
  projectId: 'my-project',
  logger: {
    info: (msg, data) => myLogger.info(msg, data),
    error: (msg, data) => myLogger.error(msg, data),
  },
});
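Any object exposing matching info and error methods satisfies this interface; it does not have to be a full logging framework. For example, a minimal in-memory logger (illustrative only; the two-argument signature is taken from the example above, and createMemoryLogger is not part of the library):

```typescript
type LogEntry = { level: 'info' | 'error'; msg: string; data?: unknown };

// Collects log entries in memory — handy in tests, or as a buffer before shipping logs elsewhere.
function createMemoryLogger() {
  const entries: LogEntry[] = [];
  return {
    entries,
    info: (msg: string, data?: unknown) => entries.push({ level: 'info', msg, data }),
    error: (msg: string, data?: unknown) => entries.push({ level: 'error', msg, data }),
  };
}

const memLogger = createMemoryLogger();
memLogger.info('llm.call', { model: 'llama-3.1-8b-instruct', tokens: 120 });
```

An object shaped like memLogger could then be passed as the logger option in place of myLogger.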