# LangWatch Observability
LangWatch (GitHub) is an LLM Ops platform for monitoring, experimenting, measuring and improving LLM pipelines, with a fair-code distribution model.
## Setup
Obtain your `LANGWATCH_API_KEY` from the LangWatch dashboard.
```bash
pnpm add langwatch
```
Ensure `LANGWATCH_API_KEY` is set:

```bash
LANGWATCH_API_KEY='your_api_key_here'
```
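If you load environment variables in code (for example with `dotenv` in a local Node.js script, which is an assumption here rather than a LangWatch requirement), a minimal sketch to fail fast when the key is missing looks like this:

```ts
// Assumption: dotenv loads .env locally; LangWatch itself only needs the
// LANGWATCH_API_KEY environment variable to be present at runtime.
import 'dotenv/config';

if (!process.env.LANGWATCH_API_KEY) {
  throw new Error('LANGWATCH_API_KEY is not set');
}
```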
## Basic Concepts
- Each message triggering your LLM pipeline as a whole is captured with a Trace.
- A Trace contains multiple Spans, which are the steps inside your pipeline.
- Traces can be grouped together on the LangWatch Dashboard by having the same `thread_id` in their metadata, making the individual messages become part of a conversation (see the sketch after this list).
- It is also recommended to provide the `user_id` metadata to track user analytics.
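As an illustration, two traces that carry the same `thread_id` in their metadata are shown as one conversation. This is a minimal sketch using the manual client covered in the Manual Integration section below; the ids are placeholder values:

```ts
import { LangWatch } from 'langwatch';

const langwatch = new LangWatch();

// Both traces share the same threadId, so LangWatch groups them into one
// conversation; userId ties them to a specific user for analytics.
const firstMessageTrace = langwatch.getTrace({
  metadata: { threadId: 'conversation-42', userId: 'user-123' },
});
const followUpTrace = langwatch.getTrace({
  metadata: { threadId: 'conversation-42', userId: 'user-123' },
});
```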
## Configuration
The AI SDK supports tracing via the Next.js OpenTelemetry integration. By using the `LangWatchExporter`, you can automatically send those traces to LangWatch.
First, you need to install the necessary dependencies:
```bash
npm install @vercel/otel langwatch @opentelemetry/api-logs @opentelemetry/instrumentation @opentelemetry/sdk-logs
```
Then, set up OpenTelemetry for your application. Follow the steps below depending on whether you are using the AI SDK with Next.js or with plain Node.js:
If you are using Next.js, you need to enable the `instrumentationHook` in your `next.config.js` file if you haven't already:
```js
/** @type {import('next').NextConfig} */
const nextConfig = {
  experimental: {
    instrumentationHook: true,
  },
};

module.exports = nextConfig;
```
Next, you need to create a file named `instrumentation.ts` (or `.js`) in the root directory of the project (or inside the `src` folder if using one), with `LangWatchExporter` as the `traceExporter`:
```ts
import { registerOTel } from '@vercel/otel';
import { LangWatchExporter } from 'langwatch';

export function register() {
  registerOTel({
    serviceName: 'next-app',
    traceExporter: new LangWatchExporter(),
  });
}
```
(Read more about Next.js OpenTelemetry configuration on the official guide)
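If you are running the AI SDK on plain Node.js instead of Next.js, the same exporter can be registered with the OpenTelemetry Node SDK. This is a minimal sketch, assuming you additionally install `@opentelemetry/sdk-node` and that `LangWatchExporter` is used as a standard OpenTelemetry span exporter:

```ts
import { NodeSDK } from '@opentelemetry/sdk-node';
import { LangWatchExporter } from 'langwatch';

// Register the LangWatch exporter before any AI SDK calls are made.
const sdk = new NodeSDK({
  serviceName: 'node-app',
  traceExporter: new LangWatchExporter(),
});

sdk.start();
```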
Finally, enable `experimental_telemetry` tracking on the AI SDK calls you want to trace:
```ts
const result = await generateText({
  model: openai('gpt-4o-mini'),
  prompt:
    'Explain why a chicken would make a terrible astronaut, be creative and humorous about it.',
  experimental_telemetry: {
    isEnabled: true,
    // optional metadata
    metadata: {
      userId: 'myuser-123',
      threadId: 'mythread-123',
    },
  },
});
```
That's it! Your messages will now be visible on LangWatch.
## Example Project
You can find a full example project with a more complex pipeline, integrating the AI SDK with LangWatch, on our GitHub.
## Manual Integration
The docs below cover manual integration. If you are not using the AI SDK OpenTelemetry integration, you can manually start a trace to capture your messages:
```ts
import { LangWatch } from 'langwatch';

const langwatch = new LangWatch();

const trace = langwatch.getTrace({
  metadata: { threadId: 'mythread-123', userId: 'myuser-123' },
});
```
Then, you can start an LLM span inside the trace with the input about to be sent to the LLM.
```ts
const span = trace.startLLMSpan({
  name: 'llm',
  model: model,
  input: {
    type: 'chat_messages',
    value: messages,
  },
});
```
This will capture the LLM input and register the time the call started. Once the LLM call is done, end the span so the finish timestamp is registered, and capture the output and token metrics, which will be used for cost calculation, e.g.:
```ts
span.end({
  output: {
    type: 'chat_messages',
    value: [chatCompletion.choices[0]!.message],
  },
  metrics: {
    promptTokens: chatCompletion.usage?.prompt_tokens,
    completionTokens: chatCompletion.usage?.completion_tokens,
  },
});
```
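Putting these pieces together, here is a minimal end-to-end sketch of the manual integration. The LLM call itself uses the official `openai` package, which is an assumption here; any client that returns the response message and token usage works the same way:

```ts
import OpenAI from 'openai';
import { LangWatch } from 'langwatch';

const openai = new OpenAI();
const langwatch = new LangWatch();

const model = 'gpt-4o-mini';
const messages = [
  { role: 'user' as const, content: 'Tell me a fun fact about observability.' },
];

// One trace per incoming message; threadId/userId are placeholder values.
const trace = langwatch.getTrace({
  metadata: { threadId: 'mythread-123', userId: 'myuser-123' },
});

// Start the LLM span with the input before calling the model.
const span = trace.startLLMSpan({
  name: 'llm',
  model,
  input: { type: 'chat_messages', value: messages },
});

const chatCompletion = await openai.chat.completions.create({ model, messages });

// End the span with the output and the token metrics used for cost calculation.
span.end({
  output: {
    type: 'chat_messages',
    value: [chatCompletion.choices[0]!.message],
  },
  metrics: {
    promptTokens: chatCompletion.usage?.prompt_tokens,
    completionTokens: chatCompletion.usage?.completion_tokens,
  },
});
```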
## Resources
For more information and examples, see the resources below:
## Support
If you have questions or need help, join our community: