Integration guides
OpenTelemetry
Integrate with an existing OpenTelemetry setup, or send OTel spans directly to Otis.
Follow this guide if either is true:
- Your app already has OpenTelemetry instrumentation and you want to add Otis without rewriting it.
- You're using a non-wrapped AI library that emits OpenTelemetry spans (OpenLLMetry, the official OTel GenAI instrumentations, or custom OTel spans) and you want Otis to process them.
If you're starting fresh, use the Next.js, Node.js, or Serverless guides instead. Those use otis.wrap(ai) and don't require any OpenTelemetry knowledge.
Approach 1 — Add an Otis span processor to your existing provider
If you already configure a TracerProvider for another backend, add OtisSpanProcessor alongside your existing processors:
```typescript
import { BasicTracerProvider } from "@opentelemetry/sdk-trace-base";
import { OtisSpanProcessor } from "@runotis/sdk";

const provider = new BasicTracerProvider();
provider.addSpanProcessor(new OtisSpanProcessor({
  apiKey: process.env.OTIS_API_KEY!,
}));
provider.register();
```

Every span produced on this provider is exported to both your existing backend and Otis.
Approach 2 — Use OtisExporter with @vercel/otel
For Next.js apps on Vercel that use @vercel/otel:
```typescript
import { registerOTel } from "@vercel/otel";
import { OtisExporter } from "@runotis/sdk";

export function register() {
  registerOTel({
    serviceName: "my-app",
    traceExporter: new OtisExporter({
      apiKey: process.env.OTIS_API_KEY!,
    }),
  });
}
```

Approach 3 — Point an OTLP exporter at Otis ingest
Any OpenTelemetry instrumentation that supports OTLP HTTP export can send directly to the Otis ingest endpoint:
https://ingest.runotis.com/v1/traces

Set the `Authorization` header to `Bearer <OTIS_API_KEY>`. The exact configuration depends on your instrumentation; most support the `OTEL_EXPORTER_OTLP_ENDPOINT` and `OTEL_EXPORTER_OTLP_HEADERS` environment variables.
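As an alternative to environment variables, the exporter can be wired up in code. A minimal sketch, assuming OpenTelemetry JS v1.x (where `addSpanProcessor` exists) and the standard `@opentelemetry/exporter-trace-otlp-http` package:

```typescript
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";

// Send OTLP/HTTP traces directly to the Otis ingest endpoint.
const exporter = new OTLPTraceExporter({
  url: "https://ingest.runotis.com/v1/traces",
  headers: { Authorization: `Bearer ${process.env.OTIS_API_KEY}` },
});

const provider = new NodeTracerProvider();
// Batch spans before export to reduce request volume.
provider.addSpanProcessor(new BatchSpanProcessor(exporter));
provider.register();
```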
Don't combine with otis.wrap()
OtisSpanProcessor / OtisExporter and otis.wrap() both route spans to Otis. Using both on the same calls sends duplicate spans. Pick one integration approach per call site.
Recognized GenAI attributes
The Otis trace processor automatically recognizes GenAI semantic convention attributes from any OpenTelemetry-based instrumentation (OTel GenAI semantic conventions, OpenLLMetry, or manually emitted spans). Multiple naming conventions are checked in priority order; the first non-empty value wins.
User message
| Priority | Attribute |
|---|---|
| 1 | ai.prompt.lastUserMessage |
| 2 | ai.prompt.messages |
| 3 | ai.prompt |
| 4 | gen_ai.input.messages (OTel GenAI semconv v1.37.0+) |
| 5 | gen_ai.prompt.messages (older) |
| 6 | gen_ai.prompt (OpenLLMetry) |
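The first-non-empty rule above can be illustrated with a small sketch. `resolveAttribute` and `USER_MESSAGE_KEYS` are illustrative names, not part of the Otis SDK; the key list mirrors the user-message priority table:

```typescript
type Attributes = Record<string, unknown>;

// Priority order from the user-message table above.
const USER_MESSAGE_KEYS = [
  "ai.prompt.lastUserMessage",
  "ai.prompt.messages",
  "ai.prompt",
  "gen_ai.input.messages",
  "gen_ai.prompt.messages",
  "gen_ai.prompt",
];

// Walk the keys in priority order; the first non-empty string wins.
function resolveAttribute(attrs: Attributes, keys: string[]): string | undefined {
  for (const key of keys) {
    const value = attrs[key];
    if (typeof value === "string" && value.length > 0) return value;
  }
  return undefined;
}
```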
Response message
| Priority | Attribute |
|---|---|
| 1 | ai.response.text |
| 2 | gen_ai.output.messages |
| 3 | gen_ai.response.text |
| 4 | gen_ai.completion |
Token usage
| Priority | Input tokens | Output tokens |
|---|---|---|
| 1 | ai.usage.promptTokens | ai.usage.completionTokens |
| 2 | ai.usage.input_tokens | ai.usage.output_tokens |
| 3 | gen_ai.usage.prompt_tokens | gen_ai.usage.completion_tokens |
| 4 | gen_ai.usage.input_tokens | gen_ai.usage.output_tokens |
Model and provider
| Priority | Model | Provider |
|---|---|---|
| 1 | ai.model.id | ai.model.provider |
| 2 | gen_ai.request.model | gen_ai.system |
| 3 | — | gen_ai.provider.name |
Context IDs
| Field | Span attributes (priority order) |
|---|---|
| Session | session.id, session_id |
| User | user.id, user_id, enduser.id |
| Document | document.id, document_id, project.id, workspace.id, file.id |
| Chat | chat.id, chat_id, conversation.id, thread.id |
GenAI span event fallback
If span attributes contain no prompt/response content, extraction falls back to GenAI span events:
- gen_ai.user.message, gen_ai.system.message, gen_ai.tool.message → user message
- gen_ai.assistant.message, gen_ai.choice → response message
Content is extracted from the content event attribute. Span attributes always take priority over events.
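As an illustration of the fallback order (a sketch, not the actual Otis processor — `extractUserMessage` is a hypothetical helper, and only two attribute keys are shown for brevity):

```typescript
interface SpanEvent {
  name: string;
  attributes?: Record<string, unknown>;
}

// Event names that map to the user message, per the list above.
const USER_EVENT_NAMES = ["gen_ai.user.message", "gen_ai.system.message", "gen_ai.tool.message"];

function extractUserMessage(
  attrs: Record<string, unknown>,
  events: SpanEvent[],
): string | undefined {
  // Span attributes always take priority over events.
  const fromAttr = attrs["ai.prompt.lastUserMessage"] ?? attrs["gen_ai.prompt"];
  if (typeof fromAttr === "string" && fromAttr.length > 0) return fromAttr;
  // Otherwise, fall back to GenAI span events; content lives in the
  // `content` event attribute.
  for (const event of events) {
    if (USER_EVENT_NAMES.includes(event.name)) {
      const content = event.attributes?.["content"];
      if (typeof content === "string") return content;
    }
  }
  return undefined;
}
```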
SpanKind conventions
Spans are expected to follow the OTel GenAI semantic conventions for SpanKind:
| Span type | SpanKind | Examples |
|---|---|---|
| AI inference | CLIENT | ai.generateText, ai.doGenerate, anthropic.messages.create |
| Tool execution | INTERNAL | ai.tool.getWeather |
| Agent invocation | CLIENT | claude-agent.query |
Coexistence with otis.wrap()
otis.wrap(ai) intercepts the model's doGenerate / doStream methods directly. It does not call provider.register() and does not touch the global OpenTelemetry tracer provider, so the SDK coexists safely with any existing OpenTelemetry setup.
The two approaches produce independent span trees on separate providers:
- wrap() creates spans on Otis's internal provider → sent to Otis
- Your existing tracer creates spans on your own provider → sent to your backend
If you have experimental_telemetry configured on AI SDK calls for another backend, keep it; wrap() passes it through unchanged:
```typescript
// Both active — Otis gets model-wrapping spans, your other backend gets AI SDK telemetry spans
const { streamText } = otis.wrap(ai);

const result = await streamText({
  model: anthropic("claude-sonnet-4-6"),
  prompt: "Hello",
  experimental_telemetry: { isEnabled: true, tracer: myOwnTracer },
});
```

This means you can start with your existing OpenTelemetry setup and incrementally adopt otis.wrap() for calls where you want the richer AI-specific span tree (tool execution, streaming metrics, auto-detected nested calls).
OpenTelemetry version compatibility
The SDK supports both OpenTelemetry v1.x and v2.x through duck typing. OtisSpanProcessor and OtisExporter work with either version.
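One way such duck typing can work (a hypothetical sketch, not the SDK's actual internals): OTel JS v1.x providers expose a mutable `addSpanProcessor` method, while v2.x removed it in favor of passing processors at construction, so the presence of the method distinguishes the two:

```typescript
// Attach a span processor if the provider supports the v1.x mutable API.
// Returns false on v2.x-style providers, where processors must be passed
// to the provider constructor instead.
function attachProcessor(provider: unknown, processor: unknown): boolean {
  const p = provider as { addSpanProcessor?: (sp: unknown) => void };
  if (typeof p.addSpanProcessor === "function") {
    p.addSpanProcessor(processor); // v1.x path
    return true;
  }
  return false; // v2.x path: supply the processor at construction
}
```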
Integration checklist
- OTIS_API_KEY set in env
- OtisSpanProcessor or OtisExporter added to your tracer provider (or OTLP exporter pointed at ingest)
- Not double-routing: if using otis.wrap(), not also using OtisSpanProcessor on the same calls
- Spans emitted by your existing instrumentation use one of the recognized attribute sets
- A test AI call produces a trace that reaches Otis with model, tokens, and prompt/response attributes populated
Next steps
- Tracing — wrap(), traced(), span.log
- Events and exceptions — sendEvent, sendException
- Privacy — PII redaction and identifier hashing
- Customization — beforeSend, debug logging, configuration
- Identity — user and group identity, auth provider pass-through
- Feedback signals — link user feedback to specific AI responses