OpenTelemetry

Integrate with an existing OpenTelemetry setup, or send OTel spans directly to Otis.

Follow this guide if either of the following is true:

  • Your app already has OpenTelemetry instrumentation and you want to add Otis without rewriting it.
  • You're using a non-wrapped AI library that emits OpenTelemetry spans (OpenLLMetry, the official OTel GenAI instrumentations, or custom OTel spans) and you want Otis to process them.

If you're starting fresh, use the Next.js, Node.js, or Serverless guides instead. Those use otis.wrap(ai) and don't require any OpenTelemetry knowledge.

Approach 1 — Add an Otis span processor to your existing provider

If you already configure a TracerProvider for another backend, add OtisSpanProcessor alongside your existing processors:

import { BasicTracerProvider } from "@opentelemetry/sdk-trace-base";
import { OtisSpanProcessor } from "@runotis/sdk";

const provider = new BasicTracerProvider();
provider.addSpanProcessor(new OtisSpanProcessor({
  apiKey: process.env.OTIS_API_KEY!,
}));
provider.register();

Every span produced on this provider is exported to both your existing backend and Otis. (On OpenTelemetry JS SDK 2.x, where addSpanProcessor was removed, pass the processor via the provider constructor's spanProcessors option instead.)

Approach 2 — Use OtisExporter with @vercel/otel

For Next.js apps on Vercel that use @vercel/otel:

instrumentation.ts
import { registerOTel } from "@vercel/otel";
import { OtisExporter } from "@runotis/sdk";

export function register() {
  registerOTel({
    serviceName: "my-app",
    traceExporter: new OtisExporter({
      apiKey: process.env.OTIS_API_KEY!,
    }),
  });
}

Approach 3 — Point an OTLP exporter at Otis ingest

Any OpenTelemetry instrumentation that supports OTLP HTTP export can send directly to the Otis ingest endpoint:

https://ingest.runotis.com/v1/traces

Set the Authorization header to Bearer <OTIS_API_KEY>. The exact configuration depends on your instrumentation; most support OTEL_EXPORTER_OTLP_ENDPOINT and OTEL_EXPORTER_OTLP_HEADERS environment variables.
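For example, most OTLP HTTP exporters can be configured entirely through the standard environment variables. A sketch (the variable names come from the OpenTelemetry exporter specification; whether your exporter appends the /v1/traces signal path to the base endpoint varies by SDK, so the signal-specific variable is shown as well):

```shell
# Base endpoint: OTLP HTTP exporters append /v1/traces for the traces signal
export OTEL_EXPORTER_OTLP_ENDPOINT="https://ingest.runotis.com"

# Or set the traces endpoint explicitly; this value is used as-is
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="https://ingest.runotis.com/v1/traces"

# Some SDKs require the space to be percent-encoded (Authorization=Bearer%20...)
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer ${OTIS_API_KEY}"
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"
```

Check your instrumentation's docs for which of these variables it honors; some SDKs read only the base endpoint, others only the signal-specific one.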

Do not combine with otis.wrap()

OtisSpanProcessor / OtisExporter and otis.wrap() both route spans to Otis. Using both on the same calls sends duplicate spans. Pick one integration approach per call site.

Recognized GenAI attributes

The Otis trace processor automatically recognizes GenAI semantic convention attributes from any OpenTelemetry-based instrumentation (OTel GenAI semantic conventions, OpenLLMetry, or manually emitted spans). Multiple naming conventions are checked in priority order; the first non-empty value wins.

User message

  Priority  Attribute
  1         ai.prompt.lastUserMessage
  2         ai.prompt.messages
  3         ai.prompt
  4         gen_ai.input.messages (OTel GenAI semconv v1.37.0+)
  5         gen_ai.prompt.messages (older)
  6         gen_ai.prompt (OpenLLMetry)

Response message

  Priority  Attribute
  1         ai.response.text
  2         gen_ai.output.messages
  3         gen_ai.response.text
  4         gen_ai.completion

Token usage

  Priority  Input tokens                Output tokens
  1         ai.usage.promptTokens       ai.usage.completionTokens
  2         ai.usage.input_tokens       ai.usage.output_tokens
  3         gen_ai.usage.prompt_tokens  gen_ai.usage.completion_tokens
  4         gen_ai.usage.input_tokens   gen_ai.usage.output_tokens
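The priority rule behind these tables can be sketched as a "first non-empty value wins" lookup over the span's attribute bag. An illustrative sketch (not the Otis processor's actual code), shown here with the user-message key list:

```typescript
type Attrs = Record<string, unknown>;

// Return the first attribute value that is present and non-empty,
// walking the keys in priority order.
function firstNonEmpty(attrs: Attrs, keys: string[]): unknown {
  for (const key of keys) {
    const v = attrs[key];
    if (v !== undefined && v !== null && v !== "") return v;
  }
  return undefined;
}

// User-message keys, in the priority order from the table above.
const USER_MESSAGE_KEYS = [
  "ai.prompt.lastUserMessage",
  "ai.prompt.messages",
  "ai.prompt",
  "gen_ai.input.messages",
  "gen_ai.prompt.messages",
  "gen_ai.prompt",
];

// An OpenLLMetry-style span carries only gen_ai.prompt; since the
// higher-priority Vercel AI SDK keys are absent, that value is used.
const attrs: Attrs = { "gen_ai.prompt": "What is the weather?" };
firstNonEmpty(attrs, USER_MESSAGE_KEYS); // "What is the weather?"
```

The same lookup applies to the response, token-usage, and model tables, each with its own key list.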

Model and provider

  Priority  Model                 Provider
  1         ai.model.id           ai.model.provider
  2         gen_ai.request.model  gen_ai.system
  3         (none)                gen_ai.provider.name

Context IDs

  Field     Span attributes (priority order)
  Session   session.id, session_id
  User      user.id, user_id, enduser.id
  Document  document.id, document_id, project.id, workspace.id, file.id
  Chat      chat.id, chat_id, conversation.id, thread.id

GenAI span event fallback

If span attributes contain no prompt/response content, extraction falls back to GenAI span events:

  • gen_ai.user.message, gen_ai.system.message, gen_ai.tool.message → user message
  • gen_ai.assistant.message, gen_ai.choice → response message

Content is extracted from the content event attribute. Span attributes always take priority over events.
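The fallback can be sketched like this (an illustrative sketch, not the processor's actual code; the event names are the OTel GenAI event conventions listed above):

```typescript
interface SpanEvent {
  name: string;
  attributes?: Record<string, unknown>;
}

// Event names that map to the user message vs. the response message.
const USER_EVENT_NAMES = [
  "gen_ai.user.message",
  "gen_ai.system.message",
  "gen_ai.tool.message",
];
const RESPONSE_EVENT_NAMES = ["gen_ai.assistant.message", "gen_ai.choice"];

// Return the `content` attribute of the first matching event, if any.
function contentFromEvents(
  events: SpanEvent[],
  names: string[],
): string | undefined {
  for (const ev of events) {
    if (names.includes(ev.name)) {
      const content = ev.attributes?.["content"];
      if (typeof content === "string" && content !== "") return content;
    }
  }
  return undefined;
}
```

In the real pipeline this path only runs when the attribute lookup above produced nothing, which is why span attributes always win over events.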

SpanKind conventions

Spans are expected to follow the OTel GenAI semantic conventions for SpanKind:

  Span type         SpanKind  Examples
  AI inference      CLIENT    ai.generateText, ai.doGenerate, anthropic.messages.create
  Tool execution    INTERNAL  ai.tool.getWeather
  Agent invocation  CLIENT    claude-agent.query
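A consumer of these spans might sanity-check the kind against the span name. A minimal sketch (the local Kind enum mirrors @opentelemetry/api's SpanKind numeric values; the name prefixes come from the examples in the table and are not an exhaustive rule):

```typescript
// Mirrors @opentelemetry/api SpanKind values for the two kinds used here.
enum Kind {
  INTERNAL = 0,
  CLIENT = 2,
}

// Tool-execution spans are INTERNAL; inference and agent spans are CLIENT.
function expectedKind(spanName: string): Kind {
  if (spanName.startsWith("ai.tool.")) return Kind.INTERNAL;
  return Kind.CLIENT;
}
```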

Coexistence with otis.wrap()

otis.wrap(ai) intercepts the model's doGenerate / doStream methods directly. It does not call provider.register() and does not touch the global OpenTelemetry tracer provider, so the SDK coexists safely with any existing OpenTelemetry setup.

The two approaches produce independent span trees on separate providers:

  • wrap() creates spans on Otis's internal provider → sent to Otis
  • Your existing tracer creates spans on your own provider → sent to your backend

If you have experimental_telemetry configured on AI SDK calls for another backend, keep it; wrap() passes it through unchanged:

// Both active — Otis gets model-wrapping spans, your other backend gets AI SDK telemetry spans
const { streamText } = otis.wrap(ai);
const result = await streamText({
  model: anthropic("claude-sonnet-4-6"),
  prompt: "Hello",
  experimental_telemetry: { isEnabled: true, tracer: myOwnTracer },
});

This means you can start with your existing OpenTelemetry setup and incrementally adopt otis.wrap() for calls where you want the richer AI-specific span tree (tool execution, streaming metrics, auto-detected nested calls).

OpenTelemetry version compatibility

The SDK supports both OpenTelemetry v1.x and v2.x through duck typing. OtisSpanProcessor and OtisExporter work with either version.

Integration checklist

  • OTIS_API_KEY set in env
  • OtisSpanProcessor or OtisExporter added to your tracer provider (or OTLP exporter pointed at ingest)
  • Not double-routing: if using otis.wrap(), not also using OtisSpanProcessor on the same calls
  • Spans emitted by your existing instrumentation use one of the recognized attribute sets
  • A test AI call produces a trace that reaches Otis with model, tokens, and prompt/response attributes populated
