Integration guides

# Node.js server
Integration for long-running Node.js servers (Express, Fastify, Hono, NestJS, and the Node adapters of meta-frameworks).
This guide is for long-running Node.js servers — Express, Fastify, Hono, NestJS, and the Node adapters of meta-frameworks (Remix, SvelteKit, Nuxt, Astro). If you're on Lambda, Vercel Functions, Cloudflare Workers, Netlify, or Firebase Functions, read the Serverless guide instead. For framework-specific init hooks (SvelteKit hooks.server.ts, Nuxt server/plugins, NestJS main.ts), see the Framework reference; this page covers the shared long-running-server patterns.
## Install

```shell
npm install @runotis/sdk
```

## 1. Initialize once at startup
```typescript
// otis.ts
import { initOtis } from "@runotis/sdk";

export const otis = initOtis({
  apiKey: process.env.OTIS_API_KEY!,
  serviceName: "my-app",
});
```

Import this module once at the top of your entry point so `initOtis` runs before any traced code:
```typescript
// index.ts (entry point)
import "./otis"; // Must be first
import express from "express";
import { startServer } from "./server";

startServer();
```

Do not re-initialize per request. `initOtis` sets up a long-lived exporter and should run exactly once per process.
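If you need a belt-and-braces guard, ES module caching already gives you "exactly once": a top-level `initOtis()` call runs a single time no matter how many files import the module. The toy guard below models that behavior — the `Client` type and `initOnce` helper are hypothetical stand-ins, not SDK APIs:

```typescript
// Sketch: module-level "init once" semantics. The cached `client` plays the
// role of the SDK client created by a top-level initOtis() call.
type Client = { serviceName: string };

let client: Client | null = null;

function initOnce(serviceName: string): Client {
  if (client) return client; // later imports/calls get the same instance
  client = { serviceName };
  return client;
}
```

Because the module is evaluated once, every `import { otis } from "./otis"` across your codebase shares the same exporter.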
## 2. Graceful shutdown

Long-running processes lose spans if they exit without flushing. The SDK auto-registers a `beforeExit` hook, but add an explicit shutdown on signals:
```typescript
import { otis } from "./otis";

async function shutdown() {
  await otis.shutdown();
  process.exit(0);
}

process.on("SIGTERM", shutdown);
process.on("SIGINT", shutdown);
```

`otis.shutdown()` flushes pending spans and closes the exporter.
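In practice both signals can arrive, or a second Ctrl-C can land mid-flush. A small idempotence guard keeps the flush from running twice — `makeShutdown` below is our own hypothetical helper, with `flush` standing in for `otis.shutdown()`:

```typescript
// Sketch: guard against a second signal while a flush is already in progress.
function makeShutdown(flush: () => Promise<void>) {
  let started = false;
  return async function shutdown(): Promise<boolean> {
    if (started) return false; // second SIGTERM/SIGINT: flush already running
    started = true;
    await flush();
    return true; // caller may now process.exit(0)
  };
}
```

Register the returned function for both `SIGTERM` and `SIGINT`, and call `process.exit(0)` only when it resolves `true`.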
## 3. Chat route (Express example)
```typescript
import { otis } from "../otis";
import { contextFromChatRequest } from "@runotis/sdk";
import * as ai from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import type { Request, Response } from "express";

export async function handleChat(req: Request, res: Response) {
  const { userId, sessionId } = req.session; // from your auth middleware

  const ctx = contextFromChatRequest(req.body, {
    userId,
    sessionId,
  });

  await otis.withContext(ctx, async () => {
    const { streamText: tracedStreamText } = otis.wrap(ai);
    const result = await tracedStreamText({
      model: anthropic("claude-sonnet-4-6"),
      messages: req.body.messages,
    });
    result.pipeTextStreamToResponse(res);
  });
}
```

Long-running servers don't need `serverless: true` or `waitUntil`. The exporter batches spans in the background, and the process stays alive to flush them.
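To see why no `waitUntil` is needed, it helps to picture the conceptual shape of a background batching exporter: spans accumulate in a buffer and are shipped in batches, and in a long-running process there is always a "later" for the flush. The class below is illustrative only, not the SDK's actual exporter:

```typescript
// Sketch: a background span batcher. Spans buffer up and are exported in
// batches — either when the batch fills or on an explicit flush (as in
// otis.shutdown()). Names here are hypothetical.
class SpanBatcher<T> {
  private buffer: T[] = [];

  constructor(
    private exportBatch: (batch: T[]) => void,
    private maxBatch = 100,
  ) {}

  add(span: T) {
    this.buffer.push(span);
    if (this.buffer.length >= this.maxBatch) this.flush();
  }

  flush() {
    if (this.buffer.length === 0) return;
    const batch = this.buffer;
    this.buffer = [];
    this.exportBatch(batch);
  }
}
```

On serverless, the runtime can freeze the process before the batch ships — hence `waitUntil` there, and nothing here.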
### If your API is on a separate host from your browser app

Pass the session ID through a header:
```typescript
// Client sends: X-Otis-Session-Id: sess_xxx
const sessionId = req.get("x-otis-session-id");
const ctx = contextFromChatRequest(req.body, { userId, sessionId });
```

Or have the browser include cookies on the cross-origin request (this requires `credentials: "include"` on `fetch` and CORS setup on the server).
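If you go the cookie route, note that credentialed CORS is stricter than the usual setup: `Access-Control-Allow-Origin` must echo the exact origin (browsers reject `"*"` with credentials), and any custom header like `X-Otis-Session-Id` must be listed in `Access-Control-Allow-Headers`. A minimal sketch, where the origin value is an assumption you'd replace with your app's:

```typescript
// Sketch: CORS response headers for a credentialed cross-origin request.
const ALLOWED_ORIGIN = "https://app.example.com"; // your browser app's origin

function corsHeaders(requestOrigin: string | undefined): Record<string, string> {
  if (requestOrigin !== ALLOWED_ORIGIN) return {}; // unknown origin: no CORS grant
  return {
    "Access-Control-Allow-Origin": requestOrigin, // exact origin, never "*"
    "Access-Control-Allow-Credentials": "true",
    "Access-Control-Allow-Headers": "Content-Type, X-Otis-Session-Id",
  };
}
```

Apply these on both the preflight `OPTIONS` response and the actual `POST` response.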
## 4. Browser side
If your server renders HTML or you have an SPA frontend, set up the browser SDK separately. See Browser & consent for the browser init pattern and GDPR handling.
## 5. Tracing your own code
Beyond AI calls, wrap any function to trace it:
```typescript
import { otis } from "./otis";
import { generateText } from "ai";

const fetchContext = otis.traced(async function fetchContext(userId: string, span) {
  span.log({ metadata: { source: "vector-store" } });
  return await vectorStore.search(userId);
});

const handleChat = otis.traced(async function handleChat(msg: string, userId: string, span) {
  const context = await fetchContext(userId); // auto-nests as child span
  const { text } = await generateText({...}); // auto-nests as child span
  span.log({ output: text });
  return text;
});
```

See Tracing for the full `traced()` + `span.log()` API.
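The "auto-nests as child span" behavior is the kind of thing Node's `AsyncLocalStorage` makes possible: ambient context carried across `await` boundaries, so a nested call can see its caller as the parent. The sketch below is an illustrative model of that mechanism, not the SDK's actual implementation:

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

// Sketch: ambient span nesting via AsyncLocalStorage. Each tracedSketch call
// pushes its name onto the path its caller established, so nested calls
// observe the full parent chain without any explicit wiring.
const spanPath = new AsyncLocalStorage<string[]>();

function tracedSketch<T>(name: string, fn: () => T): T {
  const parents = spanPath.getStore() ?? [];
  return spanPath.run([...parents, name], fn); // children inherit the path
}

function currentPath(): string {
  return (spanPath.getStore() ?? []).join(" > ");
}
```

This is why no parent-span argument needs to be threaded through `fetchContext` by hand: the context rides along with the async execution.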
## Integration checklist

- `OTIS_API_KEY` set in env
- `initOtis()` runs once at startup, before traced code loads
- Signal handlers call `otis.shutdown()`
- Chat route uses `withContext` + `wrap(ai)` + `contextFromChatRequest`
- Auth middleware provides `userId` and `sessionId`
- A test AI call produces a trace with `user.id`, `chat.id`, and token usage
## Next steps

- Tracing — supported frameworks, `traced()`, `span.log()`
- Events and exceptions — `sendEvent`, `sendException`
- Identity — user identity and auth provider pass-through
- Feedback signals — record user feedback on AI responses
- Privacy — PII redaction and identifier hashing
- Customization — `beforeSend`, debug logging, configuration