# Serverless functions

Integration for AWS Lambda, Cloudflare Workers, Netlify, Firebase Functions, and other short-lived runtimes.
This guide covers short-lived runtimes that freeze or terminate the function after the response is returned — AWS Lambda, Vercel Functions, Cloudflare Workers, Netlify Functions, Firebase Functions, and similar. If you're on Next.js, use the Next.js guide. For long-running Node servers, see the Node.js guide. For framework-specific init hooks (SvelteKit, Nuxt, Astro, Remix), see the Framework reference.
## You must flush before the function freezes
The critical difference from a long-running server: spans must flush before the function freezes. If you return a response without awaiting the flush, spans are lost.
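To make the failure mode concrete, here is a minimal sketch of the anti-pattern (the handler and `runTracedWork` are hypothetical stand-ins for any traced AI call):

```typescript
// Anti-pattern: flush() is started but never awaited, so the export
// fetch() is still in flight when the runtime freezes the function.
export async function handler(event: APIGatewayEvent) {
  const text = await runTracedWork(event); // any otis-traced AI call
  otis.flush(); // fire-and-forget: this span may never reach ingest
  return { statusCode: 200, body: JSON.stringify({ text }) };
}
```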
## Install
```bash
npm install @runotis/sdk
```

## Initialize with `serverless: true`
```typescript
import { initOtis } from "@runotis/sdk";

const otis = initOtis({
  apiKey: process.env.OTIS_API_KEY!,
  serviceName: "my-function",
  serverless: true,
});
```

`serverless: true` sets `maxExportBatchSize: 1` and `scheduledDelayMillis: 0`, so every span triggers an immediate export `fetch()`.
The export `fetch()` itself is asynchronous, so `serverless: true` alone is not enough: you must also ensure the function stays alive until the export completes. See the patterns below.
## Pattern A — Non-streaming handlers

`await` the full result, then `await otis.flush()` before returning:
```typescript
import * as ai from "ai"; // namespace import so otis.wrap(ai) can wrap the module
import { anthropic } from "@ai-sdk/anthropic";
import type { APIGatewayEvent } from "aws-lambda";

export async function handler(event: APIGatewayEvent) {
  const { text } = await otis.withContext(
    { userId: event.headers["x-user-id"] },
    async () => {
      const { generateText: tracedGenerateText } = otis.wrap(ai);
      return await tracedGenerateText({
        model: anthropic("claude-sonnet-4-6"),
        prompt: JSON.parse(event.body!).prompt,
      });
    },
  );

  await otis.flush(); // Wait for export before returning
  return {
    statusCode: 200,
    body: JSON.stringify({ text }),
  };
}
```

## Pattern B — Streaming handlers
The response is returned before the stream completes. On Vercel Functions, use `waitUntil`:
```typescript
import { waitUntil } from "@vercel/functions";

export async function POST(req: Request) {
  const { streamText: tracedStreamText } = otis.wrap(ai);
  const result = await tracedStreamText({
    model: anthropic("claude-sonnet-4-6"),
    messages: (await req.json()).messages,
  });

  waitUntil(otis.flush());
  return result.toTextStreamResponse();
}
```

On Cloudflare Workers, use `ctx.waitUntil()`:
```typescript
export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext) {
    const result = await tracedStreamText({ /* ... */ });

    ctx.waitUntil(otis.flush());
    return result.toTextStreamResponse();
  },
};
```

If no `waitUntil` primitive is available, flush inside the stream's `onFinish` callback:
```typescript
const result = await tracedStreamText({
  model: anthropic("claude-sonnet-4-6"),
  messages,
  onFinish: () => otis.flush(), // Flush once the stream finishes
});
```

## Streaming span lifecycle
`streamText` and `streamObject` spans end on whichever completion signal fires first:

- The `.usage`/`.text`/`.object` promises resolving, OR
- The `textStream`/`partialObjectStream` iterator completing (drained, broken out of, or thrown out of)
A 100 ms grace period after iteration completion lets attribute-capture promises populate the span before it ends. This means spans close reliably even when a consumer iterates the stream but never reads `.usage`.
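For example, a consumer that only drains `textStream` (a sketch, assuming the `result` from the `tracedStreamText` calls above) still gets a closed span:

```typescript
// Drain the stream without ever awaiting result.usage. The iterator
// completes once the stream is exhausted, and the span ends ~100 ms
// later, after attribute-capture promises have populated it.
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```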
The typical pattern (handler returns `result.toTextStreamResponse()` and awaits response delivery) gives the stream enough time to be consumed fully before the handler resolves. Combined with `waitUntil(otis.flush())`, spans flush reliably within the function's lifetime.
## Cloudflare Workers — edge entry

For Workers, import from the edge entry. It omits `AsyncLocalStorage`-dependent features that aren't available in all Workers configurations:
```typescript
import { initOtis } from "@runotis/sdk/edge";
```

For Workers with `nodejs_compat` enabled, you can use the main entry and get the full feature set, including `withContext` and `traced`.
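For reference, the flag is enabled in your Worker configuration; a minimal `wrangler.toml` snippet:

```toml
# wrangler.toml: opt the Worker into Node.js compatibility APIs
compatibility_flags = ["nodejs_compat"]
```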
## Other serverless platforms

The patterns above generalize to other short-lived platforms:
- Netlify Functions — same shape as AWS Lambda. Streaming handlers can call `context.waitUntil(otis.flush())` (Netlify's edge runtime); classic functions should `await otis.flush()` before returning. Set env vars in the Netlify dashboard or in `netlify.toml` under `[build.environment]`.
- Firebase Functions — Lambda-shaped (Node runtime, freezes after response). Use Pattern A: `await otis.flush()` before returning the response (see the sketch below). Set env vars via `firebase functions:secrets:set`.
- AWS Amplify (SSR) and SST — both delegate to AWS Lambda under the hood. The Lambda patterns above apply directly.
- Google Cloud Functions and Azure Functions — same Lambda shape; `await otis.flush()` before returning.
For all of these, `serverless: true` on `initOtis` is required (see Initialize with `serverless: true` above).
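As a concrete example, here is a minimal Pattern A sketch for Firebase Functions. It assumes the `otis`, `ai`, and `anthropic` setup from the examples above; `onRequest` is the firebase-functions v2 HTTPS trigger, and the function name is illustrative:

```typescript
import { onRequest } from "firebase-functions/v2/https";

export const generate = onRequest(async (req, res) => {
  const { generateText: tracedGenerateText } = otis.wrap(ai);
  const { text } = await tracedGenerateText({
    model: anthropic("claude-sonnet-4-6"),
    prompt: req.body.prompt,
  });

  await otis.flush(); // Pattern A: wait for the export before responding
  res.json({ text });
});
```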
## Integration checklist

- `serverless: true` set on `initOtis`
- Non-streaming: `await otis.flush()` before returning
- Streaming: `waitUntil(otis.flush())` (or the equivalent platform primitive)
- A test AI call produces a trace that actually reaches ingest (check after a cold start and again on a warm invocation)
## Next steps

- Next.js guide — if you're using Next.js, start there
- Tracing — `wrap()` variants, `traced()`, `span.log`
- Events and exceptions — `sendEvent`, `sendException`
- Privacy — PII redaction and identifier hashing
- Customization — `beforeSend`, debug logging, configuration