Documentation Index
Fetch the complete documentation index at: https://sedataai.mintlify.app/llms.txt
Use this file to discover all available pages before exploring further.
I don’t see any traces in the dashboard
Walk this list top to bottom — the first match is almost always the cause.

Did you call instrumentServer before registerTool?
Tools registered before instrumentation are not patched. Move instrumentServer(...) to immediately after new McpServer(...).
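To see why the ordering matters, here is a runnable sketch with local stand-ins for the real McpServer (from @modelcontextprotocol/sdk) and for instrumentServer. The patching mechanics below are an approximation for illustration, not the SDK's actual source:

```typescript
type Handler = (...args: unknown[]) => unknown;

// Local stand-in for the real McpServer class.
class McpServer {
  tools = new Map<string, Handler>();
  registerTool(name: string, handler: Handler) {
    this.tools.set(name, handler);
  }
}

const patched: string[] = [];

// Stand-in for instrumentServer: it replaces registerTool so every LATER
// registration gets wrapped. Tools registered EARLIER keep the raw handler.
function instrumentServer(server: McpServer) {
  const original = server.registerTool.bind(server);
  server.registerTool = (name, handler) => {
    patched.push(name); // real code would wrap `handler` in a span here
    original(name, handler);
  };
}

const server = new McpServer();
server.registerTool("early", () => "untraced"); // BAD: before instrumentation
instrumentServer(server);
server.registerTool("late", () => "traced");    // GOOD: after instrumentation

console.log(patched); // only the tool registered after instrumentation
```

In this sketch only "late" ends up patched, which is exactly the symptom: the early tool still works but emits no spans.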
Is samplingRate accidentally 0?
samplingRate: 0 drops everything. Default is 1.0. Print it on startup:
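For example, log the effective rate before handing it to instrumentServer so a stray 0 is caught immediately. The field name samplingRate comes from this guide; the env-var override (SEDATA_SAMPLING_RATE) is our own convention, not the SDK's:

```typescript
// Resolve the sampling rate once, visibly, at startup.
const raw = process.env.SEDATA_SAMPLING_RATE;
const samplingRate = raw === undefined ? 1.0 : Number(raw); // SDK default: 1.0

console.log(`[telemetry] samplingRate = ${samplingRate}`);
if (samplingRate === 0) {
  console.warn("[telemetry] samplingRate is 0; every span will be dropped");
}
```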
Is enableTracing turned off?
enableTracing: false disables the exporter entirely. Default is true.
Is the exporterEndpoint right?
Should look like https://otel.sedata-ai.tech/v1. The package appends /traces and /metrics. A trailing slash is fine. A path like /v1/traces would result in /v1/traces/traces — wrong.
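A quick startup sanity check can catch the doubled-path mistake. The helper below is our own, not an SDK export; it only encodes the rule stated above (the package appends the signal path itself):

```typescript
// Reject endpoints that already end in a signal path the SDK will append.
function endpointLooksRight(url: string): boolean {
  return !/\/(traces|metrics)\/?$/.test(url);
}

console.log(endpointLooksRight("https://otel.sedata-ai.tech/v1"));        // true
console.log(endpointLooksRight("https://otel.sedata-ai.tech/v1/"));       // true
console.log(endpointLooksRight("https://otel.sedata-ai.tech/v1/traces")); // false: would become /v1/traces/traces
```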
Is the auth header set?
Switch to the console exporter and confirm spans are produced at all. If they print, the issue is auth or endpoint. If they don’t print, the issue is upstream of export (instrumentation or sampler).
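A runnable sketch of the console-exporter check. instrumentServer here is a local stub carrying only the option named in this guide (exporterType); the real call signature may differ, so match it to your setup:

```typescript
// Stub standing in for the real instrumentServer so this sketch runs.
function instrumentServer(
  _server: unknown,
  opts: { exporterType: "console" | "otlp-http" },
) {
  return {
    exporterType: opts.exporterType,
    async shutdown() {
      // the real SDK flushes pending span batches here
    },
  };
}

async function main() {
  const telemetry = instrumentServer({}, { exporterType: "console" });
  // ...call a tool or two and watch stdout for span output...
  await telemetry.shutdown(); // flush before exit, or short runs print nothing
  return telemetry.exporterType;
}

main().then((t) => console.log(`exporter: ${t}`));
```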
Is there more than one @opentelemetry/api in node_modules?
Run npm ls @opentelemetry/api. Multiple versions create no-op tracers. Dedupe with npm dedupe or pin a single version.
Does the process exit before flushing?
Short-lived scripts can exit before the batch processor flushes. Always await telemetry.shutdown() before process.exit.

I see traces but metrics are missing
Are you using exporterType: 'console'?
The console exporter has no metric reader — by design, for local debugging. Switch to otlp-http to get metrics.
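A config sketch for getting metrics locally. The field names are the ones this guide mentions; anything beyond them is an assumption, so check against your SDK version:

```typescript
// Explicit config for local metric debugging.
const telemetryConfig = {
  exporterType: "otlp-http",     // the console exporter has no metric reader
  exporterEndpoint: "https://otel.sedata-ai.tech/v1",
  enableMetrics: true,           // the default, but explicit beats overridden
  metricExportIntervalMs: 2000,  // default 5000; lower for faster local feedback
};

console.log(JSON.stringify(telemetryConfig));
```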
Is enableMetrics: false?
Default is true. If your config object came from another module, double-check it isn’t being overridden.
Is metricExportIntervalMs too high?
Default is 5000 ms. If you set 60000, you’ll wait a minute between flushes. For local dev, try 2000.
Did you await shutdown() at exit?
Without it, the last batch — including mcp.server.session.duration — is dropped.

Safety checks are always failing
Is the API key set?
The wrapper uses the key passed to instrumentServer via exporterAuth.token or exporterAuth.apiKey. Confirm it’s loaded:
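For instance, fail fast when the key never reached the process. SEDATA_API_KEY is an assumed env-var name; use whatever your deployment actually sets. The exporterAuth shape follows this guide:

```typescript
// Make a missing key loudly visible at startup.
const apiKey = process.env.SEDATA_API_KEY;
console.log(`[safety] api key loaded: ${apiKey ? "yes" : "NO"}`);

// Shape per this guide: exporterAuth.token (or exporterAuth.apiKey).
const exporterAuth = apiKey ? { token: apiKey } : undefined;
if (!exporterAuth) {
  console.warn("[safety] no key; safety checks will fail until one is provided");
}
```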
Is the parameter actually a string?
The wrapper only checks string values. Numbers, objects, and undefined pass through untouched and no safety attributes are recorded.
Is the parameter name correct?
parameterName must match a field on your inputSchema. Typos result in silent skips.
Is the network reachable?
Try curl https://api.sedata-ai.tech/security/safety-check — if curl can’t reach it, neither can the SDK.

My handler runs even when content is flagged
The wrapper short-circuits on flagged content — it does not call your handler. If your handler runs anyway, one of these is true:

- The wrapper isn’t actually applied (you registered the raw handler).
- The safety API failed (logged warning) and the wrapper fail-opened.
- The parameter wasn’t a string and was skipped.
The mcp.safety_check.flagged === true attribute means the wrapper ran. If you see it AND your handler ran, file an issue with the trace id.
The error spam in dev is annoying
The wrapper prints debug logs by default (e.g. Safety check wrapper called with params: ...). These are deliberate while the package is at 0.x. To silence:
TypeScript: type ‘any’ on tool params
The safetyCheck wrapper currently types its handler as (params: any) => any. To get proper typing, annotate the inner function:
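A minimal illustration of the annotation. ToolParams is a hypothetical schema type and the safetyCheck call shape is assumed; substitute the type that matches your inputSchema:

```typescript
// Hypothetical params type mirroring your inputSchema fields.
interface ToolParams {
  query: string;
  limit?: number;
}

// Annotate the INNER function: params stays ToolParams here even though
// the wrapper (e.g. safetyCheck(opts, handler)) widens it to any.
const handler = (params: ToolParams) => `searched for ${params.query}`;

console.log(handler({ query: "hello" }));
```

The annotation costs nothing at runtime; it only restores the compile-time checking that the wrapper's `any` signature erases.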
Still stuck?
FAQ
Quick answers to common questions.
Support
How to reach the team.