This guide shows how to enable Visor telemetry and tracing with OpenTelemetry, export traces/metrics, auto‑instrument Node libraries, and generate a static HTML trace report.
Enable telemetry with the file sink (serverless NDJSON traces):

```bash
export VISOR_TELEMETRY_ENABLED=true
export VISOR_TELEMETRY_SINK=file
export VISOR_TRACE_DIR=output/traces  # optional, defaults to output/traces
visor --config ./.visor.yaml --output json

# Inspect traces
ls output/traces/*.ndjson
```

Telemetry is configured via environment variables (highest precedence):
| Variable | Description | Default |
|---|---|---|
| `VISOR_TELEMETRY_ENABLED` | Enable telemetry (true/false) | `false` |
| `VISOR_TELEMETRY_SINK` | Sink type: `otlp`, `file`, or `console` | `file` |
| `VISOR_TRACE_DIR` | Directory for trace files | `output/traces` |
| `VISOR_TRACE_REPORT` | Generate static HTML trace report (true/false) | `false` |
| `VISOR_TELEMETRY_AUTO_INSTRUMENTATIONS` | Enable auto-instrumentations (true/false) | `false` |
| `VISOR_TELEMETRY_FULL_CAPTURE` | Capture full AI prompts/responses in spans | `false` |
| `VISOR_FALLBACK_TRACE_FILE` | Explicit path for NDJSON trace file | auto-generated |
| `OTEL_EXPORTER_OTLP_ENDPOINT` | OTLP endpoint URL (for traces, metrics, and logs) | - |
| `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT` | OTLP endpoint for traces (overrides above) | - |
| `OTEL_EXPORTER_OTLP_METRICS_ENDPOINT` | OTLP endpoint for metrics (overrides above) | - |
| `OTEL_EXPORTER_OTLP_LOGS_ENDPOINT` | OTLP endpoint for logs (overrides above) | - |
| `OTEL_EXPORTER_OTLP_HEADERS` | Headers for OTLP requests (e.g., auth tokens) | - |
Examples:

```bash
# File sink (serverless mode)
VISOR_TELEMETRY_ENABLED=true \
VISOR_TELEMETRY_SINK=file \
visor --config ./.visor.yaml

# OTLP sink with Grafana LGTM (or any OTLP-compatible backend)
VISOR_TELEMETRY_ENABLED=true \
VISOR_TELEMETRY_SINK=otlp \
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318 \
visor --config ./.visor.yaml

# With static HTML trace report
VISOR_TELEMETRY_ENABLED=true \
VISOR_TRACE_REPORT=true \
visor --config ./.visor.yaml
```

The easiest way to get a full observability stack locally is Grafana LGTM: a single Docker container with Grafana, Tempo (traces), Loki (logs), Prometheus (metrics), and an OpenTelemetry Collector:
```bash
# Start the all-in-one observability stack
docker run -d --name grafana-otel \
  -p 3000:3000 \
  -p 4317:4317 \
  -p 4318:4318 \
  -v grafana-otel-data:/data \
  grafana/otel-lgtm:latest

# Run Visor with OTLP telemetry
VISOR_TELEMETRY_ENABLED=true \
VISOR_TELEMETRY_SINK=otlp \
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318 \
visor --config .visor.yaml

# Open Grafana at http://localhost:3000 (admin/admin)
# - Explore → Tempo for traces
# - Explore → Loki for logs (correlated with trace IDs)
# - Explore → Prometheus for metrics
```

Telemetry can also be configured via the `telemetry` section in your config file:
```yaml
version: "1.0"
telemetry:
  enabled: true
  sink: file  # otlp | file | console
  otlp:
    protocol: http
    endpoint: ${OTEL_EXPORTER_OTLP_ENDPOINT}
    headers: ${OTEL_EXPORTER_OTLP_HEADERS}
  file:
    dir: output/traces
    ndjson: true
  tracing:
    auto_instrumentations: true
    trace_report:
      enabled: true
```

Note: Environment variables take precedence over config file settings.
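The precedence rule can be illustrated with a small sketch (the `resolve` helper is hypothetical, for illustration only; it is not Visor's actual code):

```python
import os

def resolve(env_name, config_value, default):
    """Environment variable wins over the config file, which wins over the default."""
    env = os.environ.get(env_name)
    if env is not None:
        return env
    if config_value is not None:
        return config_value
    return default

# The config file says sink: file, but the environment overrides it:
os.environ["VISOR_TELEMETRY_SINK"] = "otlp"
print(resolve("VISOR_TELEMETRY_SINK", "file", "file"))  # -> otlp
```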
When using `VISOR_TELEMETRY_SINK=file` (the default), Visor writes simplified spans as NDJSON to `output/traces/run-<timestamp>.ndjson`. This is ideal for serverless/CI environments where you can't run a persistent collector.
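Since each NDJSON line is a standalone JSON object for one span, the files are easy to inspect with a few lines of scripting. A minimal sketch (the `name` and `durationMs` field names here are illustrative assumptions, not Visor's documented schema):

```python
import json

# Hypothetical span lines; real files live at output/traces/run-<timestamp>.ndjson
ndjson = "\n".join([
    '{"name": "visor.run", "durationMs": 5200}',
    '{"name": "visor.check", "durationMs": 4100}',
])

# Parse one span per line, skipping blanks, and find the slowest one
spans = [json.loads(line) for line in ndjson.splitlines() if line.strip()]
slowest = max(spans, key=lambda s: s["durationMs"])
print(slowest["name"])  # -> visor.run
```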
You can then ingest these files using the OTel Collector `filelog` receiver:
```yaml
# otel-collector-config.yaml
receivers:
  filelog:
    include: [ "/work/output/traces/*.ndjson" ]
    operators:
      - type: json_parser
        parse_from: body
exporters:
  otlphttp:
    endpoint: http://tempo:4318
service:
  pipelines:
    traces:
      receivers: [filelog]
      exporters: [otlphttp]
```

For real-time streaming of traces, metrics, and logs to a collector:
```bash
export VISOR_TELEMETRY_ENABLED=true
export VISOR_TELEMETRY_SINK=otlp
export OTEL_EXPORTER_OTLP_ENDPOINT=https://collector.example.com

# Optional: authentication headers
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer your-token"
```

When using the OTLP sink, Visor automatically enables all three observability signals:
- Traces — Exported via `@opentelemetry/exporter-trace-otlp-http`. Shows the full execution flow with spans for checks, routing, and AI calls.
- Metrics — Exported via `@opentelemetry/exporter-metrics-otlp-http` and `@opentelemetry/sdk-metrics`. Includes histograms and counters for check durations, provider durations, forEach items, issues, and `fail_if` triggers.
- Logs — Exported via `@opentelemetry/exporter-logs-otlp-http` and `@opentelemetry/sdk-logs`. All Visor logger output (info, warn, error, debug) is bridged to the OTel Logs pipeline with trace context correlation — click a log line in Grafana to jump to the associated trace.
Each signal can use a separate endpoint if needed:
```bash
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=https://tempo.example.com/v1/traces
export OTEL_EXPORTER_OTLP_METRICS_ENDPOINT=https://mimir.example.com/v1/metrics
export OTEL_EXPORTER_OTLP_LOGS_ENDPOINT=https://loki.example.com/v1/logs
```

Enable with `VISOR_TELEMETRY_AUTO_INSTRUMENTATIONS=true` or in config:
```yaml
telemetry:
  tracing:
    auto_instrumentations: true
```

This activates `@opentelemetry/auto-instrumentations-node` (http/undici/child_process/etc.) and correlates external calls with Visor spans via context propagation.
Note: Auto-instrumentations require `@opentelemetry/auto-instrumentations-node` as an optional dependency. If it is not installed, Visor skips auto-instrumentation gracefully.
Enable with `VISOR_TRACE_REPORT=true` or in config:

```yaml
telemetry:
  tracing:
    trace_report:
      enabled: true
```

This outputs two files per run to your trace directory:

- `*.trace.json` — simplified span JSON
- `*.report.html` — self-contained HTML timeline (open locally in your browser)
Visor emits spans with detailed attributes for debugging:

Check spans:

- `visor.check.id` — Check identifier
- `visor.check.type` — Provider type (ai, command, etc.)
- `visor.check.input.context` — Liquid template context (sanitized)
- `visor.check.output` — Check result (truncated if large)
- `visor.foreach.index` — Index for forEach iterations

Wave (scheduler) spans:

- `wave` — Current execution wave number
- `wave_kind` — Wave type
- `session_id` — Session identifier
- `level_size` — Number of checks in the wave
- `level_checks_preview` — Preview of checks in the wave

Routing spans:

- `trigger` — What triggered the routing decision
- `action` — Routing action (retry, goto, run)
- `source` — Source check
- `target` — Target check(s)
- `scope` — Execution scope
- `goto_event` — Event override for goto
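These attributes also make trace files easy to query offline. For example, pulling every routing decision out of an NDJSON trace might look like the sketch below (it assumes each span line carries an `attributes` object, which is an illustrative assumption; the attribute keys come from the lists above):

```python
import json

# Hypothetical NDJSON span lines using the documented routing attributes
lines = [
    '{"name": "visor.routing", "attributes": {"action": "retry", "source": "lint", "target": "lint"}}',
    '{"name": "visor.check", "attributes": {"visor.check.id": "lint", "visor.check.type": "command"}}',
    '{"name": "visor.routing", "attributes": {"action": "goto", "source": "lint", "target": "security"}}',
]

# Keep only spans whose attributes describe a routing action
routing = [
    span["attributes"]
    for span in (json.loads(line) for line in lines)
    if span["attributes"].get("action") in {"retry", "goto", "run"}
]
for decision in routing:
    print(f'{decision["source"]} -> {decision["target"]} ({decision["action"]})')
# -> lint -> lint (retry)
# -> lint -> security (goto)
```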
By default, sensitive environment variables (containing `api_key`, `secret`, `token`, `password`, `auth`, `credential`, `private_key`) are automatically redacted in span attributes.

To capture full AI prompts and responses (for debugging), set:

```bash
export VISOR_TELEMETRY_FULL_CAPTURE=true
```

Warning: Full capture may include sensitive data. Use only in secure debugging environments.
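A redaction heuristic like the one described above can be sketched as follows (an illustrative snippet; Visor's actual implementation may differ):

```python
# Substrings the documentation lists as sensitive
SENSITIVE = ("api_key", "secret", "token", "password", "auth", "credential", "private_key")

def redact(attributes: dict) -> dict:
    """Replace values whose key contains a sensitive substring (case-insensitive)."""
    return {
        key: "[REDACTED]" if any(s in key.lower() for s in SENSITIVE) else value
        for key, value in attributes.items()
    }

print(redact({"GITHUB_TOKEN": "ghp_abc123", "VISOR_TRACE_DIR": "output/traces"}))
# -> {'GITHUB_TOKEN': '[REDACTED]', 'VISOR_TRACE_DIR': 'output/traces'}
```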
Visor wraps each execution in a root span (`visor.run`). You can correlate traces with GitHub workflow runs by publishing the `trace_id` in logs or checks.
Example workflow step:

```yaml
- name: Run Visor with tracing
  run: |
    export VISOR_TELEMETRY_ENABLED=true
    export VISOR_TELEMETRY_SINK=otlp
    export OTEL_EXPORTER_OTLP_ENDPOINT=${{ secrets.OTEL_ENDPOINT }}
    export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer ${{ secrets.OTEL_TOKEN }}"
    npx -y @probelabs/visor@latest --config ./.visor.yaml --output json
```

For file-based tracing in CI (useful for artifact upload):
```yaml
- name: Run Visor with file traces
  run: |
    export VISOR_TELEMETRY_ENABLED=true
    export VISOR_TELEMETRY_SINK=file
    export VISOR_TRACE_DIR=./traces
    npx -y @probelabs/visor@latest --config ./.visor.yaml

- name: Upload traces
  uses: actions/upload-artifact@v4
  with:
    name: visor-traces
    path: ./traces/*.ndjson
```

- No spans? Verify `VISOR_TELEMETRY_ENABLED=true` and check that the OpenTelemetry packages are installed.
- Missing metrics? Install `@opentelemetry/exporter-metrics-otlp-http` and `@opentelemetry/sdk-metrics`.
- Auto-instrumentations not working? Install `@opentelemetry/auto-instrumentations-node`.
- Large span attributes? Visor truncates attributes at 10,000 characters. For full capture, use `VISOR_TELEMETRY_FULL_CAPTURE=true`.
- Telemetry Reference — Complete reference of all spans, metrics, and events
- Grafana Dashboards — Pre-built dashboards for Visor telemetry
- Debugging Guide — Comprehensive debugging techniques
- Debug Visualizer — Live execution visualization with `--debug-server`
- Telemetry RFC — Design rationale and architecture