Adds a second log-severity threshold that runs at the persist boundary
inside the async ingest pipeline. Logs at or above INGEST_MIN_SEVERITY but
below STORE_MIN_SEVERITY are dropped from the BatchCreateAll write but
still flow through LogCallback, so in-memory enrichers (vectordb, GraphRAG
Drain template mining, span/trace correlation) keep seeing them.
Use case: keep SQLite small while letting in-memory anomaly detection
benefit from the verbose stream. Example:
INGEST_MIN_SEVERITY=DEBUG STORE_MIN_SEVERITY=WARN
Default behavior is unchanged — empty STORE_MIN_SEVERITY means "use the
ingest threshold", so the gate is a no-op out of the box. Setting the
store threshold ≤ the ingest threshold is also a no-op (logged as a warning).
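As a rough illustration of that startup decision (empty or too-low STORE_MIN_SEVERITY leaves the gate off), here is a standalone Go sketch. The severity ranks follow OTLP severity numbers; none of the names below are taken from the actual main.go wiring, which uses the exported ParseSeverity instead.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// OTLP-style severity ranks used only for this sketch (TRACE=1 ... FATAL=21);
// the real mapping lives behind internal/ingest.ParseSeverity.
var sevRank = map[string]int{
	"TRACE": 1, "DEBUG": 5, "INFO": 9, "WARN": 13, "ERROR": 17, "FATAL": 21,
}

func parseSeverity(s string) (int, bool) {
	r, ok := sevRank[strings.ToUpper(strings.TrimSpace(s))]
	return r, ok
}

func main() {
	ingestRaw := os.Getenv("INGEST_MIN_SEVERITY")
	if ingestRaw == "" {
		ingestRaw = "INFO" // documented default
	}
	storeRaw := os.Getenv("STORE_MIN_SEVERITY") // "" means "same as ingest"

	ingestSev, _ := parseSeverity(ingestRaw)
	storeSev, ok := parseSeverity(storeRaw)

	switch {
	case storeRaw == "":
		fmt.Println("store gate off: STORE_MIN_SEVERITY empty, ingest threshold applies everywhere")
	case !ok:
		fmt.Printf("store gate off: unrecognized STORE_MIN_SEVERITY %q\n", storeRaw)
	case storeSev <= ingestSev:
		fmt.Printf("store gate off (warning): STORE_MIN_SEVERITY %s <= INGEST_MIN_SEVERITY %s\n", storeRaw, ingestRaw)
	default:
		fmt.Printf("store gate on: persist only >= %s, enrich everything >= %s\n", storeRaw, ingestRaw)
	}
}
```

Running it with `INGEST_MIN_SEVERITY=DEBUG STORE_MIN_SEVERITY=WARN` hits the last branch; the other branches correspond to the no-op cases described above.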
Implementation
- internal/config/config.go: add StoreMinSeverity field + STORE_MIN_SEVERITY env
- internal/ingest/pipeline.go: storeMinSeverity field, SetStoreMinSeverity
setter, filter inside process() that splits b.Logs into persisted vs
callback-only, atomic StoreFiltered counter, exposed via PipelineStats
(see the sketch after this list)
- internal/ingest/otlp.go: export ParseSeverity wrapper for main.go wiring
- main.go: parse both thresholds, only enable the gate when store > ingest,
warn on misconfig
- CLAUDE.md: document both thresholds with use-case examples
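The pipeline.go bullet above describes the persist-boundary split; the following is a minimal sketch of that shape using stand-in types rather than the real `internal/ingest` API. `logRecord`, `pipeline`, and `persist` are invented for illustration; only the behavior — skip the DB write, still call the log callback, count the drop — mirrors the description.

```go
package ingestsketch

import "sync/atomic"

// Stand-in types for the sketch; the real pipeline lives in internal/ingest/pipeline.go.
type logRecord struct {
	SeverityNumber int
	Body           string
}

type pipeline struct {
	storeMinSeverity int          // 0 = gate disabled (legacy behavior)
	storeFiltered    atomic.Int64 // surfaced via PipelineStats in the real code
	logCallback      func([]logRecord)
	persist          func([]logRecord) error // stands in for the BatchCreateAll write
}

// process splits a batch into rows that are persisted and rows that only
// feed the in-memory enrichers via the log callback.
func (p *pipeline) process(batch []logRecord) error {
	toPersist := batch
	if p.storeMinSeverity > 0 {
		toPersist = make([]logRecord, 0, len(batch))
		for _, rec := range batch {
			if rec.SeverityNumber >= p.storeMinSeverity {
				toPersist = append(toPersist, rec)
			} else {
				p.storeFiltered.Add(1) // dropped from the DB write only
			}
		}
	}
	if p.logCallback != nil {
		p.logCallback(batch) // every record still reaches vectordb/GraphRAG/correlation
	}
	if len(toPersist) == 0 {
		return nil
	}
	return p.persist(toPersist)
}
```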
Tests
- TestPipeline_StoreMinSeverity_DropsBelowThresholdFromPersist: WARN gate
drops INFO from persist but INFO still reaches LogCallback; StoreFiltered=1
(see the sketch after this list)
- TestPipeline_StoreMinSeverity_Disabled_PersistsAllLogs: legacy path
unchanged when SetStoreMinSeverity not called
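Against the stand-in types from the sketch above, the first test could look roughly like this — purely illustrative, not the actual TestPipeline_StoreMinSeverity_DropsBelowThresholdFromPersist:

```go
package ingestsketch

import "testing"

// An INFO record is dropped from the persisted slice but still reaches the
// callback, and the drop is counted once.
func TestSketch_StoreGate_DropsInfoFromPersistOnly(t *testing.T) {
	var persisted, callbackSeen []logRecord
	p := &pipeline{
		storeMinSeverity: 13, // WARN
		logCallback:      func(b []logRecord) { callbackSeen = append(callbackSeen, b...) },
		persist:          func(b []logRecord) error { persisted = append(persisted, b...); return nil },
	}

	batch := []logRecord{
		{SeverityNumber: 9, Body: "info: cache warmup"},   // below the store gate
		{SeverityNumber: 17, Body: "error: write failed"}, // above the store gate
	}
	if err := p.process(batch); err != nil {
		t.Fatal(err)
	}

	if len(persisted) != 1 || persisted[0].SeverityNumber != 17 {
		t.Fatalf("expected only the ERROR record to persist, got %v", persisted)
	}
	if len(callbackSeen) != 2 {
		t.Fatalf("expected both records to reach the callback, got %d", len(callbackSeen))
	}
	if got := p.storeFiltered.Load(); got != 1 {
		t.Fatalf("expected StoreFiltered=1, got %d", got)
	}
}
```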
Verification
- go vet ./... clean
- go build ./... clean
- go test -race -timeout 180s ./... — 518 passed in 28 packages (+2 new)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
CLAUDE.md — 1 addition, 0 deletions
@@ -217,6 +217,7 @@ Key settings in `internal/config/config.go`:
- `LOG_FTS_ENABLED` (false) — when truthy (`true`/`yes`/`on`/`1`), provisions the SQLite FTS5 `logs_fts` virtual table + sync triggers at startup; when false, log-search uses vectordb (semantic) plus a 24h-clamped LIKE fallback. Toggle off and reclaim disk via `POST /api/admin/drop_fts` (refused while the flag is on).
- `GRAPHRAG_WORKER_COUNT` (16), `GRAPHRAG_EVENT_QUEUE_SIZE` (100000) — sized for 100–200 services; raise further if `otelcontext_graphrag_events_dropped_total` climbs
+ - `INGEST_MIN_SEVERITY` (`INFO`), `STORE_MIN_SEVERITY` (`""` = same as ingest) — two-tier log severity gate. The ingest gate runs at the OTLP receiver and **drops the log entirely** below the threshold (no in-memory enrichment either). The store gate runs at the persist boundary inside the async pipeline (`internal/ingest/pipeline.go:process`) and **only skips the DB row write** — the log still flows through `LogCallback` so vectordb indexing, GraphRAG Drain template mining, and span/trace correlation see it. Use case: `INGEST_MIN_SEVERITY=DEBUG STORE_MIN_SEVERITY=WARN` keeps SQLite small while letting in-memory anomaly detection benefit from the verbose stream. Setting `STORE_MIN_SEVERITY` ≤ `INGEST_MIN_SEVERITY` is a no-op (logged as a warning at startup). Drops surface via `Pipeline.Stats().StoreFiltered`.
- `INGEST_ASYNC_ENABLED` (true), `INGEST_PIPELINE_QUEUE_SIZE` (50000), `INGEST_PIPELINE_WORKERS` (8) — async ingest pipeline (`internal/ingest/pipeline.go`). Hybrid backpressure: <90% accept all, 90–100% drop healthy batches (errors/slow always pass), 100% return gRPC `RESOURCE_EXHAUSTED`. Set `INGEST_ASYNC_ENABLED=false` to revert to synchronous DB writes inside `Export()`. Drops surface as `otelcontext_ingest_pipeline_dropped_total{signal,reason}`.
- `GRPC_MAX_RECV_MB` (16), `GRPC_MAX_CONCURRENT_STREAMS` (1000) — OTLP gRPC server caps, validated to 1..256 and 1..1_000_000
- `RETENTION_BATCH_SIZE` (50000), `RETENTION_BATCH_SLEEP_MS` (1) — purge pacing; raise the sleep on busy production DBs