
Backends overview

hermes-otel speaks plain OTLP/HTTP, so any OTLP-compatible backend should work — but these are the ones that ship with first-class support, docker-compose files, and (where relevant) smoke-test coverage.

Supported today

| Backend | Signals | Deployment | Account / cost |
|---|---|---|---|
| Phoenix | Traces + metrics | Local (single container) · Arize AX cloud | OSS, no account · commercial cloud |
| Langfuse | Traces | Local (docker compose) · Cloud | OSS, no account · free tier + paid |
| LangSmith | Traces | Cloud only (self-host = enterprise) | Free personal tier · paid tiers |
| SigNoz | Traces + metrics + logs | Local (docker compose) · Cloud | OSS, no account · free tier + paid cloud |
| Jaeger | Traces | Local (single container) | OSS, no account needed |
| Grafana Tempo | Traces | Local (docker compose) · Grafana Cloud | OSS, no account · free tier + paid cloud |
| Grafana LGTM | Traces + metrics + logs | Local (single container) | OSS, no account |
| Uptrace | Traces + metrics + logs | Local (docker compose) · Self-hosted | OSS · premium license for HA features |
| OpenObserve | Traces + metrics + logs | Local (single container) · Self-hosted HA | OSS, no account |
| Generic OTLP | Depends on collector | Anywhere | — |

Quick picks

"I just want to see a trace, right now, on my laptop"Phoenix — one container, open the UI on port 6006, done.

"I want pretty LLM-specific UI and I'm fine running a stack"Langfuse — polished UI for LLM traces, free cloud tier, robust self-host.

"I want traces and the token/tool/cost metrics dashboard"Phoenix or SigNoz — both accept OTLP metrics as well as traces.

"I want all three signals — traces, metrics, AND logs — in Grafana, in one container"Grafana LGTMgrafana/otel-lgtm bundles Grafana + Tempo + Loki + Mimir + a collector. Pair with capture_logs: true and you get trace-id-correlated logs out of the box.

"I'm already on LangChain / LangSmith"LangSmith — free personal tier, zero extra infra.

"Standard distributed tracing stack, no LLM-specific UI needed"Jaeger or Grafana Tempo — both are traces-only; pair with Prometheus if you need metrics.

"My company already has an OTel collector / Honeycomb / New Relic / Datadog"Generic OTLP — point at its ingest endpoint and it just works.

"I want several of the above simultaneously"Multi-backend fan-out — same spans, parallel, non-blocking.

Signal support

Backends differ in which OTel signals they accept. The plugin auto-skips signals a backend can't take — you don't need to configure anything.

| Backend | Traces | Metrics | Logs |
|---|---|---|---|
| Phoenix | ✅ | ✅ | — |
| Langfuse | ✅ | — | — |
| LangSmith | ✅ (via HTTP Run API, not OTLP) | — | — |
| SigNoz | ✅ | ✅ | ✅ |
| Jaeger | ✅ | — | — |
| Grafana Tempo | ✅ | — | — |
| Grafana LGTM | ✅ | ✅ | ✅ |
| Uptrace | ✅ | ✅ | ✅ |
| OpenObserve | ✅ | ✅ | ✅ |
| Generic OTLP | ✅ | depends on collector | depends on collector |

If you care about token / tool / cost metrics on a traces-only backend, pair it with a Prometheus-compatible sink or fan out to Phoenix / SigNoz / LGTM alongside. See OTel logs for the logs pipeline.
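The auto-skip behaviour amounts to a capability lookup: intersect the signals the pipeline produces with the signals the backend accepts, and export only the overlap. A minimal Python model of this (names and structure are illustrative, not the plugin's actual code):

```python
# Signals each backend accepts, per the table above (subset shown).
CAPABILITIES = {
    "phoenix": {"traces", "metrics"},
    "langfuse": {"traces"},
    "signoz": {"traces", "metrics", "logs"},
    "jaeger": {"traces"},
    "tempo": {"traces"},
    "lgtm": {"traces", "metrics", "logs"},
}

def signals_to_export(backend, produced=frozenset({"traces", "metrics", "logs"})):
    """Return only the signals this backend can ingest; the rest are skipped."""
    return produced & CAPABILITIES.get(backend, set())
```

So sending all three signals to Jaeger silently exports traces only — no configuration needed, and no errors from the unsupported signals.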

Selecting a single backend

Single-backend selection is env-var-driven. First match wins:

  1. LANGSMITH_TRACING=true → LangSmith
  2. OTEL_LANGFUSE_PUBLIC_API_KEY + OTEL_LANGFUSE_SECRET_API_KEY set → Langfuse
  3. OTEL_SIGNOZ_ENDPOINT set → SigNoz
  4. OTEL_JAEGER_ENDPOINT set → Jaeger
  5. OTEL_TEMPO_ENDPOINT set → Tempo
  6. OTEL_PHOENIX_ENDPOINT set → Phoenix
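The first-match-wins resolution above can be sketched in Python (a hypothetical model for illustration, not the plugin's actual code):

```python
import os

# Ordered (predicate, backend) pairs mirroring the list above; first match wins.
RULES = [
    (lambda e: e.get("LANGSMITH_TRACING") == "true", "langsmith"),
    (lambda e: "OTEL_LANGFUSE_PUBLIC_API_KEY" in e
               and "OTEL_LANGFUSE_SECRET_API_KEY" in e, "langfuse"),
    (lambda e: "OTEL_SIGNOZ_ENDPOINT" in e, "signoz"),
    (lambda e: "OTEL_JAEGER_ENDPOINT" in e, "jaeger"),
    (lambda e: "OTEL_TEMPO_ENDPOINT" in e, "tempo"),
    (lambda e: "OTEL_PHOENIX_ENDPOINT" in e, "phoenix"),
]

def select_backend(env=None):
    """Return the first backend whose env-var predicate matches, else None."""
    env = os.environ if env is None else env
    for matches, backend in RULES:
        if matches(env):
            return backend
    return None
```

Note the ordering consequence: with both LANGSMITH_TRACING=true and OTEL_PHOENIX_ENDPOINT set, LangSmith wins, because it is checked first.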

Setting backends: in config.yaml overrides the env-var flow entirely — see Multi-backend fan-out.
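For orientation, a sketch of what that override might look like — the `backends:` key comes from the text above, but the list-of-names value shape is an assumption; the Multi-backend fan-out page has the authoritative schema:

```yaml
# Hypothetical config.yaml fragment — exact value shape may differ.
backends:
  - phoenix
  - signoz
```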

Planned

These are OTLP-compatible and should work today with the generic OTLP backend — first-class docs, docker-compose files, and smoke tests are on the roadmap:

File an issue if you've tried one of these and hit friction — we'll prioritise.