Backends overview
hermes-otel speaks plain OTLP/HTTP, so any OTLP-compatible backend should work — but these are the ones that ship with first-class support, docker-compose files, and (where relevant) smoke-test coverage.
Supported today
| Backend | Signals | Deployment | Account / cost |
|---|---|---|---|
| Phoenix | Traces + metrics | Local (single container) · Arize AX cloud | OSS, no account · commercial cloud |
| Langfuse | Traces | Local (docker compose) · Cloud | OSS, no account · free tier + paid |
| LangSmith | Traces | Cloud only (self-host = enterprise) | Free personal tier · paid tiers |
| SigNoz | Traces + metrics + logs | Local (docker compose) · Cloud | OSS, no account · free tier + paid cloud |
| Jaeger | Traces | Local (single container) | OSS, no account needed |
| Grafana Tempo | Traces | Local (docker compose) · Grafana Cloud | OSS, no account · free tier + paid cloud |
| Grafana LGTM | Traces + metrics + logs | Local (single container) | OSS, no account |
| Uptrace | Traces + metrics + logs | Local (docker compose) · Self-hosted | OSS · premium license for HA features |
| OpenObserve | Traces + metrics + logs | Local (single container) · Self-hosted HA | OSS, no account |
| Generic OTLP | Depends on collector | Anywhere | — |
Quick picks
- "I just want to see a trace, right now, on my laptop" → Phoenix — one container, open the UI on port 6006, done.
- "I want a pretty LLM-specific UI and I'm fine running a stack" → Langfuse — polished UI for LLM traces, free cloud tier, robust self-host.
- "I want traces and the token/tool/cost metrics dashboard" → Phoenix or SigNoz — both accept OTLP metrics as well as traces.
- "I want all three signals — traces, metrics, AND logs — in Grafana, in one container" → Grafana LGTM — `grafana/otel-lgtm` bundles Grafana + Tempo + Loki + Mimir + a collector. Pair it with `capture_logs: true` and you get trace-id-correlated logs out of the box.
- "I'm already on LangChain / LangSmith" → LangSmith — free personal tier, zero extra infra.
- "Standard distributed tracing stack, no LLM-specific UI needed" → Jaeger or Grafana Tempo — both are traces-only; pair with Prometheus if you need metrics.
- "My company already has an OTel collector / Honeycomb / New Relic / Datadog" → Generic OTLP — point at its ingest endpoint and it just works.
- "I want several of the above simultaneously" → Multi-backend fan-out — same spans, sent in parallel, non-blocking.
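For the laptop case, the Phoenix quick start is a single container. The image name and port mapping below follow the upstream Phoenix docs (UI and OTLP/HTTP on 6006, OTLP/gRPC on 4317) — verify them against the Phoenix version you pull:

```shell
# Run Phoenix locally: UI at http://localhost:6006,
# OTLP/HTTP ingest on 6006, OTLP/gRPC ingest on 4317.
docker run -p 6006:6006 -p 4317:4317 arizephoenix/phoenix:latest
```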
Signal support
Backends differ in which OTel signals they accept. The plugin auto-skips signals a backend can't take — you don't need to configure anything.
| Backend | Traces | Metrics | Logs |
|---|---|---|---|
| Phoenix | ✅ | ✅ | ❌ |
| Langfuse | ✅ | ❌ | ❌ |
| LangSmith | ✅ (via HTTP Run API, not OTLP) | ❌ | ❌ |
| SigNoz | ✅ | ✅ | ✅ |
| Jaeger | ✅ | ❌ | ❌ |
| Grafana Tempo | ✅ | ❌ | ❌ |
| Grafana LGTM | ✅ | ✅ | ✅ |
| Uptrace | ✅ | ✅ | ✅ |
| OpenObserve | ✅ | ✅ | ✅ |
| Generic OTLP | ✅ | depends on collector | depends on collector |
If you care about token / tool / cost metrics on a traces-only backend, pair it with a Prometheus-compatible sink or fan out to Phoenix / SigNoz / LGTM alongside. See OTel logs for the logs pipeline.
Selecting a single backend
Single-backend selection is env-var-driven. First match wins:
- `LANGSMITH_TRACING=true` → LangSmith
- `OTEL_LANGFUSE_PUBLIC_API_KEY` + `OTEL_LANGFUSE_SECRET_API_KEY` set → Langfuse
- `OTEL_SIGNOZ_ENDPOINT` set → SigNoz
- `OTEL_JAEGER_ENDPOINT` set → Jaeger
- `OTEL_TEMPO_ENDPOINT` set → Tempo
- `OTEL_PHOENIX_ENDPOINT` set → Phoenix
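The first-match-wins precedence can be sketched in shell. This is illustrative only — the plugin implements the selection internally, and `select_backend` is a made-up name, not part of hermes-otel:

```shell
#!/bin/sh
# Illustrative sketch of the env-var precedence: the first condition
# that matches decides the backend, regardless of what else is set.
select_backend() {
  if [ "${LANGSMITH_TRACING:-}" = "true" ]; then
    echo langsmith
  elif [ -n "${OTEL_LANGFUSE_PUBLIC_API_KEY:-}" ] && [ -n "${OTEL_LANGFUSE_SECRET_API_KEY:-}" ]; then
    echo langfuse
  elif [ -n "${OTEL_SIGNOZ_ENDPOINT:-}" ]; then
    echo signoz
  elif [ -n "${OTEL_JAEGER_ENDPOINT:-}" ]; then
    echo jaeger
  elif [ -n "${OTEL_TEMPO_ENDPOINT:-}" ]; then
    echo tempo
  elif [ -n "${OTEL_PHOENIX_ENDPOINT:-}" ]; then
    echo phoenix
  else
    echo none
  fi
}
```

Note the consequence: if both `OTEL_SIGNOZ_ENDPOINT` and `OTEL_PHOENIX_ENDPOINT` are set, SigNoz wins because it is checked first.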
Setting `backends:` in `config.yaml` overrides the env-var flow entirely — see Multi-backend fan-out.
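As a sketch, a fan-out config might look like the fragment below. Only the `backends:` key and `capture_logs: true` appear in this page; the list values and layout are assumptions — check the Multi-backend fan-out docs for the exact schema your version expects:

```yaml
# config.yaml — hypothetical shape, not a verified schema.
backends:
  - phoenix
  - signoz
capture_logs: true  # logs are delivered only to backends that accept them
```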
Planned
These are OTLP-compatible and should work today with the generic OTLP backend — first-class docs, docker-compose files, and smoke tests are on the roadmap:
- Honeycomb — cloud, generous free tier
- New Relic — cloud, 100 GB/mo free tier
- Elastic APM — self-host or Elastic Cloud
- Datadog — cloud, trial only
File an issue if you've tried one of these and hit friction — we'll prioritise.