I've found OTel to still have rough edges, and it's not yet the one-stop shop for telemetry we want it to be at $work. In particular, there's no good (Sentry-style) exception capturing yet. They also recently changed a lot of metric names (for good reason, it seems), which breaks most dashboards you find on the internet.
I'd been warned that OTel isn't mature yet, and I find that to still be true, but it seems the maintainers are trying to do something about it these days.
Like… has anyone done a Jepsen-like stress test on rsyslogd and shared the results? I’ve half-assedly looked before and not been able to find anything.
> yes it doesn’t solve the UI problem for those, but it does solve collecting your logs
I work for Netdata, and over the last couple of months we've developed an external Netdata plugin that can ingest/index OTel logs [1]. The current implementation stores logs in systemd-compatible journal files, and our visualization is effectively the same one you'd get when querying systemd journal logs [2].

> Like… has anyone done a Jepsen-like stress test on rsyslogd and shared the results? I’ve half-assedly looked before and not been able to find anything.
I've not used rsyslogd specifically, but I don't see how you'd have any issues with the log volume you described.
[1] https://github.com/netdata/netdata/tree/master/src/crates/ne...
[2] https://learn.netdata.cloud/docs/logs/systemd-journal-logs/s...
https://grafana.com/docs/pyroscope/latest/configure-client/o...
The OTel profiling standard is more valuable as a client contract than a backend choice. Instrument once with the OTel SDK, then route to Pyroscope, Grafana Cloud, Datadog, or your own Tempo instance, without changing application code. That's the actual pitch.
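For what it's worth, that routing story is mostly deploy-time configuration rather than code. A minimal sketch, assuming a hypothetical instrumented binary and an in-cluster Pyroscope endpoint (both names made up); the `OTEL_EXPORTER_OTLP_*` environment variables themselves are the standard ones shared across OTel SDKs:

```shell
# Hypothetical endpoints -- the point is that switching backends is a
# config change, not a code change. Only the endpoint differs between
# Pyroscope, Grafana Cloud, Datadog, or a self-hosted collector.
export OTEL_EXPORTER_OTLP_ENDPOINT="http://pyroscope.internal:4317"
export OTEL_EXPORTER_OTLP_PROTOCOL="grpc"
exec ./my-instrumented-service  # hypothetical binary, unchanged between backends
```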
The current friction isn't the standard itself. It's language coverage gaps. JVM continuous profiling via eBPF is solid and production-ready. Node.js still falls back to the V8 sampling profiler, which adds 2-5% overhead compared to sub-1% for kernel-level eBPF approaches. That's the gap worth watching in the alpha.
It surprises me that anything designed by the OTel community could ever meet 'low-overhead' expectations.
Some of the OGs from that team later founded Zymtrace [2], and they're now doing the same thing for profiling what happens inside GPUs!
[1] https://github.com/open-telemetry/opentelemetry-ebpf-profile...
[2] https://zymtrace.com/article/zero-friction-gpu-profiler/
This is not an accurate summary of what they developed.
Using .eh_frame to unwind stacks without frame pointers is not novel; that is exactly what it's for, and perf has had an implementation doing it since ~2010. The problem is that kernel support for this was repeatedly rejected, so the kernel samples kilobytes of stack and then userspace does the unwinding.
What they developed is an implementation of unwinding from an eBPF program running in the kernel using data from eh_frame.
Their invention is pushing the .eh_frame walking down into kernel space, so you don't need to ship large chunks of stack memory to userspace for post-processing; the eBPF program is what executes that pushed-down walk.
The GitHub page mentions a patent on this too: https://patents.google.com/patent/US11604718B1/en
Please let us know if you find any issues with what we are shipping right now.
This layout allows us to quickly merge hundreds of millions of samples into a single profile. The only practical limit is protobuf's 2GB message size cap.