Replies: 2 comments 1 reply
-
Usually AWS/GCP stats already help with these high-level metrics (# requests + latencies), so I'm not sure whether building internal OSS observability logic that exposes metrics is worth the effort. Maybe migrating to OTel and allowing a custom collector to be added is a good approach here. No timeline on this though, as it would require changing the observability stack used to operate Langfuse. Can you link the Discord thread here?
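To make the OTel-plus-custom-collector idea concrete, here is a minimal sketch of what that instrumentation could look like. The package wiring, metric names, and route label are assumptions for illustration only, not Langfuse's actual setup; the exporter falls back to the standard `OTEL_EXPORTER_OTLP_ENDPOINT` env var, which is how an operator would point it at their own collector.

```ts
// Sketch only: emit request metrics via the OpenTelemetry metrics API and
// export them over OTLP to an operator-supplied collector.
import { metrics } from "@opentelemetry/api";
import { MeterProvider, PeriodicExportingMetricReader } from "@opentelemetry/sdk-metrics";
import { OTLPMetricExporter } from "@opentelemetry/exporter-metrics-otlp-http";

// With no explicit URL, the exporter honours OTEL_EXPORTER_OTLP_ENDPOINT,
// so operators can plug in any collector they run themselves.
const provider = new MeterProvider({
  readers: [
    new PeriodicExportingMetricReader({ exporter: new OTLPMetricExporter() }),
  ],
});
metrics.setGlobalMeterProvider(provider);

const meter = metrics.getMeter("langfuse-api");
const requestCounter = meter.createCounter("http.server.requests", {
  description: "Number of handled API requests",
});
const latencyHistogram = meter.createHistogram("http.server.duration", {
  description: "Request latency in milliseconds",
  unit: "ms",
});

// Example: record one request from a route handler (route name is hypothetical).
requestCounter.add(1, { route: "/api/public/ingestion", status: "200" });
latencyHistogram.record(42, { route: "/api/public/ingestion" });
```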
-
What I would like from a metrics endpoint is not something for debugging Langfuse (which is what OTEL would typically be for) but usage-related metrics, for example data about the traces themselves.
Overall, what we're looking for is to take the data collected by LangFuse and to "action" and review it in a metrics and observability system. I am not sure it adds much value for LangFuse to invent its own metrics/graphing/alerting stack, but simply exposing various data to Prometheus would allow medium to large self-hosted installations of LangFuse to capture, alert on, and action these metrics.

PS. For what the original author is asking for, I do recommend separately implementing OTEL and providing some env vars for configuring where OTEL sends traces/metrics. I personally believe the Prometheus metrics should be things that you can't/don't typically capture via OTEL, i.e. custom metrics. These can be different technologies for different dedicated purposes: Prometheus for precise usage metrics (i.e., data inside/about the traces), actioned via Prometheus alerts, and OTEL for debugging and optimizing LangFuse (i.e., data about how LangFuse is running).
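As a rough illustration of the Prometheus side of that split, here is a minimal sketch of a usage-metrics scrape endpoint using prom-client and Express. The metric names, labels, helper functions, and port are assumptions for the sake of the example, not anything LangFuse ships.

```ts
// Sketch only: expose usage-level metrics (about the data flowing through
// Langfuse, not about the process itself) on a Prometheus /metrics endpoint.
import express from "express";
import client from "prom-client";

const registry = new client.Registry();

const tracesIngested = new client.Counter({
  name: "langfuse_traces_ingested_total",
  help: "Traces ingested, labelled by project",
  labelNames: ["project"],
  registers: [registry],
});
const tokensObserved = new client.Counter({
  name: "langfuse_llm_tokens_total",
  help: "Token usage observed in ingested generations",
  labelNames: ["project", "model"],
  registers: [registry],
});

// Hypothetical hooks to call from the ingestion path.
export function recordTrace(project: string) {
  tracesIngested.inc({ project });
}
export function recordTokens(project: string, model: string, tokens: number) {
  tokensObserved.inc({ project, model }, tokens);
}

// Scrape endpoint for Prometheus.
const app = express();
app.get("/metrics", async (_req, res) => {
  res.set("Content-Type", registry.contentType);
  res.send(await registry.metrics());
});
app.listen(9464);
```

Prometheus alerting rules could then fire on these counters (e.g. a drop in ingestion rate), which is the "actioning usage data" half of the proposal above.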
-
Describe the feature or potential improvement
Expose an endpoint with technical metrics (response time, number of requests, …?) to monitor a Langfuse deployment.
Additional information
Discussed on Discord :)