Currently, the exporters do not appear to record any metrics of their own, for example successful exports, number of time series exported, etc. It would help in debugging ingestion issues if the exporter also had its own metrics. For reference, here are some metrics the Java SDK collects for its exporters (/cc @jack-berg):
https://github.com/open-telemetry/opentelemetry-java/blob/main/exporters/common/src/main/java/io/opentelemetry/exporter/internal/ExporterMetrics.java#L83
Unfortunately I can't find anything similar to refer to in the Go SDK's exporters, but it may still be worth adding metrics here for GCP users. The easiest way to get some metrics, which is better than none, may be to use the opentelemetry-go-contrib gRPC instrumentation, though that would still miss useful things like the number of time series/samples exported.
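To illustrate, roughly something like this (a minimal sketch, assuming the GCP metric exporter's `WithMonitoringClientOptions` option and a recent otelgrpc that provides `NewClientHandler`; option names may differ between versions):

```go
package main

import (
	"log"

	mexporter "github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric"
	"go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
	"go.opentelemetry.io/otel"
	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
	"google.golang.org/api/option"
	"google.golang.org/grpc"
)

func main() {
	// Attach the otelgrpc stats handler to the Monitoring client so the
	// exporter's own RPCs (e.g. CreateTimeSeries) produce rpc.client.*
	// metrics such as duration and request/response size. This still says
	// nothing about the number of time series/samples in each request.
	exp, err := mexporter.New(
		mexporter.WithMonitoringClientOptions(
			option.WithGRPCDialOption(
				grpc.WithStatsHandler(otelgrpc.NewClientHandler()),
			),
		),
	)
	if err != nil {
		log.Fatalf("creating GCP metric exporter: %v", err)
	}

	mp := sdkmetric.NewMeterProvider(
		sdkmetric.WithReader(sdkmetric.NewPeriodicReader(exp)),
	)
	otel.SetMeterProvider(mp)
}
```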
For context, we had an issue where, looking at "Global - Metric samples ingested" for a project, rpc.server.duration from a Go server that should have ingested ~n/s (60s export interval) climbed steadily, reached 600/s, and eventually fell back to the expected rate. It's unclear what happened and we'll need to watch whether it recurs, but it's an example of something that would be easier to debug if exporter metrics were available as well (notably with resource labels to narrow down where the issue originates).
I would love to see this added to the Go SDK itself. Something similar to open-telemetry/semantic-conventions#184, but for metrics. That, plus gRPC/HTTP transport metrics, would give a complete picture of export that is consistent across exporters.
I'm a little hesitant to add metrics to our exporters with (hopefully) new semantic conventions on the way. I wonder if we could write an "exporter wrapper" for metric/trace exporters that records the metrics you are looking for?
Not adding metrics directly but adding a wrapper: I'm not entirely clear on that. Do you mean callback hooks that callers could use to record metrics in a bespoke way until the conventions land? That would definitely be reasonable for me.
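For what it's worth, here is a rough sketch of what I imagine such a wrapper could look like for metric exporters (all names here are hypothetical, not an existing API): a decorator that embeds the wrapped `sdkmetric.Exporter`, so Temporality/Aggregation/ForceFlush/Shutdown delegate automatically, and only overrides Export to count attempts, failures, and data points, similar to the Java ExporterMetrics linked above.

```go
package exporterwrap // hypothetical package, for illustration only

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/metric"
	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
	"go.opentelemetry.io/otel/sdk/metric/metricdata"
)

// Wrap decorates an SDK metric exporter with self-observability counters.
// Instrument names are made up; real ones would come from the semantic conventions.
func Wrap(wrapped sdkmetric.Exporter) sdkmetric.Exporter {
	meter := otel.Meter("exporter-self-metrics") // global delegate; picks up the real provider once set
	exports, _ := meter.Int64Counter("exporter.exports",
		metric.WithDescription("Export attempts, by outcome."))
	points, _ := meter.Int64Counter("exporter.data_points",
		metric.WithDescription("Metric data points handed to the exporter."))
	return &countingExporter{Exporter: wrapped, exports: exports, points: points}
}

type countingExporter struct {
	sdkmetric.Exporter // embedded: Temporality, Aggregation, ForceFlush, Shutdown pass through
	exports metric.Int64Counter
	points  metric.Int64Counter
}

func (e *countingExporter) Export(ctx context.Context, rm *metricdata.ResourceMetrics) error {
	e.points.Add(ctx, countDataPoints(rm))
	err := e.Exporter.Export(ctx, rm)
	outcome := "success"
	if err != nil {
		outcome = "failure"
	}
	e.exports.Add(ctx, 1, metric.WithAttributes(attribute.String("outcome", outcome)))
	return err
}

// countDataPoints tallies data points across the common aggregation types.
func countDataPoints(rm *metricdata.ResourceMetrics) int64 {
	var n int64
	for _, sm := range rm.ScopeMetrics {
		for _, m := range sm.Metrics {
			switch data := m.Data.(type) {
			case metricdata.Gauge[int64]:
				n += int64(len(data.DataPoints))
			case metricdata.Gauge[float64]:
				n += int64(len(data.DataPoints))
			case metricdata.Sum[int64]:
				n += int64(len(data.DataPoints))
			case metricdata.Sum[float64]:
				n += int64(len(data.DataPoints))
			case metricdata.Histogram[int64]:
				n += int64(len(data.DataPoints))
			case metricdata.Histogram[float64]:
				n += int64(len(data.DataPoints))
			}
		}
	}
	return n
}
```

Callers would then pass `Wrap(exp)` to the periodic reader instead of `exp` directly. Callback hooks could work too, but a wrapper like this keeps the exporter code itself unchanged until the conventions settle.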