enableDependencyTracking: false in host.json is ignored, leading to excessive debug logs in Application Insights #10770
Comments
@gunzip this is by design. There are two things here:
@RohitRanjanMS - do you know the otel equivalent for suppressing certain trace sources?
@jviau thanks for the answer! Let me add some observations. Regarding your point:

In the official documentation on Using OpenTelemetry with Azure Functions, the configuration still references the applicationInsights block (a sketch of that block is shown at the end of this comment). Proper sampling settings (e.g., to track host incoming HTTP requests) are essential. If the documentation is incorrect, how should we configure sampling without relying on the applicationInsights block?

The traces I've previously shared originate from the host logs. The .NET runtime uses the Application Insights SDK and initializes OpenTelemetry here: Additionally, there is a TelemetryProcessor intended to filter out debug traces: However, despite this, the debug traces are still being logged, and they come from the dotnet runtime:

Could you please clarify whether there is an alternative approach to ensure these debug traces are effectively filtered out? Thanks again for your assistance!
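For reference, a minimal sketch of the kind of `applicationInsights` sampling block the documentation refers to in `host.json`; the values shown here are illustrative assumptions, not this project's actual settings:

```json
{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "maxTelemetryItemsPerSecond": 20,
        "excludedTypes": "Request"
      }
    }
  }
}
```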
That is a good point, we will need to remove all of that from the sample. @RohitRanjanMS - can you confirm my statement above? Also, is there documentation on how to configure otel sampling for the host? Or is that not available yet?
Hi @gunzip, thanks for reaching out. I've submitted a PR to update the documentation. Additionally, I have another PR to remove all blob-related dependencies, as they've been quite frustrating. I'm hoping opentelemetry-configuration will handle the configuration effectively; I'm trying to avoid creating a solution specific to Functions.
Hi @RohitRanjanMS, I'm impressed by how quickly this issue (which is a showstopper for us) has been addressed! Do you have an estimated timeline for when these changes will be available in the Azure Functions runtime? Also, I've noticed additional debug logs (specifically for HTTP requests) that probably should be filtered as well; I don't know whether they come from the worker this time:

Regarding the configuration of sampling for the host (i.e., legitimate HTTP requests from triggers), how can we align it with the sampling settings we define programmatically using the SDK? Thanks again for your help!
Hi @gunzip, you can check the SDK Version property in the logs. We set two different values for the host and the worker, and these seem to be from the host process. Regarding sampling: by defining it programmatically using the SDK, do you mean configuring sampling on the Node worker? The trace context is propagated from the host process to the worker, so you can use the sampling decision from the incoming request (traceparent and trace flags) to ensure consistency between the host and worker processes.
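A minimal sketch of what honoring the propagated sampling decision could look like in the Node worker, assuming the standard OpenTelemetry JS SDK packages (the sampling ratio is purely illustrative):

```typescript
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import {
  ParentBasedSampler,
  TraceIdRatioBasedSampler,
} from "@opentelemetry/sdk-trace-base";

// Child spans follow the sampled flag carried in the incoming traceparent
// (propagated by the Functions host); only root spans fall back to the ratio.
const provider = new NodeTracerProvider({
  sampler: new ParentBasedSampler({
    root: new TraceIdRatioBasedSampler(0.1), // illustrative 10% fallback rate
  }),
});

provider.register();
```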
My PR is merged. I don't have a date, but you can tentatively expect this to be available sometime in March.
Thanks again, @RohitRanjanMS. I confirm that these traces originate from the host (SDK version:

Regarding sampling: since we need to trace HTTP requests within the host and the
You can use the environment variables: https://opentelemetry.io/docs/specs/otel/configuration/sdk-environment-variables/
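For illustration, a hedged example of the standard OTel sampler environment variables set as Function App settings, shown here in a local.settings.json sketch; the sampler name and ratio are assumptions, not values taken from this thread:

```json
{
  "IsEncrypted": false,
  "Values": {
    "OTEL_TRACES_SAMPLER": "parentbased_traceidratio",
    "OTEL_TRACES_SAMPLER_ARG": "0.1"
  }
}
```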
I am using Azure Functions with the Node.js runtime. I would like to enable end-to-end tracing of requests using OpenTelemetry. I have enabled the default integration with Application Insights by setting the `APPLICATIONINSIGHTS_CONNECTION_STRING` variable. Additionally, I have enabled Node.js process instrumentation in the worker (a sketch of this kind of setup is shown below).

So far, everything works fine, and calls are successfully traced end-to-end. The host traces incoming HTTP requests (which cannot be intercepted in the worker), while the Node.js process traces everything else (outgoing HTTP calls, connections to Redis, CosmosDB, etc.).
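A minimal sketch of this kind of worker-side setup, assuming the Azure Monitor OpenTelemetry distro for Node.js (`@azure/monitor-opentelemetry`); this is illustrative only and not necessarily the exact code used in this project:

```typescript
// Illustrative worker-side OpenTelemetry setup (assumption: @azure/monitor-opentelemetry).
import { useAzureMonitor } from "@azure/monitor-opentelemetry";

// Picks up APPLICATIONINSIGHTS_CONNECTION_STRING from the environment and wires up
// OpenTelemetry with auto-instrumentation for HTTP and other supported client libraries.
useAzureMonitor();
```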
Investigative information

Repro steps

Enable the default Application Insights integration by setting the `APPLICATIONINSIGHTS_CONNECTION_STRING` app setting.

Expected behavior
I would expect that setting the `enableDependencyTracking` option to `false` in the `host.json` file would prevent the host from logging external dependencies (such as calls to Azure Blob Storage for `azure-webjobs-hosts`), reducing the noise in traces and optimizing costs.

Actual behavior
Despite setting the following configuration in the `host.json` file:
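(A minimal sketch of this configuration, assuming the standard `applicationInsights` section of `host.json`; the original block may have included additional settings:)

```json
{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "enableDependencyTracking": false
    }
  }
}
```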
The host continues to log a large amount of debug information (screenshot attached), which we want to exclude. These logs are unnecessary, make trace analysis difficult, and increase costs. The `enableDependencyTracking` setting does not seem to have any effect and appears to be ignored.

Known workarounds
Suppressing all host logs (e.g., via log level filtering in `host.json`, as sketched below). However, this is undesirable, as it would also suppress HTTP request logs, which are valuable.
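A hedged sketch of that kind of blanket suppression via `host.json` log levels; the category name used here is an assumption about how the host categorizes its logs, and as noted above it would also drop the useful request logs:

```json
{
  "version": "2.0",
  "logging": {
    "logLevel": {
      "Host": "None"
    }
  }
}
```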
Additional concerns
Even if it were possible to suppress dependency logs, I am concerned that it might also remove useful dependency traces, such as those related to output bindings managed by the host.