Run source code deterministic profiling tests
The Profiler plugin can be used whenever you make changes to grafana-bridge to get a quick sanity check on the overall performance of the source code. It can help you answer questions such as how many times a particular function was called, or how much total time was spent within that function. The simplest way to create a runtime statistics dump for a particular function is to call the Profiler.run() method in the corresponding function unit test.
```python
profiler = Profiler()
opentsdb = OpenTsdbApi(logger, md_instance, '9999')

# Profile a single call of format_response and dump the raw statistics
resp = profiler.run(opentsdb.format_response, *(data, jreq))

assert resp is not None
assert os.path.exists(os.path.join(profiler.path, "profiling_format_response.prof"))
```
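Under the hood, a helper like Profiler.run() can be built on Python's standard cProfile module. The following is a minimal sketch of such a wrapper, not the project's actual implementation; the class name, the optional `path` argument, and the temp-directory default are assumptions for illustration:

```python
import cProfile
import os
import tempfile


class Profiler:
    """Minimal sketch of a cProfile-based profiler (hypothetical)."""

    def __init__(self, path=None):
        # Directory where the .prof dumps are written
        self.path = path or tempfile.mkdtemp()

    def run(self, func, *args, **kwargs):
        # Profile a single call and dump the raw statistics to
        # profiling_<function_name>.prof in self.path
        prof = cProfile.Profile()
        result = prof.runcall(func, *args, **kwargs)
        dump = os.path.join(self.path, f"profiling_{func.__name__}.prof")
        prof.dump_stats(dump)
        return result
```

The resulting .prof files contain raw profiler data and can later be inspected with the standard pstats module.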
It is also possible to create a runtime statistics dump in the production environment during operation. This can be achieved by setting the runtime_profiling variable to True in the analytics.py file during the startup of the grafana-bridge application. For all methods that have been decorated with the get_runtime_statistics decorator, the Python deterministic profiler is then invoked.
```python
@get_runtime_statistics(enabled=analytics.runtime_profiling)
def __parseHeader(self):
    item = self.json['header']
    return HeaderData(**item)
```
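A decorator of this shape can be sketched with cProfile as follows. This is an illustrative sketch only, not the project's actual implementation; the `enabled` and `path` parameters are assumptions, and the key design point is that with `enabled=False` the undecorated function is returned, so disabled profiling adds zero overhead:

```python
import cProfile
import functools
import os


def get_runtime_statistics(enabled=False, path="."):
    """Sketch of a profiling decorator (hypothetical signature)."""

    def decorator(func):
        if not enabled:
            # Profiling disabled: return the function unchanged,
            # so there is no runtime overhead
            return func

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            prof = cProfile.Profile()
            result = prof.runcall(func, *args, **kwargs)
            # Each call overwrites the previous dump for this method
            prof.dump_stats(
                os.path.join(path, f"profiling_{func.__name__}.prof"))
            return result

        return wrapper

    return decorator
```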
A profile dump for each method is written to a file named 'profiling_<method_name>.prof'. Each method execution overwrites the file with a new dump.
The statistics report over all captured dumps can be generated and accessed via the /profiling endpoint.
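Merging several captured dumps into one report is what the standard pstats module is for. A minimal sketch of how such a report could be assembled (the function name and directory layout are assumptions, not the bridge's actual endpoint code):

```python
import glob
import io
import os
import pstats


def profiling_report(dump_dir="."):
    # Merge all captured dumps and render a report sorted by
    # cumulative time, similar to what a /profiling endpoint
    # could return as plain text
    dumps = glob.glob(os.path.join(dump_dir, "profiling_*.prof"))
    if not dumps:
        return "no profiling dumps found"
    out = io.StringIO()
    stats = pstats.Stats(*dumps, stream=out)
    stats.sort_stats("cumulative").print_stats(20)
    return out.getvalue()
```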
However, this should only be done under supervision and for a short period of time. The Profiler plugin adds a noticeable runtime overhead, because the additional instrumentation code must register and track events. Especially in a production environment that is already suffering from poor performance, continuously collecting profiler statistics can put the application into a critical state.
Visit the IBM Storage Scale Knowledge Center for more information about the latest product updates.