Our ever-developing in-house observability toolkit, alongside our engineers' watchful eyes, will ensure that your systems are correctly and completely monitored. Nothing escapes our eyes!
Metrics, distributed traces, profiles, logs, dashboards, and alerts. Always know how well your applications are running in the cloud, in containers, on virtual machines, or on bare metal. Canonical offers security and support for the best-in-class open source monitoring tools and dashboards, and can also run these tools for you on private and public clouds. With the operational experience of industry experts encoded into the stack, you can rest assured that when something happens, you’ll know.
Contact us
Learn more about observability ›

Future-proof your open source observability with Ubuntu Pro: a single, per-node subscription that includes the most comprehensive open source security and support, with a variety of best-in-class monitoring tools and dashboards.
Your monitoring tools and dashboards, run reliably and efficiently by Canonical, wherever your applications are. Keep your software observable, without the added effort.
Uptime for monitoring tools is essential for your business: without reliable monitoring tools, you cannot know what else is not working right. 81% of CIOs are planning to increase their observability budgets; you could be using yours more efficiently.
Focus on driving your unique business, and we will run the best-in-class open source monitoring tools you need for the observability of your applications.
One practitioner out of five thinks that observability is a key challenge for cloud native environments. One in two practitioners says that lack of manpower is the worst inhibitor to their cloud journey. With today’s many components of the cloud native ecosystem, engineers spend a lot of time and effort setting up and maintaining their monitoring tools. Let’s change that.
Your monitoring systems must be up and running to tell you what is not. The availability of your business applications is effectively capped by the availability of the systems monitoring them. Get the best OSS monitoring tools run with spectacular uptime for you, by Canonical.
Guaranteed 99.9% production-grade SLAs and best-practice solution design. Have your monitoring tools managed by technology-specific experts.
Active management and break/fix response by Canonical engineers
Fault-tolerant and ready for real-world challenges. Your monitoring tools will be there for you when you need them.
Industry-wide security best practices and compliance certifications to keep your cloud secure at all times.
Transparent and predictable pricing to suit your needs and help you grow your cloud with confidence.
Build a flexible and agnostic cloud with no lock-in and leverage a simplified open-source approach. We give you the keys when you are ready to drive.
All Canonical managed services come with Extended Security Maintenance: future-proofed open source observability run at scale, reliably.
We manage your monitoring tools anywhere for you. Whether you are running your workloads across multiple clouds or might need to leverage a multi-cloud environment in the future, we can easily scale our managed service to adapt to your IT strategy.
| All prices are annual | Per monitoring backend VM | Per monitoring backend host | Per monitored VM / host |
| --- | --- | --- | --- |
| Managed monitoring tools on public clouds, VMware, OpenStack or Kubernetes | $3,099 | $6,861* | $0 |
| Managed monitoring tools on bare metal | N/A | $9,470 | $0 |

*All prices are dependent on current cloud architecture and requirements.
Our Managed Monitoring portfolio is constantly evolving and expanding. Contact us about your unique requirements ›

Your software system is observable when the telemetry you collect, and the way you process it, enable you to know and investigate in a timely fashion how your software system is performing, what issues are occurring and what their impact is.
Rather, observability through monitoring
The term “monitoring” is sometimes used narrowly to denote the collection and processing of metrics, especially time series. Instead, we define monitoring more generally as the act and practice of collecting and processing useful telemetry to gain insight into the behavior of a system.
In the current practice of software, and especially in distributed systems and cloud-native software, monitoring is the commonplace means of achieving observability. Tools like Prometheus, OpenTelemetry, Jaeger, Elasticsearch or Graylog document the relevant workings of software systems by collecting and processing various telemetry types, such as metrics, distributed traces and logs.
Read our “Observability vs. monitoring debate: An irreverent view” blog post for an in-depth look at the debate around “observability vs. monitoring”.
In today’s open source monitoring practices, several types of telemetry are collected:
Metrics are sets of related measurements of attributes of systems. In the scope of monitoring, metrics are usually collected as time series data with a numerical value (that is, numbers associated with timestamps), like how many requests have been served with a certain status code over the last minute. Besides numerical time series, boolean time series are also commonplace nowadays, like “is service X up or not”, as well as time series with string-like values representing, for example, “the value of configuration X at time Y”.
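As a minimal sketch (the metric name, sample values and scrape interval below are invented, not taken from any specific monitoring tool), a numerical time series can be modeled as timestamped samples of a cumulative counter, from which a per-minute request rate is derived:

```python
from dataclasses import dataclass

@dataclass
class Sample:
    timestamp: float  # Unix time, in seconds
    value: float      # cumulative counter value at that instant

# Hypothetical samples for a counter like http_requests_total{status="200"},
# scraped every 15 seconds.
series = [
    Sample(1700000000.0, 1200.0),
    Sample(1700000015.0, 1230.0),
    Sample(1700000030.0, 1275.0),
    Sample(1700000045.0, 1305.0),
    Sample(1700000060.0, 1320.0),
]

def rate_per_minute(samples):
    """Increase of a cumulative counter over the window, normalized to one minute."""
    first, last = samples[0], samples[-1]
    elapsed_s = last.timestamp - first.timestamp
    return (last.value - first.value) / elapsed_s * 60.0

print(rate_per_minute(series))  # 120.0 requests per minute over this window
```

Real systems also have to handle counter resets and irregular scrape intervals, which this sketch deliberately ignores.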
Logs, structured (usually as JSON objects) or unstructured (plain text following some pattern), are time-stamped entries that document the occurrence of an event like “the following error was caught” or “that payment process has been completed.”
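A structured log entry can be sketched as one JSON object per line; the field names here are illustrative, not a standard schema:

```python
import json
import time

def emit_structured_log(level, message, **fields):
    """Emit one structured log entry as a single JSON object per line."""
    entry = {"timestamp": time.time(), "level": level, "message": message, **fields}
    print(json.dumps(entry))
    return entry

# A hypothetical payment event, with machine-readable fields alongside the message.
entry = emit_structured_log("info", "payment process completed",
                            payment_id="pay_123", amount_cents=4200)
```

The advantage over unstructured text is that fields like `payment_id` can be indexed and queried directly, instead of being recovered with pattern matching.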
Distributed traces document what work is done by various systems to process specific requests, like serving a Web page or executing a batch job. Nowadays, most distributed traces are modeled as a tree of spans. Spans are similar to structured logs, each describing some processing like “Serving request XYZ” or “Querying database 123”; they have a duration that reports how long it took to perform the processing, and hierarchical relations to other spans in the form of child-parent relations. Usually, multiple systems contribute spans to one distributed trace, ranging from the API servers to databases and messaging queues and (micro)services in between.
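The tree-of-spans model above can be sketched with a minimal span type; the span names and identifiers are invented for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Span:
    """Minimal span model: a name, timing, and a parent link forming a tree."""
    span_id: str
    name: str
    start: float           # seconds since epoch
    duration_ms: float
    parent_id: Optional[str] = None  # None marks the root span of the trace

# One hypothetical distributed trace: an API server span with two children
# contributed by downstream systems.
trace = [
    Span("a1", "Serving request XYZ", 1700000000.000, 42.0),
    Span("b2", "Querying database 123", 1700000000.005, 18.0, parent_id="a1"),
    Span("c3", "Publishing event to queue", 1700000000.030, 6.0, parent_id="a1"),
]

def children_of(trace, span_id):
    """Resolve the child-parent relations into the children of one span."""
    return [s for s in trace if s.parent_id == span_id]

print([s.name for s in children_of(trace, "a1")])
```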
Production profiling identifies hot spots of resource consumption by continuously sampling, with limited performance overhead, which parts of your application consume CPU cycles and memory and, depending on the runtime, how much time is spent waiting, or in asynchronous waits. Production profiling is usually very useful to get you started when troubleshooting latency and memory usage spikes in the absence of more detailed telemetry. The “production” qualifier refers to the tradeoffs in the design of the profiler, which is optimized for minimal overhead, as opposed to profilers used in development, which tend to optimize for precision.
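The sampling idea can be illustrated with a toy profiler: a background thread periodically records which function the main thread is executing, paying a small, bounded cost per sample rather than instrumenting every call. This is only a sketch; real production profilers work at much lower levels and far more efficiently.

```python
import collections
import sys
import threading
import time

def sample_main_thread(duration_s=0.3, interval_s=0.005):
    """Toy sampling profiler: count which function the main thread is in."""
    main_id = threading.main_thread().ident
    counts = collections.Counter()
    deadline = time.monotonic() + duration_s

    def sampler():
        while time.monotonic() < deadline:
            frame = sys._current_frames().get(main_id)
            if frame is not None:
                counts[frame.f_code.co_name] += 1  # attribute the sample
            time.sleep(interval_s)

    thread = threading.Thread(target=sampler)
    thread.start()
    return thread, counts, deadline

def busy():
    # CPU-bound work for the sampler to observe.
    x = 0
    for i in range(100_000):
        x += i * i
    return x

thread, counts, deadline = sample_main_thread()
while time.monotonic() < deadline:   # keep the main thread busy while sampling
    busy()
thread.join()
print(counts.most_common(3))  # 'busy' should dominate the samples
```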
Real user monitoring is about collecting telemetry about user sessions occurring via interfaces like web pages, mobile applications and sometimes IoT devices. Real user monitoring is closely related to distributed tracing: a user session is correlated with one or more distributed traces, each describing, for example, how one asset or XHR request is served to the user interface.
Synthetic monitoring checks continuously test systems, mostly production ones, to ensure that particular APIs or even entire user flows in UIs, like a checkout process, work as intended; in other words, synthetic monitoring checks are smoke tests for APIs and UIs.
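A synthetic check of the kind described above can be sketched as follows; the endpoint, thresholds and the `fetch` function are hypothetical stand-ins (stubbed so the sketch runs without network access), not any monitoring product's API:

```python
from dataclasses import dataclass

@dataclass
class CheckResult:
    ok: bool
    detail: str

def check_endpoint(fetch, url, expect_status=200, max_latency_ms=500.0):
    """Run one smoke test against an API endpoint: right status, fast enough."""
    status, latency_ms = fetch(url)
    if status != expect_status:
        return CheckResult(False, f"unexpected status {status}")
    if latency_ms > max_latency_ms:
        return CheckResult(False, f"too slow: {latency_ms:.0f} ms")
    return CheckResult(True, "healthy")

# Stubbed transport standing in for a real HTTP client.
def fake_fetch(url):
    return 200, 120.0

result = check_endpoint(fake_fetch, "https://example.com/api/health")
print(result)  # CheckResult(ok=True, detail='healthy')
```

A real deployment would run such checks on a schedule, from multiple locations, and alert when they fail.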
Making the most out of your metrics, logs, distributed traces and more

The various telemetry types provide you with more insight when they are correlated with one another.
For example, Prometheus exemplars allow you to tag metrics with some sample distributed trace identifiers, so that you can drill down into some of the traces that affected specific metrics.
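For illustration, in the OpenMetrics text format an exemplar is appended to a sample after a `#`; the metric name, label values and trace identifier below are invented:

```
http_request_duration_seconds_bucket{le="0.25"} 1243 # {trace_id="a1b2c3d4e5f6"} 0.218 1700000060.0
```

Here the bucket counted 1243 observations, and the exemplar points at one concrete request (with its trace ID, observed value and timestamp) that fell into that bucket, giving you a trace to drill into.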
Another commonplace correlation of telemetry is adding the trace context identifier to the metadata of logs, so that you can go from viewing a single log to seeing the entire execution flow across all components and services involved in an event.
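A sketch of that correlation, with invented log records (the `trace_id` field name is illustrative, though it mirrors common practice such as the W3C Trace Context trace ID):

```python
import json

# Hypothetical structured logs from different services, each carrying the
# identifier of the trace it belongs to.
logs = [
    {"ts": 1, "service": "api",      "message": "request received",  "trace_id": "a1b2"},
    {"ts": 2, "service": "payments", "message": "charge authorized", "trace_id": "a1b2"},
    {"ts": 3, "service": "api",      "message": "unrelated request", "trace_id": "ff99"},
]

def logs_for_trace(logs, trace_id):
    """From one log line's trace_id, recover the whole execution flow."""
    return [record for record in logs if record["trace_id"] == trace_id]

for record in logs_for_trace(logs, "a1b2"):
    print(json.dumps(record))
```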
An often underrated aspect of telemetry is its context; that is, which system is the telemetry coming from, where is that system running and, by extension, what is the relation (or lack thereof) between this piece of telemetry and others. You could think of this as correlating telemetry with the topology of the monitored systems.
Telemetry context is increasingly encoded via tagging: adding consistent, curated tags to time series metrics, logs, distributed traces and so on, so that one can filter all the available telemetry for that generated by a specific set of machines, processes or other system components in a specific time interval.
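Filtering mixed telemetry by tags can be sketched like this; the tag keys and values are invented for illustration:

```python
# Mixed telemetry of different types, all carrying the same curated tags.
telemetry = [
    {"type": "metric", "name": "cpu_usage",  "cluster": "prod-eu", "host": "db-1"},
    {"type": "log",    "message": "slow query", "cluster": "prod-eu", "host": "db-1"},
    {"type": "metric", "name": "cpu_usage",  "cluster": "staging", "host": "db-9"},
]

def filter_by_tags(items, **tags):
    """Keep only telemetry whose tags all match, regardless of its type."""
    return [item for item in items
            if all(item.get(key) == value for key, value in tags.items())]

prod_eu = filter_by_tags(telemetry, cluster="prod-eu")
print([item["type"] for item in prod_eu])  # ['metric', 'log']
```

Because the tags are consistent across telemetry types, one filter isolates everything generated by a given cluster, host or component.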
Another important function of telemetry context is to avoid confusion or outright mistakes when processing similar telemetry from unrelated deployments. For example, most alerting rules one may want to define for production databases should focus on specific clusters in specific regions, rather than “alert me when the 95th percentile of CPU usage across all my MySQL instances is above a threshold”.
The forgotten twin to observability
The concept of observability originates in the field of modern control system theory, and its formulation has withstood the test of time well. Nor was it a standalone concept: observability had a twin, called controllability, which can be roughly defined as “the property of a system to regulate itself and related systems to reliably produce the correct outputs given the provided inputs.”
Nowadays, controllability is not a term many software engineers use daily, but its essence is embodied, for example, by operators implemented with Juju or other frameworks, which steer the software they operate and the infrastructure underneath to provide seamless configuration management and achieve scalability, reliability and graceful degradation.
Canonical can support your open source observability stack with security fixes and LTS support, as well as run your best-of-breed open source monitoring tools reliably and at scale.
Included in Charmed OpenStack is the observability stack, providing alerting, capacity planning and continuous service checks to ensure your cloud is healthy and stays that way.
A member of our team will be in touch within one working day.