Releases: EnterpriseDB/kubectl-cnp
v1.12.0
Release date: 11 January 2022
Features:
- Add Kubernetes 1.23 to the list of supported Kubernetes distributions, and remove end-to-end tests for 1.17, which reached its end of support from the Kubernetes project in December 2020
- Improve the responsiveness of pod status checks in case of network issues, by adding a connection timeout of 2 seconds and a communication timeout of 30 seconds. This change limits the time the operator waits for a pod to report its status before declaring it failed, enhancing the robustness and predictability of failover operations
- Introduce the `.spec.inheritedMetadata` field to the Cluster, allowing the user to specify labels and annotations that will apply to all objects generated by the Cluster (see the sketch after this list)
- Reduce the number of queries executed when calculating the status of an instance
- Add a readiness probe for PgBouncer
- Add support for custom Certification Authority of the endpoint of Barman’s backup object store when using Azure protocol
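As referenced above, a minimal sketch of the `.spec.inheritedMetadata` field in a `Cluster` manifest; the resource name and the labels/annotations themselves are illustrative:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example            # illustrative name
spec:
  instances: 3
  # Labels and annotations listed here are applied to every object
  # the operator generates for this Cluster (pods, services, PVCs, ...)
  inheritedMetadata:
    labels:
      environment: production      # illustrative label
    annotations:
      contact: dba-team            # illustrative annotation
  storage:
    size: 1Gi
```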
Fixes:
- During a failover, wait to select a new primary until all the WAL streaming connections are closed. The operator now sets `wal_sender_timeout` and `wal_receiver_timeout` to 5 seconds by default, to make sure standby nodes will quickly notice if the primary has network issues
- Change the WAL archiving strategy in replica clusters to fix rolling updates, by setting `archive_mode` to `always` for any PostgreSQL instance in a replica cluster, and restricting the upload of WAL files to the current and target designated primary. A WAL file may be uploaded twice during switchovers, which is not an issue
- Fix support for a custom Certification Authority of the endpoint of Barman’s backup object store in the source of replica clusters
- Use a fixed name for default monitoring config map in the cluster namespace
- If the defaulting webhook is not working for any reason, the operator now applies the defaults to the Cluster during the reconciliation cycle as well
- Fix the comparison of resource requests and limits, resolving a rare issue that led to an update of all the pods on every reconciliation cycle
- Improve log messages from webhooks to also include the object namespace
- Stop logging a “default” message at the start of every reconciliation loop
- Stop logging a PodMonitor deletion on every reconciliation cycle if `enablePodMonitor` is false
- Do not complain about a possible architecture mismatch if a pod is not reachable
v1.11.0
Release date: 15 December 2021
Features:
- Parallel WAL archiving and restore: allow the database to keep up with WAL generation on high write systems, by introducing the `backupObjectStore.maxParallel` option to set the maximum number of parallel jobs to be executed during both WAL archiving (by PostgreSQL's `archive_command`) and WAL restore (by `restore_command`). Using the parallel restore option allows newly promoted standbys to reach a ready state faster, by fetching the WAL files needed for replay in parallel rather than sequentially
- Default set of metrics for monitoring: a new `ConfigMap` called `default-monitoring` is automatically deployed in the same namespace as the operator and, by default, added to any existing Postgres cluster. This behavior can be changed globally by setting the `MONITORING_QUERIES_CONFIGMAP` parameter in the operator's configuration, or at cluster level through the `.spec.monitoring.disableDefaultQueries` option (set to `false` by default)
- Introduce the `enablePodMonitor` option in the monitoring section of a cluster to automatically manage a `PodMonitor` resource and seamlessly integrate with Prometheus (see the sketch after this list)
- Improve the PostgreSQL shutdown procedure by trying to execute a smart shutdown for the first half of the desired `stopDelay` time, and a fast shutdown for the remaining half, before the pod is killed by Kubernetes
- Add the `switchoverDelay` option to control the time given to the former primary to shut down gracefully and archive all the WAL files before promoting the new primary (by default, Cloud Native PostgreSQL waits indefinitely to privilege data durability)
- Handle changes to resource requests and limits for a PostgreSQL `Cluster` by issuing a rolling update
- Improve the `status` command of the `cnp` plugin for `kubectl` with additional information: streaming replication status, total size of the database, role of an instance in the cluster
- Enhance support of workloads with many parallel workers by enabling configuration of the `dynamic_shared_memory_type` and `shared_memory_type` parameters for PostgreSQL's management of shared memory
- Propagate labels and annotations defined at cluster level to the associated resources, including pods (deletions are not supported)
- Automatically remove pods that have been evicted by the Kubelet
- Manage automated resizing of persistent volumes in Azure through the `ENABLE_AZURE_PVC_UPDATES` operator configuration option, by issuing a rolling update of the cluster if needed (disabled by default)
- Introduce the `k8s.enterprisedb.io/reconciliationLoop` annotation that, when set to `disabled` on a given Postgres cluster, prevents the reconciliation loop from running
- Introduce the `postInitApplicationSQL` option as part of the `initdb` bootstrap method to specify a list of SQL queries to be executed on the main application database as a superuser immediately after the cluster has been created
- Support for EDB Postgres Advanced 14.1
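As referenced in the list above, a sketch showing how several of the new options fit together in a `Cluster` manifest; all values are illustrative rather than recommendations:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  stopDelay: 30              # smart shutdown for the first ~15s, fast shutdown for the rest
  switchoverDelay: 60        # illustrative; by default the operator waits indefinitely
  monitoring:
    enablePodMonitor: true   # have the operator manage a PodMonitor resource for Prometheus
  bootstrap:
    initdb:
      database: app
      owner: app
      postInitApplicationSQL:
        - CREATE EXTENSION IF NOT EXISTS pg_trgm   # illustrative query, run as superuser
  storage:
    size: 1Gi
```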
Fixes:
- The liveness probe now correctly handles the startup process of a PostgreSQL server. This fixes an issue reported by a few customers affecting a restarted standby server that needed to recover WAL files to reach a consistent state, but was not able to do so before the liveness probe timeout kicked in, leaving the pods in `CrashLoopBackOff` status
- The liveness probe now correctly handles the case of a former primary that needs to use `pg_rewind` to re-align with the current primary after a timeline diversion. This prevents the pod of the new standby from being repeatedly killed by Kubernetes
- Reduce client-side throttling from Postgres pods (e.g. `Waited for 1.182388649s due to client-side throttling, not priority and fairness, request: GET`)
- Disable Public Key Infrastructure (PKI) initialization on OpenShift and OLM installations, by using the provided one
- When changing configuration parameters that require a restart, always leave the primary as last
- Mark a PVC as ready only after a job has been completed successfully, preventing a race condition in PVC initialization
- Use the correct public key when renewing the expired webhook TLS secret
- Fix an overflow when parsing an LSN
- Remove stale PID files at startup
- Let the `Pooler` resource inherit the `imagePullSecret` defined in the operator, if it exists
v1.10.0
Release date: 11 November 2021
Features:
- Connection Pooling with PgBouncer: introduce the `Pooler` resource and controller to automatically manage a PgBouncer deployment to be used as a connection pooler for a local PostgreSQL `Cluster` (see the sketch after this list). The feature includes TLS client/server connections, password authentication, High Availability, pod templates support, configuration of key PgBouncer parameters, `PAUSE`/`RESUME`, logging in JSON format, and a Prometheus exporter for stats, pools, and lists
- Backup Retention Policies: support definition of recovery window retention policies for backups (e.g. `30d` to ensure a recovery window of 30 days)
- In-place updates of the operator: introduce an in-place online update of the instance manager, which removes the need to perform a rolling update of the entire cluster following an update of the operator. This option is disabled by default (please refer to the documentation for more detailed information)
- Limit the list of options that can be customized in the `initdb` bootstrap method to `dataChecksums`, `encoding`, `localeCollate`, `localeCType`, `walSegmentSize`. This makes the `options` array obsolete; it is planned to be removed in the v2 API
- Introduce the `postInitTemplateSQL` option as part of the `initdb` bootstrap method to specify a list of SQL queries to be executed on the `template1` database as a superuser immediately after the cluster has been created. This feature allows you to include default objects in all application databases created in the cluster
- New default metrics added to the instance Prometheus exporter: Postgres version, cluster name, and first point of recoverability according to the backup catalog
- Retry taking a backup after a failure
- Build awareness of Barman Cloud capabilities, in order to prevent the operator from invoking recently introduced features (such as retention policies, or Azure Blob Container storage) on operand images that do not include them because they are not frequently updated
- Integrate the output of the `status` command of the `cnp` plugin with information about the backup
- Introduce a new annotation that reports the status of a PVC (being initialized or ready)
- Set the cluster name in the `k8s.enterprisedb.io/cluster` label for every object generated in a `Cluster`, including `Backup` objects
- Drop support for the deprecated API version `postgresql.k8s.enterprisedb.io/v1alpha1` on the `Cluster`, `Backup`, and `ScheduledBackup` kinds
- Set the default operand image to PostgreSQL 14.1
Security:
- Set `allowPrivilegeEscalation` to `false` in the `securityContext` of the operator containers
Fixes:
- Disable primary PodDisruptionBudget during maintenance in single-instance clusters
- Use the correct certificate certification authority (CA) during recovery operations
- Prevent Postgres connection leaking when checking WAL archiving status before taking a backup
- Let WAL archive/restore sleep for 100ms following transient errors that would otherwise flood logs
v1.9.2
Release date: 15 October 2021
Features:
- Enhance the JSON log with two new loggers: `wal-archive` for PostgreSQL's `archive_command`, and `wal-restore` for `restore_command` in a standby
Fixes:
- Enable WAL archiving during the standby promotion (a bug prevented `.history` files from being archived)
- Pass the `--cloud-provider` option to Barman Cloud tools only when using Barman 2.13 or higher, to avoid errors with older operands
- Wait for the pod of the primary to be ready before triggering a backup
v1.9.1
Release date: 30 September 2021
This release celebrates the launch of PostgreSQL 14 by making it the default major version when a new `Cluster` is created without defining a specific image name.
Fixes:
- Fix an issue causing the `Error while getting barman endpoint CA secret` message to appear in the logs of the primary pod, which prevented the backup from working correctly
- Properly retry requesting a new backup in case of temporary communication issues with the instance manager
v1.9.0
Release date: 28 September 2021
Features:
- Add Kubernetes 1.22 to the list of supported Kubernetes distributions, and remove 1.16
- Introduce support for the `--restore-target-wal` option in `pg_rewind`, in order to fetch WAL files from the backup archive if necessary (available only with PostgreSQL/EPAS 13+)
- Expose a default metric for the Prometheus exporter that estimates the number of pages in the `pg_catalog.pg_largeobject` table in each database
- Enhance the performance of WAL archiving and fetching through a local in-memory cache
Fixes:
- Explicitly set the `postgres` user when invoking `pg_isready`, as required by the restricted SCC in OpenShift
- Properly update the `FirstRecoverabilityPoint` in the status
- Set `archive_mode = always` on the designated primary if backup is requested
- Minor bug fixes
v1.8.0
Release date: 13 September 2021
Features:
- Bootstrap a new cluster via full or Point-In-Time Recovery directly from an object store defined in the external cluster section, eliminating the previous requirement to have a Backup CR defined
- Introduce the `immediate` option in scheduled backups to request a backup immediately after the first Postgres instance is running, adding the capability to rewind to the very beginning of a cluster when Point-In-Time Recovery is configured (see the sketch after this list)
- Add the `firstRecoverabilityPoint` in the cluster status to report the oldest consistent point in time to request a recovery, based on the backup object store's content
- Enhance the default Prometheus exporter for a PostgreSQL instance by exposing the following new metrics:
  - number of WAL files and computed total size on disk
  - number of `.ready` and `.done` files in the archive status folder
  - flag for replica mode
  - number of requested minimum/maximum synchronous replicas, as well as the expected and actually observed ones
- Add support for the `runonserver` option when defining custom metrics in the Prometheus exporter, to limit the collection of a metric to a range of PostgreSQL versions
- Natively support Azure Blob Storage for backup and recovery, by taking advantage of the feature introduced in Barman 2.13 for Barman Cloud
- Rely on `pg_isready` for the liveness probe
- Support the RFC3339 format for timestamp specification in recovery target times
- Introduce `.spec.imagePullPolicy` to control the pull policy of image containers for all pods and jobs created for a cluster
- Add support for OpenShift 4.8, which replaces OpenShift 4.5
- Support PostgreSQL 14 (beta)
- Enhance the replica cluster feature with cross-cluster replication from an object store defined in an external cluster section, without requiring a streaming connection (experimental)
- Introduce the `logLevel` option in the cluster's spec to specify one of the following levels: error, info, debug or trace
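As referenced above, a sketch combining the `immediate` option of a scheduled backup with the new cluster-level `imagePullPolicy` and `logLevel` settings; names and values are illustrative:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: ScheduledBackup
metadata:
  name: backup-example             # illustrative name
spec:
  schedule: "0 0 0 * * *"          # illustrative six-field schedule
  immediate: true                  # back up as soon as the first instance is running
  cluster:
    name: cluster-example
---
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  imagePullPolicy: Always          # pull policy for all pods and jobs of the cluster
  logLevel: debug                  # one of: error, info, debug, trace
  storage:
    size: 1Gi
```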
Security Enhancements:
- Introduce `.spec.enableSuperuserAccess` to enable/disable network access with the `postgres` user through password authentication (see the sketch after this list)
- Enable specification of a license key in a secret with `spec.licenseKeySecret`
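As referenced above, a sketch of the two security settings in a `Cluster` spec; the secret name and key are assumptions for illustration:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  enableSuperuserAccess: false     # disable password-based network access for the postgres user
  licenseKeySecret:
    name: license-key              # illustrative secret holding the license key
    key: licenseKey                # illustrative key within the secret
  storage:
    size: 1Gi
```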
Fixes:
- Properly inform users when a cluster enters an unrecoverable state and requires human intervention
v1.7.1
Release date: 11 August 2021
Features:
- Prefer self-healing over configuration with regards to synchronous replication, empowering the operator to temporarily override `minSyncReplicas` and `maxSyncReplicas` settings in case the cluster is not able to meet the requirements during self-healing operations
- Introduce the `postInitSQL` option as part of the `initdb` bootstrap method to specify a list of SQL queries to be executed as a superuser immediately after the cluster has been created (see the sketch after this list)
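As referenced above, a sketch of `postInitSQL` within the `initdb` bootstrap method; the query is illustrative:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  bootstrap:
    initdb:
      database: app
      owner: app
      postInitSQL:
        - CREATE ROLE analyst NOLOGIN   # illustrative; executed as a superuser right after initdb
  storage:
    size: 1Gi
```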
Fixes:
- Allow the operator to failover when the primary is not ready (bug introduced in 1.7.0)
- Execute administrative queries using the `LOCAL` synchronous commit level
- Correctly parse multi-line log entries in PGAudit
v1.7.0
Release date: 28 July 2021
Features:
- Add native support for PGAudit with a new type of `logger` called `pgaudit`, directly available in the JSON output
- Enhance monitoring and observability capabilities through:
  - Native support for the `pg_stat_statements` and `auto_explain` extensions
  - The `target_databases` option in the Prometheus exporter, to run a user-defined metric query on one or more databases (including auto-discovery of databases through shell-like pattern matching)
  - Exposure of the `manual_switchover_required` metric, to promptly report whether a cluster with `primaryUpdateStrategy` set to `supervised` requires a manual switchover
- Transparently handle `shared_preload_libraries` for `pgaudit`, `auto_explain` and `pg_stat_statements`:
  - Automatic configuration of `shared_preload_libraries` for PostgreSQL when `pg_stat_statements`, `pgaudit` or `auto_explain` options are added to the `postgresql` parameters section
- Support the `k8s.enterprisedb.io/reload` label to finely control the automated reload of config maps and secrets, including those used for custom monitoring/alerting metrics in the Prometheus exporter or to store certificates (see the sketch after this list)
- Add the `reload` command to the `cnp` plugin for `kubectl` to trigger a reconciliation loop on the instances
- Improve control of pod affinity and anti-affinity configurations through `additionalPodAffinity` and `additionalPodAntiAffinity`
- Introduce a separate `PodDisruptionBudget` for primary instances, by requiring at least a primary instance to run at any time
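As referenced in the list above, a sketch of a custom metrics ConfigMap that combines the `target_databases` option with the `k8s.enterprisedb.io/reload` label; the metric definition, names, and the data key are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-monitoring               # illustrative; referenced from the Cluster's monitoring section
  labels:
    k8s.enterprisedb.io/reload: ""      # the label enables automated reload on changes
data:
  custom-queries: |                     # illustrative data key
    live_rows:                          # illustrative user-defined metric
      query: "SELECT relname, n_live_tup FROM pg_stat_user_tables"
      target_databases:
        - "app*"                        # shell-like pattern: run on every matching database
      metrics:
        - relname:
            usage: "LABEL"
            description: "Name of the table"
        - n_live_tup:
            usage: "GAUGE"
            description: "Estimated number of live rows"
```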
Security Enhancements:
- Add the `.spec.certificates.clientCASecret` and `.spec.certificates.replicationTLSSecret` options to define a custom client Certification Authority and certificate for the PostgreSQL server, to be used to authenticate client certificates and secure communication between PostgreSQL nodes (see the sketch after this list)
- Add the `.spec.backup.barmanObjectStore.endpointCA` option to define the custom Certification Authority bundle of the endpoint of Barman's backup object store
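As referenced above, a sketch of the new certificate options in a `Cluster` spec; all secret names, paths, and the endpoint URL are assumptions for illustration:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  certificates:
    clientCASecret: client-ca-secret        # illustrative; CA used to verify client certificates
    replicationTLSSecret: replication-tls   # illustrative; certificate for streaming replication
  backup:
    barmanObjectStore:
      destinationPath: s3://backups/        # illustrative path
      endpointURL: https://objectstore.internal:9000   # illustrative custom endpoint
      endpointCA:
        name: endpoint-ca                   # illustrative secret with the CA bundle
        key: ca.crt
  storage:
    size: 1Gi
```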
Fixes:
- Correctly parse histograms in the Prometheus exporter
- Reconcile services created by the operator for a cluster
v1.6.0
Release date: 12 July 2021
Features:
- Replica mode (EXPERIMENTAL): allow a cluster to be created as a replica of a source cluster. A replica cluster has a designated primary and any number of standbys.
- EDB Audit support on EDB Postgres Advanced Server images.
- Add the `.spec.postgresql.promotionTimeout` parameter to specify the maximum amount of seconds to wait when promoting an instance to primary, defaulting to 40000000 seconds.
- Add the `.spec.affinity.podAntiAffinityType` parameter (see the sketch after this list). It can be set to `preferred` (default), resulting in `preferredDuringSchedulingIgnoredDuringExecution` being used, or to `required`, resulting in `requiredDuringSchedulingIgnoredDuringExecution`.
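As referenced above, a sketch of the two new parameters in a `Cluster` spec; values are illustrative:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  postgresql:
    promotionTimeout: 120            # illustrative; seconds to wait when promoting an instance
  affinity:
    podAntiAffinityType: required    # requiredDuringSchedulingIgnoredDuringExecution
  storage:
    size: 1Gi
```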
Security Enhancements:
- Prevent license keys from appearing in the logs.
Changes:
- Fixed a race condition when deleting a PVC and a pod which prevented the operator from creating a new pod.
- Fixed a race condition preventing the manager from detecting the need for a PostgreSQL restart on a configuration change.
- Fixed a panic in `kubectl-cnp` on clusters without annotations.
- Lowered the level of some log messages to `debug`.
- E2E tests for server CA and TLS injection.