
Releases: percona/percona-server-mongodb-operator

v1.18.0

15 Nov 07:52

Release Highlights

Enhancements of the declarative user management

The declarative management of custom MongoDB users was improved compared to its initial implementation in the previous release, where the Operator did not track and sync user-related changes between the Custom Resource and the database. Also, starting from this release, you can create custom MongoDB roles on various databases, just like users, in the deploy/cr.yaml manifest:

...
roles:
  - name: clusterAdmin
    db: admin
  - name: userAdminAnyDatabase
    db: admin

See the documentation to find more details about this feature.

Support for selective restores

Percona Backup for MongoDB 2.0.0 introduced partial restores, which allow selectively restoring only the desired subset of data. Now the Operator also supports this feature, allowing you to restore a specific database or a collection from a backup. You can achieve this by using the additional selective section in the PerconaServerMongoDBRestore Custom Resource:

spec:
  selective:
    withUsersAndRoles: true
    namespaces:
    - "db.collection"

You can find more on selective restores and their limitations in our documentation.
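Put together with the standard restore fields, a complete selective restore manifest might look like the following sketch (the cluster and backup names are placeholders):

```yaml
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBRestore
metadata:
  name: restore1
spec:
  clusterName: my-cluster-name      # target PerconaServerMongoDB cluster (placeholder)
  backupName: backup1               # existing PerconaServerMongoDBBackup to restore from (placeholder)
  selective:
    withUsersAndRoles: true         # also restore custom users and roles for the selected data
    namespaces:
    - "db.collection"               # restore only this database.collection pair
```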

Splitting the replica set of the database cluster over multiple Kubernetes clusters

Recent improvements in cross-site replication made it possible to keep the replica set of the database cluster in different data centers. The Operator itself cannot deploy MongoDB replicas to other data centers, but this can still be achieved with a number of Operator deployments equal to the size of your replica set: one Operator to control the replica set via cross-site replication, and at least two more Operators to bootstrap the unmanaged clusters with the other MongoDB replica set instances. Splitting the replica set of the database cluster over multiple Kubernetes clusters can be useful for building a fault-tolerant system in which all replicas reside in different data centers.
You can find more about configuring such a multi-datacenter MongoDB cluster and the limitations of this solution on the dedicated documentation page.
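As a rough sketch of the main-site configuration, the managed cluster can list the replica set members running in other data centers under the replsets.externalNodes subsection (the hostnames and sizes below are illustrative placeholders):

```yaml
spec:
  ...
  replsets:
  - name: rs0
    size: 3                          # members managed by this Operator
    externalNodes:                   # members bootstrapped by Operators in other data centers
    - host: rs0-1.dc2.example.com    # placeholder hostname in the second data center
      port: 27017
      votes: 0
      priority: 0
    - host: rs0-2.dc3.example.com    # placeholder hostname in the third data center
      port: 27017
      votes: 0
      priority: 0
```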

New Features

  • K8SPSMDB-894: It is now possible to restore a subset of data (a specific database or a collection) from a backup, which is useful to reduce restore time when fixing corrupted data fragments
  • K8SPSMDB-1113: The new percona.com/delete-pitr-chunks finalizer allows deleting PITR log files from the backup storage when deleting a cluster, so that leftover data does not continue to take up space in the cloud
  • K8SPSMDB-1124 and K8SPSMDB-1146: Declarative user management now covers creating and managing user roles, and syncs user-related changes between the Custom Resource and the database
  • K8SPSMDB-1140 and K8SPSMDB-1141: Multi-datacenter cluster deployment is now possible
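For illustration, the finalizer from K8SPSMDB-1113 is added to the cluster metadata alongside any other finalizers you use (a sketch; names other than the finalizers themselves are placeholders):

```yaml
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  name: my-cluster-name
  finalizers:
  - percona.com/delete-psmdb-pods-in-order
  - percona.com/delete-pitr-chunks   # delete PITR log files from backup storage on cluster deletion
```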

Improvements

  • K8SPSMDB-739: A number of Service exposure options in the replsets, sharding.configsvrReplSet, and sharding.mongos subsections were renamed for unification with other Percona Operators
  • K8SPSMDB-1002: New Custom Resource options under the replsets.primaryPreferTagSelector subsection allow providing Primary instance selection preferences based on a specific zone and region, which may be especially useful within a planned zone switchover process (Thanks to sergelogvinov for the contribution)
  • K8SPSMDB-1096: Restore logs were improved to contain pbm-agent logs in mongod containers, which is useful to debug failures in the backup restoration process
  • K8SPSMDB-1135: Split-horizon DNS for external (unmanaged) nodes is now configurable via the replsets.externalNodes subsection in the Custom Resource
  • K8SPSMDB-1152: The Operator now uses multi-architecture images of Percona Server for MongoDB and Percona Backup for MongoDB, making it easier to deploy a cluster on ARM
  • K8SPSMDB-1160: The PVC resize feature introduced in the previous release can now be enabled or disabled via the enableVolumeExpansion Custom Resource option (false by default), which protects the cluster from a storage resize triggered by mistake
  • K8SPSMDB-1132: The new secrets.keyFile Custom Resource option allows configuring a custom name for the Secret with the MongoDB internal auth key file
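As an illustration of the last two items, both options are set in the cluster Custom Resource (a sketch; the Secret names are placeholders):

```yaml
spec:
  ...
  enableVolumeExpansion: true          # opt in to automated PVC resize (false by default)
  secrets:
    users: my-cluster-name-secrets     # placeholder name
    keyFile: my-internal-key-secret    # custom Secret name for the MongoDB internal auth key file
```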

Bugs Fixed

  • K8SPSMDB-912: Fix a bug where the full backup connection string, including the password, was visible in logs in case of Percona Backup for MongoDB errors
  • K8SPSMDB-1047: Fix a bug where the Operator changed writeConcernMajorityJournalDefault to “true” during replica set reconfiguring, ignoring the value set by the user
  • K8SPSMDB-1168: Fix a bug where successful backups could obtain a failed state when the Operator was configured with watchAllNamespaces: true and MongoDB clusters had the same name across multiple namespaces (Thanks to Markus Küffner for the contribution)
  • K8SPSMDB-1170: Fix a bug that prevented deleting a cluster with the active percona.com/delete-psmdb-pods-in-order finalizer when the cluster was in an error state (e.g. when the mongo replset failed to reconcile)
  • K8SPSMDB-1184: Fix a bug where the Operator failed to reconcile when the container security context had readOnlyRootFilesystem set to true (Thanks to applejag for the contribution)

Deprecation, Rename and Removal

  • The new enableVolumeExpansion Custom Resource option allows users to disable the automated storage scaling with the Volume Expansion capability. The default value of this option is false, meaning that automated scaling is turned off unless explicitly enabled.

  • A number of Service exposure Custom Resource options in the replsets, sharding.configsvrReplSet, and sharding.mongos subsections were renamed to provide a unified experience with other Percona Operators:

    • expose.serviceAnnotations option renamed to expose.annotations
    • expose.serviceLabels option renamed to expose.labels
    • expose.exposeType option renamed to expose.type
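For example, an expose subsection written for Operator 1.17 maps to the 1.18 naming as follows (the annotation and label values are illustrative):

```yaml
# 1.17 and earlier
expose:
  enabled: true
  exposeType: LoadBalancer
  serviceAnnotations:
    networking.example.com/load-balancer: internal
  serviceLabels:
    rack: rack-22

# 1.18 and later
expose:
  enabled: true
  type: LoadBalancer
  annotations:
    networking.example.com/load-balancer: internal
  labels:
    rack: rack-22
```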

Supported Platforms

The Operator was developed and tested with Percona Server for MongoDB 5.0.29-25,
6.0.18-15, and 7.0.14-8. Other options may also work but have not been tested. The
Operator also uses Percona Backup for MongoDB 2.7.0.

The following platforms were tested and are officially supported by the Operator
1.18.0:

This list only includes the platforms that the Percona Operators are specifically tested on as part of the release process. Other Kubernetes flavors and versions depend on the backward compatibility offered by Kubernetes itself.

v1.17.0

09 Sep 15:15

Release Highlights

Declarative user management (technical preview)

Before the Operator version 1.17.0, custom MongoDB users had to be created manually. Now the declarative creation of custom MongoDB users is supported via the users subsection in the Custom Resource. You can specify a new user in the deploy/cr.yaml manifest, setting the user’s login name and database, passwordSecretRef (a reference to a key in a Secret resource containing the user’s password), as well as the MongoDB roles on various databases which should be assigned to this user:

...
users:
- name: my-user
  db: admin
  passwordSecretRef: 
    name: my-user-password
    key: my-user-password-key
  roles:
    - name: clusterAdmin
      db: admin
    - name: userAdminAnyDatabase
      db: admin

See the documentation to find more details about this feature, with additional explanations and the list of current limitations.
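The user’s password comes from a regular Kubernetes Secret; a minimal sketch matching the example above (the password value is a placeholder):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-user-password             # matches passwordSecretRef.name
type: Opaque
stringData:
  my-user-password-key: changeMe123  # matches passwordSecretRef.key; placeholder value
```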

Liveness check improvements

Several logging improvements were made related to the liveness checks, both to provide more information for debugging and to make these logs persist on failures for further examination.

Liveness check logs are stored in the /data/db/mongod-data/logs/mongodb-healthcheck.log file, which can be accessed in the corresponding Pod if needed. The liveness check now generates more log messages, and the default log level is set to DEBUG.

Each time the health check fails, the current log is saved to a gzip compressed file named mongodb-healthcheck-<timestamp>.log.gz, and the mongodb-healthcheck.log log file is reset.
Logs older than 24 hours are automatically deleted.

New Features

  • K8SPSMDB-253: It is now possible to create and manage users via the Custom Resource

Improvements

  • K8SPSMDB-899: Add Labels for all Kubernetes objects created by the Operator (backups/restores, Secrets, Volumes, etc.) to make them clearly distinguishable
  • K8SPSMDB-919: The Operator now checks if the needed Secrets exist and connects to the storage to check the validity of credentials and the existence of a backup before starting the restore process
  • K8SPSMDB-934: Liveness checks now provide more debug information and keep separate log archives for each failure, with a 24-hour retention
  • K8SPSMDB-1057: Finalizers were renamed to contain fully qualified domain names (FQDNs), avoiding potential conflicts with other finalizer names in the same Kubernetes environment
  • K8SPSMDB-1108: The new Custom Resource option allows setting a custom containerSecurityContext for PMM containers
  • K8SPSMDB-994: Remove the limitation where it wasn’t possible to create a new cluster with splitHorizon enabled; previously, it could only be enabled later on an already running cluster

Bugs Fixed

  • K8SPSMDB-925: Fix a bug where the Operator generated “failed to start balancer” and “failed to get mongos connection” log messages when using Mongos with servicePerPod and LoadBalancer services, while the cluster was operating properly
  • K8SPSMDB-1105: The memory requests and limits for backups were increased in the deploy/cr.yaml configuration file example to reflect the Percona Backup for MongoDB minimal pbm-agent requirement of 1 GB RAM needed for stable operation
  • K8SPSMDB-1074: Fix a bug where the MongoDB cluster could not failover in case of downtime of all Pods with the exposeType Custom Resource option set to either NodePort or LoadBalancer
  • K8SPSMDB-1089: Fix a bug where it was impossible to delete a cluster in error state with finalizers present
  • K8SPSMDB-1092: Fix a bug where Percona Backup for MongoDB log messages during physical restore were not accessible with the kubectl logs command
  • K8SPSMDB-1094: Fix a bug where it wasn’t possible to create a new cluster with upgradeOptions.setFCV Custom Resource option set to true
  • K8SPSMDB-1110: Fix a bug where nil Custom Resource annotations were causing the Operator panic

Deprecation, Rename and Removal

Finalizers were renamed to contain fully qualified domain names to comply with the Kubernetes standards.

  • PerconaServerMongoDB Custom Resource:
    • delete-psmdb-pods-in-order finalizer renamed to percona.com/delete-psmdb-pods-in-order
    • delete-psmdb-pvc finalizer renamed to percona.com/delete-psmdb-pvc
  • PerconaServerMongoDBBackup Custom Resource:
    • delete-backup finalizer renamed to percona.com/delete-backup
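After upgrading, the metadata of an existing cluster should reference the new names, for example (a sketch; the cluster name is a placeholder):

```yaml
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  name: my-cluster-name
  finalizers:
  - percona.com/delete-psmdb-pods-in-order   # formerly delete-psmdb-pods-in-order
  - percona.com/delete-psmdb-pvc             # formerly delete-psmdb-pvc
```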

Supported Platforms

The Operator was developed and tested with Percona Server for MongoDB 5.0.28-24,
6.0.16-13, and 7.0.12-7. Other options may also work but have not been tested. The
Operator also uses Percona Backup for MongoDB 2.5.0.

The following platforms were tested and are officially supported by the Operator
1.17.0:

This list only includes the platforms that the Percona Operators are specifically tested on as part of the release process. Other Kubernetes flavors and versions depend on the backward compatibility offered by Kubernetes itself.