v2.7.0

@nastena1606 released this 18 Jul 16:28

Release Highlights

This release provides the following features and improvements:

PMM3 support

The Operator is natively integrated with PMM 3, enabling you to monitor the health and performance of your Percona Distribution for PostgreSQL deployment while benefiting from the enhanced performance, new features, and improved security that PMM 3 provides.

Note that the Operator supports both PMM2 and PMM3. Which PMM version is used depends on the authentication method you provide in the Operator configuration: PMM2 uses API keys, while PMM3 uses service account tokens. If the Operator configuration contains both authentication methods with non-empty values, PMM3 takes priority.
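As an illustration, both authentication methods are typically supplied through the monitoring Secret referenced by the cluster. The Secret name and key names below are assumptions based on common Percona Operator conventions, not confirmed by this release note; check the documentation for your version.

```yaml
# Hypothetical monitoring Secret; the Secret and key names are assumptions.
apiVersion: v1
kind: Secret
metadata:
  name: cluster1-pmm-secret
type: Opaque
stringData:
  PMM_SERVER_KEY: ""            # PMM2 API key (leave empty if unused)
  PMM_SERVER_TOKEN: "<token>"   # PMM3 service account token; takes priority if both are set
```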

To use PMM, ensure that the PMM client image is compatible with the PMM Server version. Check Percona certified images for the correct client image.

For instructions on how to configure monitoring with PMM, see the documentation.

Improved monitoring for clusters in multi-region or multi-namespace deployments in PMM

Now you can define a custom name for clusters deployed in different data centers. This name helps Percona Monitoring and Management (PMM) Server correctly recognize these clusters as connected and monitor them as one deployment. Similarly, PMM Server identifies clusters deployed with the same name in different namespaces as separate ones and correctly displays their performance metrics on dashboards.

To assign a custom name, define this configuration in the Custom Resource manifest for your cluster:

spec:
  pmm:
    customClusterName: postgresql-cluster

Added labels to identify the version of the Operator

The Custom Resource Definition (CRD) is compatible with the last three Operator versions. To show which Operator version a CRD corresponds to, we've added labels to all Custom Resource Definitions. The labels help you identify the current Operator version and decide whether you need to update the CRD.
To view the labels, run: kubectl get crd perconapgclusters.pgv2.percona.com --show-labels.

Grant users access to a public schema

Starting with PostgreSQL 15, users other than the database owner can no longer create objects in the default public schema. We have improved this behavior so that the Operator creates a user and, in every database listed for this user, a schema whose name matches the username. This custom schema is set as the default, enabling you to work in the database right away.

You can explicitly grant access to the public schema for a non-superuser by setting the grantPublicSchemaAccess option to true. This grants the user permission to create and update tables in the public schema of every database they own. If multiple users are granted access to the public schema in the same database, each user can access only the tables they have created themselves. If you want one user to access tables created by another user in the public schema, the owner of those tables must connect to PostgreSQL and explicitly grant the necessary privileges to the other user.

Superusers have access to the public schema for their databases by default.
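As a sketch, the option is set per user in the Custom Resource manifest; the user and database names below are hypothetical placeholders:

```yaml
spec:
  users:
    - name: app-user                 # hypothetical user name
      databases:
        - appdb                      # hypothetical database name
      grantPublicSchemaAccess: true  # let this non-superuser use the public schema
```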

Improved troubleshooting with the ability to override Patroni configuration

You can now override Patroni configuration for the whole cluster as well as for an individual Pod. This gives you more control over the database and simplifies troubleshooting.
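For example, cluster-wide Patroni settings can be overridden from the Custom Resource. The dynamicConfiguration field and the parameter values below are illustrative assumptions; consult the documentation for the exact schema supported by your Operator version.

```yaml
spec:
  patroni:
    dynamicConfiguration:        # assumed field name for cluster-wide overrides
      postgresql:
        parameters:
          max_connections: 200   # example of a Patroni-managed PostgreSQL parameter
```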

Also, you can redefine which methods the Operator uses when it creates replica instances in your PostgreSQL cluster. For example, to make the Operator try pg_basebackup first, edit the deploy/cr.yaml manifest:

patroni:
  createReplicaMethods:
    - basebackup
    - pgbackrest

Note that after you apply this configuration, the Operator updates the Patroni ConfigMap, but it doesn't apply the configuration to Patroni. You must manually reload the Patroni configuration on every database instance for it to take effect.
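One way to perform the reload is with patronictl from inside a database pod. The pod, container, and Patroni cluster names below are placeholders; adjust them to your deployment.

```shell
# Placeholder names: replace cluster1-instance1-abcd-0 and cluster1
# with your actual pod and Patroni cluster names.
kubectl exec -it cluster1-instance1-abcd-0 -c database -- patronictl reload cluster1
```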

Read more about these troubleshooting methods in the documentation.

Changelog

New features

  • K8SPG-615 - Introduced a custom delay on the entrypoint of the backup pod. The backup process waits for the defined time before connecting to the API server.

  • K8SPG-708, K8SPG-663 - Added the sleep-forever feature to keep a database container running.

  • K8SPG-712 - Added the ability to control every parameter supported by Patroni configuration.

  • K8SPG-725 - Added the ability to configure resources for the repo-host container

  • K8SPG-719 - Added support for PMM v3

Improvements

  • K8SPG-571 - Added the ability for a non-superuser custom user to access the public schema in every database listed for them.

  • K8SPG-612 - Updated the pgBouncer image to use the official percona-pgbouncer Docker image

  • K8SPG-613 - Updated the pgBackRest image to use the official percona-pgbackrest Docker image

  • K8SPG-654 - Added the ability to add custom parameters in the Custom Resource and pass them to PMM.

  • K8SPG-675 - Added the ability to define resource requests for CPU and memory

  • K8SPG-704 - Added the ability to configure create_replica_methods for Patroni

  • K8SPG-710 - Added the ability to disable backups

  • K8SPG-715 - Improved custom-extensions e2e test by adding pgvector

  • K8SPG-726 - Added ability to define security context for all sidecar containers

  • K8SPG-729 - Added Labels for Custom Resource Definitions (CRD) to identify the Operator version attached to them

  • K8SPG-732 - Enhanced readability of pgbackrest debug logs by printing log messages on separate lines

  • K8SPG-738 - Added startup log to the Operator Pod to print commit hash, branch and build time

  • K8SPG-743 - Disabled client-side rate limiting in the Kubernetes Go client to avoid throttling errors when managing multiple clusters with a single operator. This change leverages Kubernetes' server-side Priority and Fairness mechanisms introduced in v1.20 and later. (Thank you Joshua Sierles for contributing to this issue)

  • K8SPG-744 - Improved the Contributing guide with the steps to build the Operator for development purposes.

  • K8SPG-717, K8SPG-750 - Added the ability to define a custom cluster name for PMM for filtering

  • K8SPG-753 - Added the ability to enable pg_stat_statements instead of pg_stat_monitor

  • K8SPG-761 - Added the ability to add concurrent reconciliation workers

  • K8SPG-828 - Added the registry name to images due to OpenShift 4.19 changes.

Bugs Fixed

  • K8SPG-532 - Improved log visibility by reporting missing data sources at the INFO log level.

  • K8SPG-574 - Added pg_repack to the list of built-in extensions in the Custom Resource

  • K8SPG-661 - Added documentation about replica reinitialization in the Operator

  • K8SPG-677 - Made the imagePullPolicy in pg-db Helm chart configurable

  • K8SPG-680 - Prevented scheduled backups from starting until volume expansion completes successfully.

  • K8SPG-698 - Fixed the issue with pgbackrest service account not being created and reconciliation failing by creating the StatefulSet for this service account first

  • K8SPG-703 - Fixed the issue where the backup Pod was stuck in the running state because its job was deleted after TTL expiration. An internal finalizer now keeps the job until it finishes.

  • K8SPG-722 - Documented the replica reinitialization behavior.

  • K8SPG-772 - Fixed the issue with the WAL watcher panicking if some backups have no CompletedAt status field, by using CreationTimestamp as a fallback.

  • K8SPG-782 - Fixed the issue with the crashing WALWatcher by assigning the Patroni version to the status when the Patroni label is configured through the Custom Resource option.

  • K8SPG-785 - Fixed PMM template in Helm chart (Thank you user Nik for reporting this issue)

  • K8SPG-792 - Added the ability to configure and use images defined in environment variables when starting a cluster (Thank you Jakub Jaruszewski for reporting this issue)

  • K8SPG-799 - Fixed the issue with the cluster being blocked due to the inability to pull the image for the Patroni Version Detector Pod when imagePullSecrets is configured. The issue is fixed by respecting this configuration for the Patroni version check Pod. (Thank you Baptiste Balmon for reporting this issue)

  • K8SPG-804 - Fixed an issue where outdated cluster state could cause a duplicate backup job to be created, blocking new backups. The issue was fixed by ensuring reconcileManualBackup fetches the latest postgrescluster state.

  • K8SPG-812 - Fixed image in PerconaPGUpgrade example

Deprecation, Change, Rename and Removal

  • New repositories for pgBouncer and pgBackRest

    Now the Operator uses the official Percona Docker images for pgBouncer and pgBackRest components. Pay attention to the new image repositories when you upgrade the Operator and the database. Check the Percona certified images for exact image names.

  • Changes in image pulling on OpenShift

    Starting with OpenShift version 4.19, the way Operator images are pulled has changed. The registry name must now be specified in image paths to ensure the images are pulled successfully from Docker Hub.

    All Custom Resource manifests now include the registry name in image paths. This enables you to successfully install the Operator using the default manifests from Git repositories. If you upgrade the Operator and the database cluster via the command line interface, add the docker.io registry name to image paths for all components in the format:

    "docker.io/percona/percona-postgresql-operator:2.7.0-ppg17.5.2-postgres"
    

    Follow our upgrade documentation for update guidelines.

Supported software

The Operator 2.7.0 was developed and tested with:

  • PostgreSQL 13.21, 14.18, 15.13, 16.9, 17.5.2 as the database. Other versions may also work but have not been tested.
  • pgBouncer 1.24.1 for connection pooling
  • Patroni version 4.0.5 for high availability
  • PostGIS version 3.3.8

Supported platforms

Percona Operators are designed for compatibility with all CNCF-certified Kubernetes distributions.

Our release process includes targeted testing and validation on major cloud provider platforms and OpenShift, as detailed below for Operator version 2.7.0:

This list only includes the platforms that the Percona Operators are specifically tested on as part of the release process. Other Kubernetes flavors and versions depend on the backward compatibility offered by Kubernetes itself.