Releases: percona/percona-postgresql-operator
v2.7.0
Release Highlights
This release provides the following features and improvements:
PMM3 support
The Operator is natively integrated with PMM 3, enabling you to monitor the health and performance of your Percona Distribution for PostgreSQL deployment and at the same time enjoy enhanced performance, new features, and improved security that PMM 3 provides.
Note that the Operator supports both PMM2 and PMM3. The PMM version used depends on the authentication method you provide in the Operator configuration: PMM2 uses API keys, while PMM3 uses service account tokens. If the Operator configuration contains both authentication methods with non-empty values, PMM3 takes priority.
To use PMM, ensure that the PMM client image is compatible with the PMM Server version. Check Percona certified images for the correct client image.
For how to configure monitoring with PMM, see the documentation.
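As a sketch of how the authentication method selects the PMM version, the monitoring Secret might look as follows. The Secret name and key names here are assumptions based on the common `cluster1-pmm-secret` convention and may differ in your deployment:

```yaml
apiVersion: v1
kind: Secret
metadata:
  # Assumption: the Secret name follows the <cluster-name>-pmm-secret convention
  name: cluster1-pmm-secret
type: Opaque
stringData:
  # PMM2 authentication: API key (leave empty if unused)
  PMM_SERVER_KEY: ""
  # PMM3 authentication: service account token; a non-empty value here
  # makes the Operator prefer PMM3 even if an API key is also set
  PMM_SERVER_TOKEN: "<service-account-token>"
```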
Improved monitoring for clusters in multi-region or multi-namespace deployments in PMM
Now you can define a custom name for your clusters deployed in different data centers. This name helps Percona Monitoring and Management (PMM) Server to correctly recognize clusters as connected and monitor them as one deployment. Similarly, PMM Server identifies clusters deployed with the same name in different namespaces as separate ones and correctly displays performance metrics for you on dashboards.
To assign a custom name, define this configuration in the Custom Resource manifest for your cluster:
```yaml
spec:
  pmm:
    customClusterName: postgresql-cluster
```
Added labels to identify the version of the Operator
The Custom Resource Definition (CRD) is compatible with the last three Operator versions. To know which Operator version is attached to it, we've added labels to all Custom Resource Definitions. The labels help you identify the current Operator version and decide if you need to update the CRD.
To view the labels, run:

```shell
kubectl get crd perconapgclusters.pgv2.percona.com --show-labels
```
Grant users access to a public schema
Starting with PostgreSQL 15, a non-database owner cannot access the default `public` schema and cannot create tables in it. We have improved this behavior so that the Operator creates a user and a schema with the name matching the username for all databases listed for this user. This custom schema is set by default, enabling you to work in the database right away.
You can explicitly grant access to the `public` schema for a non-superuser by setting the `grantPublicSchemaAccess` option to `true`. This grants the user permission to create and update tables in the `public` schema of every database they own. If multiple users are granted access to the `public` schema in the same database, each user can only access the tables they have created themselves. If you want one user to access tables created by another user in the `public` schema, the owner of those tables must connect to PostgreSQL and explicitly grant the necessary privileges to the other user.
Superusers have access to the `public` schema for their databases by default.
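A minimal sketch of this option in the Custom Resource, assuming a declaratively managed user (the user and database names are illustrative):

```yaml
spec:
  users:
    - name: app-user
      databases:
        - app-db
      # Assumption: grantPublicSchemaAccess is set per user;
      # grants create/update on the public schema of every
      # database this user owns
      grantPublicSchemaAccess: true
```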
Improved troubleshooting with the ability to override Patroni configuration
You can now override Patroni configuration for the whole cluster as well as for an individual Pod. This gives you more control over the database and simplifies troubleshooting.
Also, you can redefine what method the Operator will use when it creates replica instances in your PostgreSQL cluster. For example, to force the Operator to use `basebackup`, edit the `deploy/cr.yaml` manifest:

```yaml
patroni:
  createReplicaMethods:
    - basebackup
    - pgbackrest
```
Note that after you apply this configuration, the Operator updates the Patroni ConfigMap, but it doesn't apply this configuration to Patroni. You must manually reload the Patroni configuration on every database instance for it to take effect.
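A sketch of such a reload, assuming a cluster named `cluster1` and the default `database` container; the Pod name is a placeholder for one of your instance Pods:

```shell
# Reload the Patroni configuration on one database instance;
# repeat for every instance Pod in the cluster.
kubectl exec -it cluster1-instance1-0 -c database -- \
  patronictl reload cluster1 --force
```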
Read more about these troubleshooting methods in the documentation.
Changelog
New features
- K8SPG-615 - Introduced a custom delay on the entrypoint of the backup Pod. The backup process waits the defined time before connecting to the API server.
- K8SPG-708, K8SPG-663 - Added the sleep-forever feature to keep a database container running.
- K8SPG-712 - Added the ability to control every parameter supported by Patroni configuration.
- K8SPG-725 - Added the ability to configure resources for the repo-host container.
- K8SPG-719 - Added support for PMM v3.
Improvements
- K8SPG-571 - Added the ability to access a public schema for a non-superuser custom user for every database listed for them.
- K8SPG-612 - Updated the pgBouncer image to use the official percona-pgbouncer Docker image.
- K8SPG-613 - Updated the pgBackRest image to use the official percona-pgbackrest Docker image.
- K8SPG-654 - Added the ability to add custom parameters in the Custom Resource and pass them to PMM.
- K8SPG-675 - Added the ability to define resource requests for CPU and memory.
- K8SPG-704 - Added the ability to configure create_replica_methods for Patroni.
- K8SPG-710 - Added the ability to disable backups.
- K8SPG-715 - Improved the custom-extensions e2e test by adding pgvector.
- K8SPG-726 - Added the ability to define a security context for all sidecar containers.
- K8SPG-729 - Added labels to Custom Resource Definitions (CRD) to identify the Operator version attached to them.
- K8SPG-732 - Enhanced readability of pgbackrest debug logs by printing log messages on separate lines.
- K8SPG-738 - Added a startup log to the Operator Pod to print the commit hash, branch, and build time.
- K8SPG-743 - Disabled client-side rate limiting in the Kubernetes Go client to avoid throttling errors when managing multiple clusters with a single operator. This change leverages Kubernetes' server-side Priority and Fairness mechanisms introduced in v1.20 and later. (Thank you Joshua Sierles for contributing to this issue)
- K8SPG-744 - Improved the Contributing guide with the steps to build the Operator for development purposes.
- K8SPG-717, K8SPG-750 - Added the ability to define a custom cluster name for PMM for filtering.
- K8SPG-753 - Added the ability to enable pg_stat_statements instead of pg_stat_monitor.
- K8SPG-761 - Added the ability to add concurrent reconciliation workers.
- K8SPG-828 - Added the registry name to images due to OpenShift 4.19 changes.
Bugs Fixed
- K8SPG-532 - Improved log visibility by including logs about a missing data source in INFO logs.
- K8SPG-574 - Added pg_repack to the list of built-in extensions in the Custom Resource.
- K8SPG-661 - Added documentation about replica reinitialization in the Operator.
- K8SPG-677 - Made the imagePullPolicy in the pg-db Helm chart configurable.
- K8SPG-680 - Prevented scheduled backups from starting until the volume expansion completes successfully.
- K8SPG-698 - Fixed the issue with the pgbackrest service account not being created and reconciliation failing by creating the StatefulSet for this service account first.
- K8SPG-703 - Fixed the issue with the backup Pod being stuck in a running state due to running jobs being deleted because of the TTL expiration by adding an internal finalizer to keep the job running until it finishes.
- K8SPG-722 - Documented the replica reinitialization behavior.
- K8SPG-772 - Fixed the issue with the WAL watcher panicking if some backups have no CompletedAt status field by using CreationTimestamp as a fallback.
- K8SPG-782 - Fixed the issue with the crashing WALWatcher by assigning the Patroni version to the status when the Patroni label is configured through the Custom Resource option.
- K8SPG-785 - Fixed ...
v2.6.0
Release Highlights
Backup improvements
This release implements several improvements to the backup and restore process:
- A new delete-backups finalizer was implemented to automatically remove all backups when deleting the cluster. This finalizer is off by default. It's experimental and, therefore, is not recommended for production environments.
- Backup logic was improved and now allows retrying a failed backup in the same backup Pod a specified number of times before deleting this Pod and creating a new one. This should be beneficial in case of short connectivity issues or timeouts. This behavior is controlled by the new backups.pgbackrest.jobs.backoffLimit and backups.pgbackrest.jobs.restartPolicy Custom Resource options.
- You can now overwrite the default restore command for pgBackRest via the patroni.dynamicConfiguration Custom Resource option. In particular, this allows you to control and filter files restored to the pg_wal directory without editing these files in the backup repository storage.
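A sketch of both options in the Custom Resource. The values are illustrative, not defaults, and the restore_command override assumes PostgreSQL 12+, where restore_command is a regular server parameter that Patroni can set via `postgresql.parameters`:

```yaml
spec:
  backups:
    pgbackrest:
      jobs:
        # Retry a failed backup up to 3 times in the same Pod
        # before the Pod is deleted and recreated
        backoffLimit: 3
        restartPolicy: OnFailure
  patroni:
    dynamicConfiguration:
      postgresql:
        parameters:
          # Assumption: a custom restore_command overriding the
          # pgBackRest default; the stanza name is illustrative
          restore_command: "pgbackrest --stanza=db archive-get %f \"%p\""
```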
PostgreSQL 17 support
PostgreSQL 17 is now supported by the Operator in addition to versions 13 - 16. The appropriate images are now included in the list of Percona-certified images. See these blog posts for details about the latest PostgreSQL 17 features with the added security and functionality improvements:
- Encrypt PostgreSQL Data at Rest on Kubernetes by Ege Gunes
- The Powerful Features Released in PostgreSQL 17 Beta 2 by Shivam Dhapatkar
- PostgreSQL 17: Two Small Improvements That Will Have a Major Impact by David Stokes.
PostgreSQL 17 is currently not recommended for production environments due to the known limitation.
Update from April 1, 2025: We have added a PostgreSQL 17.4 image and database cluster components based on this image. It is now production-ready, and we recommend updating the database cluster from PostgreSQL 17.2 to 17.4. Check the upgrade instructions for steps.
pgvector is added to the PostgreSQL image
To support you with your AI journey, we've added the pgvector extension to the PostgreSQL images shipped with our Operator. Now you can easily use Percona Distribution for PostgreSQL as a vector database by simply enabling it in your Custom Resource options. No more custom extension installations needed.
New features
- K8SPG-628: The custom restore_command can be now passed to pgBackRest via the patroni.dynamicConfiguration Custom Resource option
- K8SPG-619: New backups.pgbackrest.jobs.backoffLimit and backups.pgbackrest.jobs.restartPolicy Custom Resource options allow retrying a backup in the backup Pod for a specified number of times before abandoning the Pod and creating a new one
- K8SPG-648: PostgreSQL 17 is now supported by the Operator
Improvements
- K8SPG-487: New spec.metadata.labels and spec.metadata.annotations Custom Resource options allow setting labels and annotations globally for all Kubernetes objects created by the Operator
- K8SPG-554: New tlsOnly Custom Resource option allows the user to enforce TLS connections for the database cluster
- K8SPG-586: The new experimental finalizers.delete-backups finalizer (off by default) removes all backups of the cluster at cluster deletion event
- K8SPG-634: The new autoCreateUserSchema Custom Resource option enhances the declarative user management by automatically creating per-user schemas
- K8SPG-652: Improve security and meet compliance requirements by using PostgreSQL images built based on Red Hat Universal Base Image (UBI) 9 instead of UBI 8
- K8SPG-692: Patroni versions 4.x are now supported by the Operator in addition to versions 3.x
- K8SPG-699: The pgvector extension is now included within the PostgreSQL image used by the Operator
- K8SPG-701: The extensions.image Custom Resource option is now optional and can be omitted for built-in PostgreSQL extensions
- K8SPG-702: A retry logic was implemented to fix intermittent Pod exec failures caused by timeouts (Thanks to dcaputo-harmoni for contribution)
- K8SPG-711: The new README.md explains how to build your own images for the PostgreSQL cluster components used by the Operator
Bugs Fixed
- K8SPG-594: Fix a bug where extension was still appearing in pg_extension table after being removed from Custom Resource and physically deleted by the Operator
- K8SPG-637: Fix a bug where restore was failing with “waiting for another restore to finish” if the pg-restore object of a previous unfinished restore was manually deleted
- K8SPG-638: Fix a bug that caused flooding the logs with the "no completed backups found" error at cluster initialization
- K8SPG-645: Fix a bug where creating sidecar containers for pgBouncer did not work
- K8SPG-681: Fixed a bug where the “Last Recoverable Time” information field was missing from the output of the kubectl get pg-backup command due to misdetection cases
- K8SPG-713: Fix a bug where the cluster not found errors were appearing in the Operator logs on cluster deletion
- K8SPG-377: Fix a bug where the Operator didn’t make full update of the pg_stat_monitor built-in PostgreSQL extension on database upgrade, requiring manual operations from the end user
Deprecation, Change, Rename and Removal
The new versions of Percona distribution for PostgreSQL used by the Operator come with Patroni 4.x, which introduces breaking changes compared to previously used 3.x versions.
To maintain backward compatibility, the Operator detects the Patroni version used in the image. It is also possible to disable this auto-detection feature by manually setting the Patroni version via the following annotation in the metadata part of the Custom Resource:

```yaml
pgv2.percona.com/custom-patroni-version: "4"
```
PostgreSQL 12 is no longer supported by the Operator 2.6.0 and newer versions.
Known limitations
The PostgreSQL 17.2 image and the images for other database cluster components based on PostgreSQL 17 contain the known CVE-2025-1094: a vulnerability in the libpq PostgreSQL client library, which makes images used by the Operator vulnerable to SQL injection within the PostgreSQL interactive terminal due to the lack of neutralizing quoting. Fixed images for PostgreSQL 17 will be available soon, while images for other PostgreSQL versions have already been fixed.
Supported platforms
The Operator {{ release }} is developed, tested and based on:
- PostgreSQL 13.18, 14.15, 15.10, 16.8, 17.2 and 17.4 as the database. Other versions may also work but have not been tested.
- pgBouncer for connection pooling:
  - version 1.23.1 - for PostgreSQL 17.2
  - version 1.24.0 - for PostgreSQL 13.20, 14.17, 15.12, 16.8, 17.4
- Patroni for high availability:
  - version 4.0.5 - for PostgreSQL 17.4
  - version 4.0.3 - for PostgreSQL 17.2
  - version 4.0.4 - for PostgreSQL 13.20, 14.17, 15.12, 16.8
Percona Operators are designed for compatibility with all CNCF-certified Kubernetes distributions.
Our release process includes targeted testing and validation on major cloud provider platforms and OpenShift, as detailed below for Operator version 2.6.0:
- Google Kubernetes Engine (GKE) 1.29 - 1.31
- Amazon Elastic Container Service for Kubernetes (EKS) 1.29 - 1.32
- OpenShift 4....
v2.5.1
Release Highlights
This release fixes the CVE-2025-1094 vulnerability in the libpq PostgreSQL client library, which made images used by the Operator vulnerable to SQL injection within the PostgreSQL interactive terminal due to the lack of neutralizing quoting. For now, the fix includes the image of PostgreSQL 16.8 and other database cluster images based on PostgreSQL 16.8. Fixed images for other PostgreSQL versions are to follow in the upcoming days.
Update from March 04, 2025: images of PostgreSQL 15.12 and other database cluster components based on PostgreSQL 15.12 were added.
Update from March 06, 2025: images of PostgreSQL 14.17 and other database cluster components based on PostgreSQL 14.17 were added.
Update from March 07, 2025: images of PostgreSQL 13.20 and other database cluster components based on PostgreSQL 13.20 were added.
Supported platforms
The Operator was developed and tested with PostgreSQL versions 14.17, 15.12, and 16.8. Other options may also work but have not been tested. The Operator 2.5.1 provides connection pooling based on pgBouncer 1.24.0 and high-availability implementation based on Patroni 3.3.2.
The following platforms were tested and are officially supported by the Operator 2.5.1:
- Google Kubernetes Engine (GKE) 1.28-1.30
- Amazon Elastic Container Service for Kubernetes (EKS) 1.28-1.30
- OpenShift Container Platform 4.13.46 - 4.16.7
- Azure Kubernetes Service (AKS) 1.28-1.30
- Minikube 1.34.0 with Kubernetes 1.31.0
This list only includes the platforms that the Percona Operators are specifically tested on as part of the release process. Other Kubernetes flavors and versions depend on the backward compatibility offered by Kubernetes itself.
v2.5.0
Release Highlights
Automated storage scaling
Starting from this release, the Operator is able to detect if the storage usage on the PVC reaches a certain threshold and trigger the PVC resize. Such autoscaling needs the upstream auto-growable disk feature turned on when deploying the Operator. This is done via the `PGO_FEATURE_GATES` environment variable set in the `deploy/operator.yaml` manifest (or in the appropriate part of `deploy/bundle.yaml`):

```yaml
- name: PGO_FEATURE_GATES
  value: "AutoGrowVolumes=true"
```
When the support for auto-growable disks is turned on, the `spec.instances[].dataVolumeClaimSpec.resources.limits.storage` Custom Resource option sets the maximum value available for the Operator to scale up.
See official documentation for more details and limitations of the feature.
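Putting this together, a sketch of an instance spec with an auto-growable volume; the instance name and storage sizes are illustrative:

```yaml
spec:
  instances:
    - name: instance1
      dataVolumeClaimSpec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            # Initial volume size
            storage: 10Gi
          limits:
            # Upper bound for automatic scaling when the
            # AutoGrowVolumes feature gate is enabled
            storage: 50Gi
```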
Major versions upgrade improvements
Major version upgrade, introduced in the Operator version 2.4.0 as a tech preview, has undergone some improvements. Now it is possible to upgrade from one PostgreSQL major version to another with custom images for the database cluster components (PostgreSQL, pgBouncer, and pgBackRest). The upgrade is still triggered by applying a YAML manifest with the information about the existing and desired major versions, which now includes image names. The resulting manifest may look as follows:
```yaml
apiVersion: pgv2.percona.com/v2
kind: PerconaPGUpgrade
metadata:
  name: cluster1-15-to-16
spec:
  postgresClusterName: cluster1
  image: percona/percona-postgresql-operator:2.4.1-upgrade
  fromPostgresVersion: 15
  toPostgresVersion: 16
  toPostgresImage: percona/percona-postgresql-operator:2.5.0-ppg16.4-postgres
  toPgBouncerImage: percona/percona-postgresql-operator:2.5.0-ppg16.4-pgbouncer1.23.1
  toPgBackRestImage: percona/percona-postgresql-operator:2.5.0-ppg16.4-pgbackrest2.53-1
```
Azure Kubernetes Service and Azure Blob Storage support
Azure Kubernetes Service (AKS) is now an officially supported platform, so developers and vendors of solutions based on the Azure platform can take advantage of official support from Percona or use officially certified Percona Operator for PostgreSQL images. Also, Azure Blob Storage can now be used for backups.
New features
- K8SPG-227 and K8SPG-157: Add support for the Azure Kubernetes Service (AKS) platform and allow using Azure Blob Storage for backups
- K8SPG-244: Automated storage scaling is now supported
Improvements
- K8SPG-630: A new backups.trackLatestRestorableTime Custom Resource option allows disabling latest restorable time tracking for users who need to reduce S3 API call usage
- K8SPG-605 and K8SPG-593: Documentation now includes information about upgrading the Operator via Helm and using databaseInitSQL commands
- K8SPG-598: Database major version upgrade now supports custom images
- K8SPG-560: A pg-restore Custom Resource is now automatically created at bootstrapping a new cluster from an existing backup
- K8SPG-555: The Operator now creates separate Secret with CA certificate for each cluster
- K8SPG-553: Users can provide the Operator with their own root CA certificate
- K8SPG-454: Cluster status obtained with kubectl get pg command is now “ready” not only when all Pods are ready, but also takes into account if all StatefulSets are up to date
- K8SPG-577: A new pmm.querySource Custom Resource option allows to set PMM query source
Bugs Fixed
- K8SPG-629: Fix a bug where the Operator was not deleting backup Pods when cleaning outdated backups according to the retention policy
- K8SPG-499: Fix a bug where cluster was getting stuck in the init state if pgBackRest secret didn’t exist
- K8SPG-588: Fix a bug where the Operator didn’t stop WAL watcher if the namespace and/or cluster were deleted
- K8SPG-644: Fix a bug in the pg-db Helm chart which prevented from setting more than one Toleration
Deprecation, Change, Rename and Removal
With the Operator versions prior to 2.5.0, autogenerated TLS certificates for all database clusters were based on the same generated root CA. Starting from 2.5.0, the Operator creates root CA on a per-cluster basis.
Supported platforms
The Operator was developed and tested with PostgreSQL versions 12.20, 13.16, 14.13, 15.8, and 16.4. Other options may also work but have not been tested. The Operator 2.5.0 provides connection pooling based on pgBouncer 1.23.1 and high-availability implementation based on Patroni 3.3.2.
The following platforms were tested and are officially supported by the Operator 2.5.0:
- Google Kubernetes Engine (GKE) 1.28-1.30
- Amazon Elastic Container Service for Kubernetes (EKS) 1.28-1.30
- OpenShift Container Platform 4.13.46 - 4.16.7
- Azure Kubernetes Service (AKS) 1.28-1.30
- Minikube 1.34.0 with Kubernetes 1.31.0
This list only includes the platforms that the Percona Operators are specifically tested on as part of the release process. Other Kubernetes flavors and versions depend on the backward compatibility offered by Kubernetes itself.