Releases: CrunchyData/postgres-operator

4.3.2 RC 1

27 May 12:26

Pre-release
v4.3.2-rc.1

4.3.2 release notes

4.3.1

21 May 13:06

Crunchy Data announces the release of the PostgreSQL Operator 4.3.1 on May 21, 2020.

The PostgreSQL Operator is released in conjunction with the Crunchy Container Suite.

The PostgreSQL Operator 4.3.1 release includes the following software version upgrades:

  • The PostgreSQL containers now use versions 12.3, 11.8, 10.13, 9.6.18, and 9.5.22

PostgreSQL Operator is tested with Kubernetes 1.13 - 1.18, OpenShift 3.11+, OpenShift 4.3+, Google Kubernetes Engine (GKE), and VMware Enterprise PKS 1.3+.

Changes

Initial Support for SCRAM

SCRAM is a password authentication method in PostgreSQL that has been available since PostgreSQL 10 and is considered superior to the md5 authentication method. The PostgreSQL Operator now introduces support for SCRAM on the pgo create user and pgo update user commands by means of the --password-type flag. The values of --password-type map to the following authentication methods:

  • --password-type="", --password-type="md5" => md5
  • --password-type="scram", --password-type="scram-sha-256" => SCRAM-SHA-256

In turn, the PostgreSQL Operator will hash the passwords based on the chosen method and store the computed hash in PostgreSQL.

When using SCRAM support, it is important to note the following observations and limitations:

  • When using one of the password modification flags on pgo update user (e.g. --password, --rotate-password, --expires) and you want the stored password to remain SCRAM-hashed, you must also specify the "--password-type=scram-sha-256" directive.
  • SCRAM does not work with the current pgBouncer integration with the PostgreSQL Operator. pgBouncer presently supports only one password-based authentication type at a time. Additionally, to enable support for SCRAM, pgBouncer would require a list of plaintext passwords to be stored in a file that is accessible to it. Future work can evaluate how to leverage SCRAM support with pgBouncer.
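As a sketch of selecting SCRAM for a user (the cluster and user names here are illustrative, and the --username flag is assumed from the standard pgo user commands):

```shell
# Create a user whose password is hashed and stored as SCRAM-SHA-256
pgo create user hippo --username=hippo_user --password-type=scram-sha-256

# Rotate the password while keeping the SCRAM hash; omitting --password-type
# here would fall back to md5
pgo update user hippo --username=hippo_user --rotate-password \
  --password-type=scram-sha-256
```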

pgo restart and pgo reload

This release introduces the pgo restart command, which allows you to perform a PostgreSQL restart on one or more instances within a PostgreSQL cluster.

You can restart all instances at the same time using the following command:

pgo restart hippo

or restart a specific instance using the --target flag (which follows a similar behavior to the --target flag on pgo scaledown and pgo failover):

pgo restart hippo --target=hippo-abcd

The restart itself is performed by calling the Patroni restart REST endpoint on the specific instance (primary or replica) being restarted.

As with the pgo failover and pgo scaledown commands, it is also possible to specify the --query flag to query instances available for restart:

pgo restart mycluster --query

With the addition of the pgo restart command, the output of the --query flag on the pgo failover and pgo scaledown commands now includes PENDING RESTART information, which is returned alongside any replication information.

This release also fixes the pgo reload command so that it properly reloads all instances (i.e. the primary and all replicas) within the cluster.

Dynamic Namespace Mode and Older Kubernetes Versions

The dynamic namespace mode (e.g. pgo create namespace + pgo delete namespace) provides the ability to create and remove Kubernetes namespaces and automatically add them into the purview of the PostgreSQL Operator. Through the course of fixing usability issues with the other namespace modes (readonly, disabled), a change needed to be introduced that broke compatibility with Kubernetes 1.12 and earlier.

The PostgreSQL Operator still supports managing PostgreSQL Deployments across multiple namespaces in Kubernetes 1.12 and earlier, but only with readonly mode. In readonly mode, a cluster administrator needs to create the namespace and the RBAC needed to run the PostgreSQL Operator in that namespace. However, it is now possible to define the RBAC required for the PostgreSQL Operator to manage clusters in a namespace via a ServiceAccount, as described in the Namespace section of the documentation.

This usability change allows one to add namespaces to the PostgreSQL Operator's purview (or deploy the PostgreSQL Operator within a namespace) and automatically set up the appropriate RBAC for the PostgreSQL Operator to operate correctly.

Other Changes

  • The RBAC required for deploying the PostgreSQL Operator is now decomposed into the exact privileges that are needed. This removes the need for requiring a cluster-admin privilege for deploying the PostgreSQL Operator. Reported by (@obeyler).
  • With the disabled and readonly namespace modes, the PostgreSQL Operator will now dynamically create the required RBAC when a new namespace is added if that namespace has the RBAC defined in local-namespace-rbac.yaml. This occurs when PGO_DYNAMIC_NAMESPACE is set to true.
  • If the PostgreSQL Operator has permissions to manage its own RBAC within a namespace, it will now reconcile and auto-heal that RBAC as needed (e.g. if it is invalid or has been removed) to ensure it can properly interact with and manage that namespace.
  • Add default CPU and memory limits for the metrics collection and pgBadger sidecars to help deployments that wish to have a Pod QoS of Guaranteed. The metrics defaults are 100m/24Mi and the pgBadger defaults are 500m/24Mi. Reported by (@jose-joye).
  • Introduce DISABLE_FSGROUP option as part of the installation. When set to true, this does not add a FSGroup to the Pod Security Context when deploying PostgreSQL related containers or pgAdmin 4. This is helpful when deploying the PostgreSQL Operator in certain environments, such as OpenShift with a restricted Security Context Constraint. Defaults to false.
  • Remove the custom Security Context Constraint (SCC) that would be deployed with the PostgreSQL Operator, so now the PostgreSQL Operator can be deployed using default OpenShift SCCs (e.g. "restricted", though note that DISABLE_FSGROUP will need to be set to true for that). The example PostgreSQL Operator SCC is left in the examples directory for reference.
  • When PGO_DISABLE_TLS is set to true, then PGO_TLS_NO_VERIFY is set to true.
  • Some of the pgo-deployer environment variables that do not need to be set by a user were internalized. These include ANSIBLE_CONFIG and HOME.
  • When using the pgo-deployer container to install the PostgreSQL Operator, update the default watched namespace to pgo as the example only uses this namespace.
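For reference, a minimal sketch of how some of these installer settings might appear in a pgo-deployer manifest's environment (the variable names come from these notes; the values are illustrative):

```shell
# pgo-deployer environment (illustrative values)
DISABLE_FSGROUP=true    # skip adding an FSGroup, e.g. for OpenShift's restricted SCC
PGO_DISABLE_TLS=true    # disabling TLS also sets PGO_TLS_NO_VERIFY=true
```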

Fixes

  • Fix for cloning a PostgreSQL cluster when the pgBackRest repository is stored in S3.
  • The pgo show namespace command now properly indicates which namespaces a user is able to access.
  • Ensure the pgo-apiserver will successfully run if PGO_DISABLE_TLS is set to true. Reported by (@zhubx007).
  • Prevent a run of pgo-deployer from failing if it detects the existence of dependent cluster-wide objects already present.
  • Deployments with pgo-deployer using the default file with hostpathstorage will now successfully deploy PostgreSQL clusters without any adjustments.
  • Ensure image pull secrets are attached to deployments of the pgo-client container.
  • Ensure client-setup.sh executes to completion if existing PostgreSQL Operator credentials exist that were created by a different installation method.
  • Update the documentation to properly name CCP_IMAGE_PULL_SECRET_MANIFEST and PGO_IMAGE_PULL_SECRET_MANIFEST in the pgo-deployer configuration.
  • Several fixes for selecting default storage configurations and sizes when using the pgo-deployer container. These include #1, #4, and #8 in the STORAGE family of variables.
  • The custom setup example was updated to reflect the current state of bootstrapping the PostgreSQL container.

4.2.3

21 May 13:06

Crunchy Data announces the release of the PostgreSQL Operator 4.2.3 on May 21, 2020.

The PostgreSQL Operator is released in conjunction with the Crunchy Container Suite.

The PostgreSQL Operator 4.2.3 release includes the following software version upgrades:

  • The PostgreSQL containers now use versions 12.3, 11.8, 10.13, 9.6.18, and 9.5.22

PostgreSQL Operator is tested with Kubernetes 1.13 - 1.18, OpenShift 3.11+, OpenShift 4.3+, Google Kubernetes Engine (GKE), and VMware Enterprise PKS 1.3+.

Changes

  • The PostgreSQL containers now include support for using the JIT compilation feature introduced in PostgreSQL 11
  • The pgBackRest stanza creation and backup jobs are retried until successful, following the Kubernetes default number of retries (6)
  • POSIX shared memory is now used for the PostgreSQL Deployments.
  • Quote identifiers for the database name and user name in bootstrap scripts for the PostgreSQL containers
  • The pgo-rmdata Job no longer calls the rm command on any data within the PVC, but rather leaves this task to the storage provisioner

Fixes

  • Fix for cloning a PostgreSQL cluster when the pgBackRest repository is stored in S3.
  • The pgo show namespace command now properly indicates which namespaces a user is able to access.
  • Ensure rsync is installed on the pgo-backrest-repo-sync UBI7 image.
  • Default the recovery action to "promote" when performing a "point-in-time-recovery" (PITR), which will ensure that a PITR process completes.
  • Report errors in a SQL policy at the time pgo apply is executed, which was the previous behavior. Reported by José Joye (@jose-joye).
  • Allow the standard PostgreSQL user created with the Operator to be able to create and manage objects within its own user schema. Reported by Nicolas HAHN (@hahnn).
  • Correctly set the default value for archive_timeout when new PostgreSQL clusters are initialized. Reported by Adrian (@adifri).
  • Allow the original primary to be removed with pgo scaledown after it is failed over.
  • The pgo-rmdata Job will not fail if a PostgreSQL cluster has not been properly initialized.
  • The failover ConfigMap for a PostgreSQL cluster is now removed when the cluster is deleted.
  • The custom setup example was updated to reflect the current state of bootstrapping the PostgreSQL container.
  • The replica Service is now properly managed based on the existence of replicas in a PostgreSQL cluster, i.e. if there are replicas, the Service exists, if not, it is removed.

4.3.0

01 May 22:53

Crunchy Data announces the release of the PostgreSQL Operator 4.3.0 on May 1, 2020.

The PostgreSQL Operator is released in conjunction with the Crunchy Container Suite.

The PostgreSQL Operator 4.3.0 release includes the following software version upgrades:

  • The PostgreSQL containers now use versions 12.2, 11.7, 10.12, 9.6.17, and 9.5.21
    • This now includes support for using the JIT compilation feature introduced in PostgreSQL 11
  • PostgreSQL containers now support PL/Python3
  • pgBackRest is now at version 2.25
  • Patroni is now at version 1.6.5
  • postgres_exporter is now at version 0.7.0
  • pgAdmin 4 is at 4.18

PostgreSQL Operator is tested with Kubernetes 1.13 - 1.18, OpenShift 3.11+, OpenShift 4.3+, Google Kubernetes Engine (GKE), and VMware Enterprise PKS 1.3+.

Major Features

Standby Clusters + Multi-Kubernetes Deployments

A key component of building database architectures that can ensure continuity of operations is having the database available across multiple data centers. In Kubernetes, this means being able to run the PostgreSQL Operator in multiple Kubernetes clusters, have PostgreSQL clusters exist in these Kubernetes clusters, and ensure the "standby" deployment is promoted only in the event of an outage or planned switchover.

As of this release, the PostgreSQL Operator now supports standby PostgreSQL clusters that can be deployed across namespaces or other Kubernetes or Kubernetes-enabled clusters (e.g. OpenShift). This is accomplished by leveraging the PostgreSQL Operator's support for pgBackRest and an intermediary, i.e. S3, to provide the ability for the standby cluster to read in the PostgreSQL archives and replicate the data. This allows a user to quickly promote a standby PostgreSQL cluster in the event that the primary cluster suffers downtime (e.g. a data center outage), or for planned switchovers such as Kubernetes cluster maintenance or moving a PostgreSQL workload from one data center to another.

To support standby clusters, there are several new flags available on pgo create cluster that are required to set up a new standby cluster. These include:

  • --standby: If set, creates the PostgreSQL cluster as a standby cluster.
  • --pgbackrest-repo-path: Allows the user to override the pgBackRest repository path for a cluster. While this setting can now be utilized when creating any cluster, it is typically required for the creation of standby clusters as the repository path will need to match that of the primary cluster.
  • --password-superuser: When creating a standby cluster, allows the user to specify a password for the superuser that matches the superuser account in the cluster the standby is replicating from.
  • --password-replication: When creating a standby cluster, allows the user to specify a password for the replication user that matches the superuser account in the cluster the standby is replicating from.

Note that the --password flag must be used to ensure the password of the main PostgreSQL user account matches that of the primary PostgreSQL cluster, if you are using Kubernetes to manage the user's password.

For example, if you have a cluster named hippo and wanted to create a standby cluster called hippo-standby, and assuming the S3 credentials use the defaults provided to the PostgreSQL Operator, you could execute a command similar to:

pgo create cluster hippo-standby --standby \
  --pgbackrest-repo-path=/backrestrepo/hippo-backrest-shared-repo \
  --password-superuser=superhippo \
  --password-replication=replicahippo

To shut down the primary cluster (if you can), you can execute a command similar to:

pgo update cluster hippo --shutdown

To promote the standby cluster to be able to accept write traffic, you can execute the following command:

pgo update cluster hippo-standby --promote-standby

To convert the old primary cluster into a standby cluster, you can execute the following command:

pgo update cluster hippo --enable-standby

Once the old primary is converted to a standby cluster, you can bring it online with the following command:

pgo update cluster hippo --startup

For information on the architecture and how to set up a standby PostgreSQL cluster, please refer to the documentation.

At present, streaming replication between the primary and standby clusters is not supported, but the PostgreSQL instances within each cluster do support streaming replication.

Installation via the pgo-deployer container

Installation and upgrading have long been two of the biggest challenges of using the PostgreSQL Operator. This release makes improvements to both (upgrading is described in the next section).

For installation, we have introduced a new container called pgo-deployer. For environments that use hostpath storage (e.g. minikube), installing the PostgreSQL Operator can be as simple as:

kubectl create namespace pgo
kubectl apply -f https://raw.githubusercontent.com/CrunchyData/postgres-operator/v4.3.0/installers/kubectl/postgres-operator.yml

The pgo-deployer container can be configured by a manifest called postgres-operator.yml and provides a set of environmental variables that should be familiar from using the other installers.

The pgo-deployer launches a Job in the namespace that the PostgreSQL Operator will be installed into and sets up the requisite Kubernetes objects: CRDs, Secrets, ConfigMaps, etc.

The pgo-deployer container can also be used to uninstall the PostgreSQL Operator. For more information, please see the installation documentation.

Automatic PostgreSQL Operator Upgrade Process

One of the biggest challenges to using a newer version of the PostgreSQL Operator was upgrading from an older version.

This release introduces the ability to automatically upgrade from an older version of the Operator (as early as 4.1.0) to the newest version (4.3.0) using the pgo upgrade command.

The pgo upgrade command follows a process similar to the manual PostgreSQL Operator upgrade process, but instead automates it.

To find out more about how to upgrade the PostgreSQL Operator, please review the upgrade documentation.

Improved Custom Configuration for PostgreSQL Clusters

The configuration of a PostgreSQL cluster can now be easily modified by making changes directly to the ConfigMap that is created with each PostgreSQL cluster. The ConfigMap, which follows the pattern <clusterName>-pgha-config (e.g. hippo-pgha-config for pgo create cluster hippo), manages the user-facing configuration settings available for a PostgreSQL cluster, and when modified, it will automatically synchronize the settings across all primaries and replicas in a PostgreSQL cluster.

4.3.0 Release Candidate 2

01 May 18:38

Pre-release
Update pgo client reference documentation

This includes the commands for the pgAdmin functionality.

4.3.0-rc.1

30 Apr 14:14

Pre-release
Update the autogenerated documentation for the commands

This is the usual pre-release run.

4.3.0 Beta 3

29 Apr 14:48

Pre-release

Crunchy Data is pleased to announce the release of PostgreSQL Operator 4.3.0 Beta 3. We encourage you to download it and try it out.

Changes Since Beta 2:

Breaking Changes

  • pgo-installer is renamed to pgo-deployer
  • The example files used by pgo-deployer have been renamed, e.g. deploy.yml
  • The names of the scheduled backups are shortened to use the pattern <clusterName>-<backupType>-sch-backup
  • The --cpu flags on pgo create cluster, pgo update cluster, pgo create pgbouncer, pgo update pgbouncer, as well as the --pgbackrest-cpu and --pgbouncer-cpu flags on the pgo create cluster and pgo update cluster commands now set both the Resource Request and Limit for CPU

Features

Automatic PostgreSQL Operator Upgrade Process

One of the biggest challenges to using a newer version of the PostgreSQL
Operator was upgrading from an older version.

This release introduces the ability to upgrade from an older version of the
Operator (as early as 4.1.0) to the newest version (4.3.0) using the
pgo upgrade command.

External WAL Volume

An optimization used for improving PostgreSQL performance related to file system
usage is to have the PostgreSQL write-ahead logs (WAL) written to a different
mounted volume than other parts of the PostgreSQL system, such as the data
directory.

To support this, the PostgreSQL Operator now supports the ability to specify
an external volume for writing the PostgreSQL write-ahead log (WAL) during
cluster creation, which carries through to replicas and clones. When not
specified, the WAL resides within the PGDATA directory and volume, which is
the present behavior.

To create a PostgreSQL cluster to use an external volume, one can use the
--wal-storage-config flag at cluster creation time to select the storage
configuration to use, e.g.

pgo create cluster --wal-storage-config=nfsstorage hippo

Additionally, it is also possible to specify the size of the WAL storage on all
newly created clusters. When in use, the size of the volume can be overridden
per-cluster. This is specified with the --wal-storage-size flag, i.e.

pgo create cluster --wal-storage-config=nfsstorage --wal-storage-size=10Gi hippo

This implementation does not define the WAL volume in any deployment templates
because the volume name and mount path are constant.

Improved Custom Configuration for PostgreSQL Clusters

The configuration of a PostgreSQL cluster can now be easily modified by
making changes directly to the ConfigMap that is created with each
PostgreSQL cluster. The ConfigMap, which follows the pattern
<clusterName>-pgha-config (e.g. hippo-pgha-config for
pgo create cluster hippo), manages the user-facing configuration settings
available for a PostgreSQL cluster, and when modified, it will automatically
synchronize the settings across all primaries and replicas in a PostgreSQL
cluster.

Presently, the ConfigMap can be edited using the kubectl edit cm command, and
future iterations will add functionality to the PostgreSQL Operator to make this
process easier.
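For example, for a cluster created with pgo create cluster hippo, editing the configuration might look like the following (the pgo namespace is illustrative):

```shell
# Open the cluster's configuration ConfigMap in an editor; saved changes are
# synchronized to the primary and all replicas automatically
kubectl -n pgo edit cm hippo-pgha-config
```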

Other Features

  • A script called client-setup.sh is now included to set up the client libraries after the install process from pgo-deployer is run
  • The pgo-deployer now deploys the metrics stack (Prometheus, Grafana)
  • One can specify the "image prefix" (e.g. crunchydata) for the containers that are deployed by the PostgreSQL Operator. This adds two fields to the pgcluster CRD: CCPImagePrefix and PGOImagePrefix. These are also available on pgo create cluster with the --ccp-image-prefix and --pgo-image-prefix flags respectively
  • The pgBouncer deployment can be scaled. This can be done from the pgo create cluster --pgbouncer-replicas, pgo create pgbouncer --replicas, and pgo update pgbouncer --replicas flags. The default number of replicas is "1", which deploys one pgBouncer Pod. This and the other pgBouncer CRD attributes can also be set declaratively in a pgcluster CR.
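Using the flags above, scaling pgBouncer might look like the following (the cluster name is illustrative, and the --pgbouncer flag on pgo create cluster is assumed):

```shell
# Create a cluster with two pgBouncer Pods
pgo create cluster hippo --pgbouncer --pgbouncer-replicas=2

# Later, scale the existing pgBouncer Deployment to three Pods
pgo update pgbouncer hippo --replicas=3
```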

Fixes

  • The default PostgreSQL memory request was lowered to 128Mi to avoid crashes, particularly in test environments. This is actually set in the installers so it's easily configurable, and the "default of last resort" is still 512Mi to be consistent with the PostgreSQL defaults.
  • pgo-deployer can now work with external container registries as well as image pull secrets
  • The ConfigMap and ClusterRoleBinding for the pgo-deployer container are now cleaned up
  • The stanza-create Job now waits for both the PostgreSQL cluster and the pgBackRest repository to be ready before executing
  • The Grafana and Prometheus Pods no longer crash on GKE
  • pgo clone --metrics now enables the metrics on the cloned cluster. This fixes a regression with the feature
  • Ensure commands that execute SQL directly on a container (e.g. pgo show user) work if PostgreSQL is deployed to a nonstandard port

4.3.0 Beta 2

09 Apr 16:21
f8de33a

Pre-release

Crunchy Data announces the release of [PostgreSQL Operator 4.3.0 Beta 2](https://www.crunchydata.com/developers/download-postgres/containers/postgres-operator).

Changes since Beta 1:

Breaking Changes

  • "Limit" resource parameters are no longer set on the containers, in particular, the PostgreSQL container, due to undesired behavior stemming from the host machine OOM killer. Further details can be found in the original pull request.
  • Added DefaultInstanceMemory, DefaultBackrestMemory, and DefaultPgBouncerMemory options to the pgo.yaml configuration to allow for the setting of default memory requests for PostgreSQL instances, the pgBackRest repository, and pgBouncer instances respectively.
  • If unset by either the PostgreSQL Operator configuration or one-off, the default memory resource requests for the following applications are:
    • PostgreSQL: 512Mi
    • pgBackRest: 48Mi
    • pgBouncer: 24Mi
  • Remove the Default...ContainerResources set of parameters from the pgo.yaml configuration file.
  • Remove the PreferredFailoverNode feature, as it had already been effectively removed.
  • Remove explicit rm calls when cleaning up PostgreSQL clusters. This behavior is left to the storage provisioner that one deploys with their PostgreSQL instances.

Features

Elimination of ClusterRole Requirement for the PostgreSQL Operator

PostgreSQL Operator 4.0 introduced the ability to manage PostgreSQL clusters
across multiple Kubernetes Namespaces. PostgreSQL Operator 4.1 built on this
functionality by allowing users to dynamically control which Namespaces it
managed as well as the PostgreSQL clusters deployed to them. In order to
leverage this feature, one must grant a ClusterRole
level permission via a ServiceAccount to the PostgreSQL Operator.

There are many deployment environments for the PostgreSQL Operator that only
need it to exist within a single namespace, and as such, granting
cluster-wide privileges is superfluous and, in many cases, undesirable. It
should therefore be possible to deploy the PostgreSQL Operator to a single
namespace without requiring a ClusterRole.

To do this, but maintain the aforementioned Namespace functionality for those
who require it, PostgreSQL Operator 4.3 introduces the ability to opt into
deploying it with minimum required ClusterRole privileges and in turn, the
ability to deploy the PostgreSQL Operator without a ClusterRole. To do so, the
PostgreSQL Operator introduces the concept of a "namespace operating mode,"
which lets one select the type of deployment to create. The namespace mode is
set at install time for the PostgreSQL Operator and falls into one of three
options:

  • dynamic: This is the default. This enables full dynamic Namespace
    management capabilities, in which the PostgreSQL Operator can create, delete and
    update any Namespaces within the Kubernetes cluster, while then also having the
    ability to create the Roles, Role Bindings and Service Accounts within those
    Namespaces for normal operations. The PostgreSQL Operator can also listen for
    Namespace events and create or remove controllers for various Namespaces as
    changes are made to Namespaces from Kubernetes and the PostgreSQL Operator's
    management.

  • readonly: In this mode, the PostgreSQL Operator is able to listen for
    namespace events within the Kubernetes cluster, and then manage controllers
    as Namespaces are added, updated or deleted. While this still requires a
    ClusterRole, the permissions mirror those of a "read-only" environment, and
    as such the PostgreSQL Operator is unable to create, delete or update
    Namespaces itself, nor create the RBAC it requires in any of those
    Namespaces. Therefore, while in readonly mode, namespaces must be
    preconfigured with the proper RBAC as the PostgreSQL Operator cannot create
    the RBAC itself.

  • disabled: Use this mode if you do not want to deploy the PostgreSQL Operator
    with any ClusterRole privileges, especially if you are only deploying the
    PostgreSQL Operator to a single namespace. This disables any Namespace
    management capabilities within the PostgreSQL Operator and will simply attempt
    to work with the target Namespaces specified during installation. If no target
    Namespaces are specified, then the Operator will be configured to work within
    the namespace in which it is deployed. As with the readonly mode, while in
    this mode, Namespaces must be pre-configured with the proper RBAC, since the
    PostgreSQL Operator cannot create the RBAC itself.

Based on the installer you use, the variables to set this mode are either named:

  • PGO_NAMESPACE_MODE
  • namespace_mode

pgo-installer Improvements

The new pgo-installer container can now be run directly from kubectl, i.e.

kubectl apply -f /path/to/install.yml

Other New Features

  • pgo create cluster now supports the --pgbackrest-storage-config flag to specify a specific storage configuration for the pgBackRest repository in a cluster
  • pgo create cluster now supports the --pgbackrest-cpu, --pgbackrest-memory, --pgbouncer-cpu, and --pgbouncer-memory resource request flags for the pgBackRest repository and the pgBouncer instances respectively.
  • pgo create pgbouncer now supports the --cpu and --memory resource request flags for requesting container resources for the pgBouncer instances
  • pgo update cluster now supports the --pgbackrest-cpu and --pgbackrest-memory resource request flags for modifying container resources for the pgBackRest repository.
  • pgo update pgbouncer now supports the --cpu and --memory resource request flags for requesting container resources for the pgBouncer instances.
  • Specify a different S3 Certificate Authority (CA) with pgo create cluster by using the --pgbackrest-s3-ca-secret flag, which refers to an existing Secret that contains a key called aws-s3-ca.crt that contains the CA. Reported by Aurelien Marie (@aurelienmarie)
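A sketch of supplying a custom S3 CA (the Secret name, namespace, and file path are illustrative; the aws-s3-ca.crt key name is required):

```shell
# Create a Secret holding the CA certificate under the expected key
kubectl -n pgo create secret generic hippo-s3-ca \
  --from-file=aws-s3-ca.crt=/path/to/ca.crt

# Reference the Secret when creating the cluster
pgo create cluster hippo --pgbackrest-s3-ca-secret=hippo-s3-ca
```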

Fixes

  • Default the recovery action to "promote" when performing a "point-in-time-recovery" (PITR), which will ensure that a PITR process completes.
  • Fix performance degradation discovered in certain environments that use the dynamic Namespace feature. Reported by (@dadez)
  • Remove backoffLimit from Jobs that can be retried, which is most of them. Reported by Leo Khomenko (@lkhomenk)

4.3.0 Beta 1

26 Mar 13:48

Pre-release

Crunchy Data announces the release of the PostgreSQL Operator 4.3.0 Beta 1.

Details on the new major features, as well as other features and fixes, are
provided below. While we aim to have our betas feature complete, new features
and bug fixes will be introduced in subsequent beta releases of PostgreSQL
Operator 4.3.

We also aim to have quarterly releases. Due to some disruptions related to
ongoing world events, we are delaying the release but still aiming to make it
available in mid-April. This will give us some time to get a few more
features in and tighten up existing functionality. We hope that the wait will
be worth it for you.

With that said, we did want to make the beta available sooner so you could begin
evaluating it for your purposes.

The PostgreSQL Operator 4.3.0 release includes the following software version upgrades:

  • The PostgreSQL containers now use versions 12.2, 11.7, 10.12, 9.6.17, and 9.5.21
    • This now includes support for using the JIT compilation feature introduced
      in PostgreSQL 11
  • PostgreSQL containers now support PL/Python3
  • pgBackRest is now at version 2.24
  • Patroni is now at version 1.6.4
  • postgres_exporter is now at version 0.7.0

The PostgreSQL Operator is released in conjunction with the Crunchy Container Suite.

PostgreSQL Operator is tested with Kubernetes 1.13 - 1.17, OpenShift 3.11+,
Google Kubernetes Engine (GKE), and VMware Enterprise PKS 1.3+.

Major Features

  • Standby Clusters + Multi-Kubernetes Deployments
  • Set custom PVC sizes for PostgreSQL clusters on creation and clone
  • Tablespaces
  • All Operator commands now support TLS-only PostgreSQL workflows

Standby Clusters + Multi-Kubernetes Deployments

A key component of building database architectures that can ensure continuity
of operations is having the database available across multiple data centers.
In Kubernetes, this means being able to run the PostgreSQL Operator in
multiple Kubernetes clusters, have PostgreSQL clusters exist in these
Kubernetes clusters, and ensure the "standby" deployment is promoted only in
the event of an outage or planned switchover.

As of this release, the PostgreSQL Operator now supports standby PostgreSQL
clusters that can be deployed across namespaces or other Kubernetes or
Kubernetes-enabled clusters (e.g. OpenShift). This is accomplished by
leveraging the PostgreSQL Operator's support for pgBackRest and an
intermediary, i.e. S3, to provide the ability for the standby cluster to read
in the PostgreSQL archives and replicate the data. This allows a user to
quickly promote a standby PostgreSQL cluster in the event that the primary
cluster suffers downtime (e.g. a data center outage), or for planned
switchovers such as Kubernetes cluster maintenance or moving a PostgreSQL
workload from one data center to another.

To support standby clusters, there are several new flags available on
pgo create cluster that are required to set up a new standby cluster. These
include:

  • --standby: If set, creates the PostgreSQL cluster as a standby cluster
  • --pgbackrest-repo-path: Allows the user to override the pgBackRest
    repository path for a cluster. While this setting can now be utilized when
    creating any cluster, it is typically required for the creation of standby
    clusters as the repository path will need to match that of the primary cluster.
  • --password-superuser: When creating a standby cluster, allows the user to
    specify a password for the superuser that matches the superuser account in the
    cluster the standby is replicating from
  • --password-replication: When creating a standby cluster, allows the user to
    specify a password for the replication user that matches the replication
    account in the cluster the standby is replicating from

Note that the --password flag must be used to ensure the password of the main
PostgreSQL user account matches that of the primary PostgreSQL cluster, if you
are using Kubernetes to manage the user's password.

For example, if you have a cluster named hippo and want to create a standby
cluster called hippo-standby, and assuming the S3 credentials use the defaults
provided to the PostgreSQL Operator, you could execute a command similar to:

pgo create cluster hippo-standby --standby \
  --pgbackrest-repo-path=/backrestrepo/hippo-backrest-shared-repo \
  --password-superuser=superhippo \
  --password-replication=replicahippo

To shut down the primary cluster (if you can), you can execute a command similar
to:

pgo update cluster hippo --shutdown

To promote the standby cluster to be able to accept write traffic, you can
execute the following command:

pgo update cluster hippo-standby --promote-standby

To convert the old primary cluster into a standby cluster, you can execute the
following command:

pgo update cluster hippo --enable-standby

Once the old primary is converted to a standby cluster, you can bring it online
with the following command:

pgo update cluster hippo --startup
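Taken together, a planned switchover can be sketched as the following command
sequence, assuming the cluster names hippo and hippo-standby from the example
above:

# 1. Stop write traffic by shutting down the current primary (if reachable)
pgo update cluster hippo --shutdown

# 2. Promote the standby so it can accept write traffic
pgo update cluster hippo-standby --promote-standby

# 3. Convert the old primary into a standby of the new primary
pgo update cluster hippo --enable-standby

# 4. Bring the converted standby back online
pgo update cluster hippo --startup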

For information on the architecture and how to set up a standby PostgreSQL
cluster, please see the PostgreSQL Operator documentation.

At present, streaming replication between the primary and standby clusters is
not supported, but the PostgreSQL instances within each cluster do support
streaming replication.

Customize PVC Size on PostgreSQL cluster Creation & Clone

The PostgreSQL Operator provides the ability to customize how large a PVC can
be via the "storage config" options available in the PostgreSQL Operator
configuration file (aka pgo.yaml). While these provide a baseline level of
customizability, it is often important to be able to set the size of the PVC
that a PostgreSQL cluster should use at cluster creation time. In other words,
users should be able to choose exactly how large they want their PostgreSQL
PVCs to be.

PostgreSQL Operator 4.3 introduces the ability to set the PVC sizes for the
PostgreSQL cluster, the pgBackRest repository for the PostgreSQL cluster, and
the PVC size for each tablespace at cluster creation time. Additionally,
this behavior has been extended to the clone functionality as well, which is
helpful when trying to resize a PostgreSQL cluster. Here is some information on
the flags that have been added:

pgo create cluster

--pvc-size - sets the PVC size for the PostgreSQL data directory
--pgbackrest-pvc-size - sets the PVC size for the PostgreSQL pgBackRest
repository

For tablespaces, one can use the pvcsize option to set the PVC size for that
tablespace.
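For example, a cluster could be created with custom PVC sizes for both the data
directory and the pgBackRest repository (the cluster name and sizes below are
illustrative):

pgo create cluster hippo --pvc-size=20Gi --pgbackrest-pvc-size=40Gi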

pgo clone cluster

--pvc-size - sets the PVC size for the PostgreSQL data directory for the newly
created cluster
--pgbackrest-pvc-size - sets the PVC size for the PostgreSQL pgBackRest
repository for the newly created cluster
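This is particularly useful for resizing a cluster: clone it into a new cluster
with larger PVCs. A sketch, assuming a source cluster named hippo and a new
cluster named hippo2 (names and sizes are illustrative):

pgo clone hippo hippo2 --pvc-size=40Gi --pgbackrest-pvc-size=80Gi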

Tablespaces

Tablespaces can be used to spread out PostgreSQL workloads across
multiple volumes, which can be used for a variety of use cases:

  • Partitioning larger data sets
  • Putting data onto archival systems
  • Utilizing hardware (or a storage class) for a particular database
    object, e.g. an index

and more.

Tablespaces can be created via the pgo create cluster command using
the --tablespace flag. The arguments to --tablespace can be passed
in using one of several key/value pairs, including:

  • name (required) - the name of the tablespace
  • storageconfig (required) - the storage configuration to use for the tablespace
  • pvcsize - if specified, the size of the PVC. Defaults to the PVC size in the storage configuration

Each value is separated by a :, for example:

pgo create cluster hacluster --tablespace=name=ts:storageconfig=nfsstorage
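The optional pvcsize key can be appended to override the PVC size from the
storage configuration, for example (the size shown is illustrative):

pgo create cluster hacluster --tablespace=name=ts:storageconfig=nfsstorage:pvcsize=10Gi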

All tablespaces are mounted in the /tablespaces directory. The
PostgreSQL Operator manages the mount points and persistent volume
claims (PVCs) for the tablespaces, and ensures they are available
throughout all of the PostgreSQL lifecycle operations, including:

  • Provisioning
  • Backup & Restore
  • High-Availability, Failover, Healing
  • Clone

etc.

One additional value is added to the pgcluster CRD:

  • TablespaceMounts: a map of the name of the tablespace and its
    associated storage.

Tablespaces are automatically created in the PostgreSQL cluster. You can
access them as soon as the cluster is initialized. For example, using
the tablespace created above, you could create a table on the tablespace
ts with the following SQL:

CREATE TABLE mytable (id int) TABLESPACE ts;

Tablespaces can also be added to existing PostgreSQL clusters by using the
pgo update cluster command. The syntax is similar to that of creating a
PostgreSQL cluster with a tablespace, i.e.:

pgo update cluster hacluster --tablespace=name=ts2:storageconfig=nfsstorage

As additional volumes need to be mounted to the Deployments, this action can
cause downtime, though the expectation is that the downtime is brief.

Based on usage, future work will look at making this
more flexible. Dropping tablespaces can be tricky, as a tablespace must
contain no database objects before PostgreSQL can drop it (i.e. there is
no DROP TABLESPACE .. CASCADE command).

Enhanced pgo df

pgo df provides information on the disk utilization of a PostgreSQL cluster,
and previously, it did not report accurate numbers. The new pgo df looks
at each PVC that is mounted to each PostgreSQL instance in a cluster, including
the PVCs for tablespaces, and computes the overall utilization. Even better,
the data is returned in a structured format for easy scraping. This
implementation also...


4.2.2

18 Feb 17:30

Choose a tag to compare

Crunchy Data announces the release of the PostgreSQL Operator 4.2.2 on February 18, 2020.

The PostgreSQL Operator 4.2.2 release provides bug fixes and continued support to the 4.2 release line.

This release includes updates for several packages supported by the PostgreSQL Operator, including:

  • The PostgreSQL containers now use versions 12.2, 11.7, 10.12, 9.6.17, and 9.5.21
  • The PostgreSQL containers now support PL/Python3
  • Patroni is updated to version 1.6.4

The PostgreSQL Operator is released in conjunction with the Crunchy Container Suite.

PostgreSQL Operator is tested with Kubernetes 1.13+, OpenShift 3.11+, Google Kubernetes Engine (GKE), and VMware Enterprise PKS 1.3+.

Changes since 4.2.1

  • Added the --enable-autofail flag to pgo update to make it clear how the auto-failover mechanism can be re-enabled for a PostgreSQL cluster.
  • Removed the use of expenv in the add-targeted-namespace.sh script

Fixes since 4.2.1

  • Ensure PostgreSQL clusters can be successfully restored via pgo restore after pgo scaledown is executed
  • Ensure all replicas are listed out via the --query flag in pgo scaledown and pgo failover. This now follows the pattern outlined by the Kubernetes safe random string generator (#1247)
  • Honor the value of "PasswordLength" when it is set in the pgo.yaml file for password generation. The default is now set at 24
  • Set UsePAM yes in the sshd_config file to fix an issue with using SSHD in newer versions of Docker
  • The backup task listed in the pgtask CRD is now only deleted if one already exists
  • Ensure that a successful "rmdata" Job does not delete all cluster pgtasks listed in the CRD after a successful run
  • Only add Operator labels to a managed namespace if the namespace already exists when executing the add-targeted-namespace.sh script
  • Remove logging of PostgreSQL user credentials in the PostgreSQL Operator logs
  • Consolidation of the Dockerfiles for RHEL7/UBI7 builds
  • Several fixes to the documentation (#1233)