Summary
When creating a PerconaPGCluster CR, one of the first steps of reconciliation is to create a <cluster-name>-patroni-version-check Pod. The patroni-version-check Pod is created with partial configuration from cr.Spec.InstanceSets[0], including:
- Resources
- SecurityContext
However, it does not include the Affinity field, which may be configured in the InstanceSet specification.
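A minimal sketch of the proposed fix, using illustrative stand-in types rather than the operator's actual structs (the real code builds a corev1.Pod from cr.Spec.InstanceSets[0]; field and function names here are assumptions for illustration only):

```go
package main

import "fmt"

// Affinity is a stand-in for corev1.Affinity.
type Affinity struct{ NodeAffinity string }

// InstanceSet mirrors the fields the operator already copies from
// cr.Spec.InstanceSets[0], plus the Affinity field it currently skips.
type InstanceSet struct {
	Resources       string
	SecurityContext string
	Affinity        *Affinity
}

// PodSpec is a stand-in for the version-check Pod's spec.
type PodSpec struct {
	Resources       string
	SecurityContext string
	Affinity        *Affinity
}

// buildVersionCheckPod copies Resources and SecurityContext as the
// operator does today; the proposed fix also carries over Affinity so
// the Pod is scheduled like the real PostgreSQL instances.
func buildVersionCheckPod(set InstanceSet) PodSpec {
	return PodSpec{
		Resources:       set.Resources,
		SecurityContext: set.SecurityContext,
		Affinity:        set.Affinity, // proposed: inherit scheduling constraints
	}
}

func main() {
	set := InstanceSet{
		Resources:       "cpu=500m",
		SecurityContext: "runAsNonRoot",
		Affinity:        &Affinity{NodeAffinity: "kubernetes.io/arch=amd64"},
	}
	pod := buildVersionCheckPod(set)
	fmt.Println(pod.Affinity.NodeAffinity)
}
```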
Impact
- Pod fails to start on clusters with mixed architectures
- Inconsistent scheduling behavior between the version check pod and actual instances
- Deployment failures due to architecture incompatibility
Expected Behavior
The patroni-version-check Pod should inherit the Affinity configuration from the InstanceSet so that it:
- Schedules on the same type of nodes as the actual PostgreSQL instances
- Respects any node selectors or affinity rules defined by the user
- Avoids scheduling on incompatible architectures (non-amd64)
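For example, a user might pin instances to amd64 nodes via nodeAffinity in the CR; per this issue, the patroni-version-check Pod does not inherit this constraint. A hypothetical excerpt (apiVersion and field names follow the operator 2.x CR layout, shown here as an assumption):

```yaml
apiVersion: pgv2.percona.com/v2
kind: PerconaPGCluster
metadata:
  name: cluster1
spec:
  instances:
    - name: instance1
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/arch
                    operator: In
                    values: ["amd64"]
```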
More about the problem
<cluster-name>-patroni-version-check Pod logs:
exec /usr/bin/bash: exec format error
Controller logs:
{"error": "check patroni version: exec: failed to execute command in pod: unable to upgrade connection: container not found (\"patroni-version-check\")"}
Steps to reproduce
- Create a PerconaPGCluster in a Kubernetes cluster that contains non-amd64 nodes
- If the patroni-version-check Pod is scheduled on a non-amd64 node, the PostgreSQL cluster is never provisioned
Versions
- Kubernetes: 1.32
- Operator: 2.6.0
- Database: 17.4
Anything else?
We are using Karpenter for cluster autoscaling.