
K8SPG-460: Mount hugepage volumes #1230

Open · wants to merge 1 commit into main

Conversation

@egegunes (Contributor) commented on Jul 18, 2025

K8SPG-460

CHANGE DESCRIPTION

The upstream controller already enables huge_pages in PostgreSQL when an instance has hugepage resource requests, but PostgreSQL still cannot actually use them: Kubernetes requires hugepage volumes and corresponding volume mounts for the pages to be usable inside the container.
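For context, Kubernetes exposes pre-allocated hugepages to a container through an emptyDir volume with a HugePages medium. A minimal sketch of the kind of volume and mount this change adds (the volume name and mount path are illustrative):

spec:
  containers:
    - name: database
      volumeMounts:
        - name: hugepages-2mi
          mountPath: /hugepages-2Mi
  volumes:
    - name: hugepages-2mi
      emptyDir:
        medium: HugePages-2Mi   # or just HugePages when a single page size is requested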

To test hugepages on GKE:

$ cat node-pool-config.yaml
linuxConfig:
  hugepageConfig:
    hugepage_size2m: 1024

$ gcloud container node-pools update default-pool --cluster <cluster> --location <region> --system-config-from-file=./node-pool-config.yaml
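Once the node pool update finishes, the reserved pages should show up in the node's capacity and allocatable resources (node name illustrative; 1024 pages of 2Mi = 2Gi):

$ kubectl describe node <node> | grep hugepages-2Mi
  hugepages-2Mi:  2Gi
  hugepages-2Mi:  2Gi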

After the node pool update completes, create a cluster with these resources:

    resources:
      limits:
        memory: 4Gi
        hugepages-2Mi: 2Gi
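In the PerconaPGCluster custom resource these limits live under the instance set; a sketch assuming the usual cr.yaml layout (instance name illustrative; for hugepages Kubernetes requires requests to equal limits, so specifying the limit is enough):

spec:
  instances:
    - name: instance1
      replicas: 1
      resources:
        limits:
          memory: 4Gi
          hugepages-2Mi: 2Gi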

In my testing I also needed to increase shared_buffers, since hugepages only back PostgreSQL's shared memory segment:

  patroni:
    dynamicConfiguration:
      postgresql:
        parameters:
          shared_buffers: 1GB
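
To confirm Patroni applied the settings, something like this should work (pod name illustrative):

$ kubectl exec <pod> -c database -- psql -c "SHOW shared_buffers;" -c "SHOW huge_pages;"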

You should then see reserved hugepages in /proc/meminfo inside the database containers.
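For example (pod name illustrative; /proc/meminfo is not namespaced, so the counters are node-wide):

$ kubectl exec <pod> -c database -- grep -i huge /proc/meminfo

HugePages_Total should match the node pool configuration, and HugePages_Free/HugePages_Rsvd should move once PostgreSQL is running.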

CHECKLIST

Jira

  • Is the Jira ticket created and referenced properly?
  • Does the Jira ticket have the proper statuses for documentation (Needs Doc) and QA (Needs QA)?
  • Does the Jira ticket link to the proper milestone (Fix Version field)?

Tests

  • Is an E2E test/test case added for the new feature/change?
  • Are unit tests added where appropriate?

Config/Logging/Testability

  • Are all needed new/changed options added to default YAML files?
  • Are all needed new/changed options added to the Helm Chart?
  • Did we add proper logging messages for operator actions?
  • Did we ensure compatibility with the previous version or cluster upgrade process?
  • Does the change support oldest and newest supported PG version?
  • Does the change support oldest and newest supported Kubernetes version?

@egegunes added this to the 2.8.0 milestone on Jul 18, 2025
@JNKPercona (Collaborator) commented:

Test Name Result Time
backup-enable-disable passed 00:07:02
custom-extensions passed 00:08:57
custom-tls passed 00:05:03
database-init-sql passed 00:03:27
demand-backup passed 00:25:15
finalizers passed 00:03:37
init-deploy passed 00:03:08
monitoring passed 00:12:13
monitoring-pmm3 passed 00:10:42
one-pod passed 00:07:57
operator-self-healing passed 00:08:23
pgvector-extension passed 00:02:41
pitr passed 00:11:20
scaling passed 00:04:38
scheduled-backup passed 00:31:30
self-healing passed 00:08:54
sidecars passed 00:02:24
start-from-backup passed 00:13:54
tablespaces passed 00:07:45
telemetry-transfer passed 00:03:35
upgrade-consistency passed 00:05:33
upgrade-minor failure 00:03:14
users passed 00:04:28
We ran 23 out of 23 tests in 03:15:50

commit: 2abc8fc
image: perconalab/percona-postgresql-operator:PR-1230-2abc8fca8
