forked from postgres/postgres
Dev/oauth fix openbsd #22
Open
jchampio wants to merge 91 commits into master from dev/oauth-fix-openbsd
Use of a RowCompare key makes nbtree index scans ineligible to use pstate.forcenonrequired following recent bugfix commit 5f4d98d. There's no longer any need for _bt_check_rowcompare to accept a forcenonrequired argument, so remove it.
Refine wording from commit 01463e1. Author: Noah Misch <noah@leadboat.com> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/20250605163441.2f.nmisch@google.com
Reported-by: Daria Shanina <vilensipkdm@gmail.com> Author: Daria Shanina <vilensipkdm@gmail.com> Author: Robert Haas <robertmhaas@gmail.com> Backpatch-through: 13
Delay calling BufferGetLSNAtomic() until we finish reading a page that actually contains items that btgettuple will return to the executor. This reduces the number of calls during plain index scans (we'll only call BufferGetLSNAtomic() when _bt_readpage returns true), and totally eliminates calls during index-only scans, bitmap index scans, and plain index scans of an unlogged relation. Currently, when checksums (or wal_log_hints) are enabled, acquiring a page's LSN in BufferGetLSNAtomic() involves locking the buffer header (which involves the use of spinlocks). Testing has shown that enabling page-level checksums causes large regressions with certain workloads, especially on larger multi-socket systems. The regression isn't tied to any Postgres 18 commit. However, Postgres 18 commit 04bec89 made initdb use checksums by default, so it seems prudent to address the problem now. Author: Peter Geoghegan <pg@bowt.ie> Reviewed-By: Tomas Vondra <tomas@vondra.me> Discussion: https://postgr.es/m/941f0190-e3c6-4622-9ac7-c04e936e5fdb@vondra.me Discussion: https://postgr.es/m/CAH2-Wzk-Dg5XWs_jDuiHt4_7ryrSY+n=vxmHY51EVqPDFsKXmg@mail.gmail.com
Oversight in commit 8b2bcf3. Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com> Discussion: https://postgr.es/m/aECi_gSD9JnVWQ8T%40nathan Backpatch-through: 17
Commit 5fe08c0 fixed this for calls to dshash_create(). This commit fixes calls to dshash_attach() and dsa_create_in_place(). Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com> Reviewed-by: Michael Paquier <michael@paquier.xyz> Discussion: https://postgr.es/m/aECi_gSD9JnVWQ8T%40nathan
Discussion: https://postgr.es/m/73959a14-267b-49c1-8293-291b175682cb@manitou-mail.org Reviewed-by: Daniel Verite <daniel@manitou-mail.org>
Move plpython_error_5.out to plpython_error.out; the pre-3.5 version is no longer needed now that the Python requirement has been raised to 3.6 (commit 45363fc). Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Jacob Champion <jacob.champion@enterprisedb.com> Discussion: https://www.postgresql.org/message-id/d620e7c6-becc-4a8e-9b43-eea0da55faf2@eisentraut.org
And move to the top of the incompatibility list. This will impact users more than any other incompatibility item because of pg_upgrade.
Reported-by: Noah Misch Discussion: https://postgr.es/m/20250603172123.5f.nmisch@google.com
…le modes." We concluded that commit e5a3c9d is a feature rather than a fix; since it was added after feature freeze, revert it. Reported-by: Fujii Masao <masao.fujii@oss.nttdata.com> Reported-by: Michael Paquier <michael@paquier.xyz> Reported-by: Robert Haas <robertmhaas@gmail.com> Discussion: https://postgr.es/m/ed2296f1-1a6b-4932-b870-5bb18c2591ae%40oss.nttdata.com
pg_restore failed to restore large objects (blobs) out of directory-format dumps made by versions before PG v12. That's because, due to a bug fixed in commit 548e509, those old versions put the wrong filename into the BLOBS TOC entry. Said bug was harmless before v17, because we ignored the incorrect filename field --- but commit a45c78e assumed it would be correct. Reported-by: Pavel Stehule <pavel.stehule@gmail.com> Author: Pavel Stehule <pavel.stehule@gmail.com> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/CAFj8pRCrZ=_e1Rv1N+6vDaH+6gf=9A2mE2J4RvnvKA1bLiXvXA@mail.gmail.com Backpatch-through: 17
On macOS, when building with the make system, the exported symbols list $(SHLIB_EXPORTS) was ignored. This was probably not intentional; most likely it was simply overlooked, since that combination had never actually been used until now (for libpq-oauth). The meson build system handles this correctly, and other platforms have been doing it correctly as well. This fixes it, and also does a bit of refactoring to make the code match the layout for other platforms. Reviewed-by: Jacob Champion <jacob.champion@enterprisedb.com> Discussion: https://www.postgresql.org/message-id/flat/c70ca32e-b109-460d-9810-6e23ebb4473f%40eisentraut.org
Avoid dependence on setlocale(). No behavior change. Discussion: https://postgr.es/m/9875f7f9-50f1-4b5d-86fc-ee8b03e8c162@eisentraut.org Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Avoid dependence on setlocale(). No behavior change. Discussion: https://postgr.es/m/9875f7f9-50f1-4b5d-86fc-ee8b03e8c162@eisentraut.org Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Avoid dependence on setlocale(). No behavior change. Discussion: https://postgr.es/m/9875f7f9-50f1-4b5d-86fc-ee8b03e8c162@eisentraut.org Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Avoid dependence on setlocale(). No behavior change. Discussion: https://postgr.es/m/9875f7f9-50f1-4b5d-86fc-ee8b03e8c162@eisentraut.org Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
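The four commits above share one theme: locale-sensitive C library calls behave differently depending on setlocale(), so code that wants stable, ASCII-only semantics should avoid them. As a hedged illustration of the general idea (not necessarily what these particular commits changed), locale-independent ASCII helpers sidestep the locale entirely:

```c
#include <stdbool.h>

/*
 * Illustrative only: locale-independent ASCII helpers.  Unlike tolower()
 * or isspace() from <ctype.h>, these return the same answer regardless of
 * what setlocale() has been called with.
 */
static inline char
ascii_tolower(char c)
{
	return (c >= 'A' && c <= 'Z') ? (char) (c - 'A' + 'a') : c;
}

static inline bool
ascii_isspace(char c)
{
	return c == ' ' || c == '\t' || c == '\n' ||
		   c == '\r' || c == '\v' || c == '\f';
}
```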
Traditionally, libpq's pqPutMsgEnd has rounded down the amount-to-send to be a multiple of 8K when it is eagerly writing some data. This still seems like a good idea when sending through a Unix socket, as pipes typically have a buffer size of 8K or some fraction/multiple of that. But there's not much argument for it on a TCP connection, since (a) standard MTU values are not commensurate with that, and (b) the kernel typically applies its own packet splitting/merging logic. Worse, our SSL and GSSAPI code paths both have API stipulations that if they fail to send all the data that was offered in the previous write attempt, we mustn't offer less data in the next attempt; else we may get "SSL error: bad length" or "GSSAPI caller failed to retransmit all data needing to be retried". The previous write attempt might've been pqFlush attempting to send everything in the buffer, so pqPutMsgEnd can't safely write less than the full buffer contents. (Well, we could add some more state to track exactly how much the previous write attempt was, but there's little value evident in such extra complication.) Hence, apply the round-down only on AF_UNIX sockets, where we never use SSL or GSSAPI. Interestingly, we had a very closely related bug report before, which I attempted to fix in commit d053a87. But the test case we had then seemingly didn't trigger this pqFlush-then-pqPutMsgEnd scenario, or at least we failed to recognize this variant of the bug. Bug: #18907 Reported-by: Dorjpalam Batbaatar <htgn.dbat.95@gmail.com> Author: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/18907-d41b9bcf6f29edda@postgresql.org Backpatch-through: 13
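A minimal sketch of the send-size policy described above, using illustrative names (PQ_SEND_CHUNK, bytes_to_send, is_unix_socket) rather than libpq's actual internals: the 8K round-down is applied only when writing to a Unix-domain socket, where SSL and GSSAPI are never in play.

```c
#include <stdbool.h>
#include <stddef.h>

#define PQ_SEND_CHUNK 8192		/* historical 8K rounding unit */

/*
 * Hypothetical helper: decide how many buffered bytes to write eagerly.
 * On AF_UNIX sockets, round down to an 8K multiple to match typical pipe
 * buffering.  On TCP (possibly wrapped in SSL/GSSAPI), offer everything,
 * since offering less than a previously attempted write can trigger
 * "bad length"-style errors in those layers.
 */
static size_t
bytes_to_send(size_t buffered, bool is_unix_socket)
{
	if (is_unix_socket && buffered > PQ_SEND_CHUNK)
		return buffered - (buffered % PQ_SEND_CHUNK);
	return buffered;
}
```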
CREATE UNLOGGED TABLE was still being suggested by psql's tab completion as a possible pattern, but the backend has rejected this option since e2bab2d. Reported-by: Shinya Kato <shinya11.kato@gmail.com> Reviewed-by: Nathan Bossart <nathandbossart@gmail.com> Reviewed-by: Shinya Kato <shinya11.kato@gmail.com> Discussion: https://postgr.es/m/CAOzEurQZ1a+6d1K8b=+Ww1NFQVwAt9KSCQsBWXYBaPnYCenK3g@mail.gmail.com
Teach nbtree's _bt_killitems to leave the so->currPos page that it sets LP_DEAD items on in whatever state it was in when _bt_killitems was called. In particular, make sure that so->dropPin scans don't acquire a pin whose reference is saved in so->currPos.buf. Allowing _bt_killitems to change so->currPos.buf like this is wrong. The immediate consequence of allowing it is that code in _bt_steppage (that copies so->currPos into so->markPos) will behave as if the scan is a !so->dropPin scan. so->markPos will therefore retain the buffer pin indefinitely, even though _bt_killitems only needs to acquire a pin (along with a lock) for long enough to mark known-dead items LP_DEAD. This issue came to light following a report of a failure of an assertion from recent commit e6eed40. The test case in question involves the use of mark and restore. An initial call to _bt_killitems takes place that leaves so->currPos.buf in a state that is inconsistent with the scan being so->dropPin. A subsequent call to _bt_killitems for the same position (following so->currPos being saved in so->markPos, and then restored as so->currPos) resulted in the failure of an assertion that tests that so->currPos.buf is InvalidBuffer when the scan is so->dropPin (non-assert builds got a "resource was not closed" WARNING instead). The same problem exists on earlier releases, though the issue is far more subtle there. Recent commit e6eed40 introduced the so->dropPin field as a partial replacement for testing so->currPos.buf directly. Earlier releases won't get an assertion failure (or buffer pin leak), but they will allow the second _bt_killitems call from the test case to behave as if a buffer pin was consistently held since the original call to _bt_readpage. This is wrong; there will have been an initial window during which no pin was held on the so->currPos page, and yet the second _bt_killitems call will neglect to check if so->currPos.lsn continues to match the page's now-current LSN. As a result of all this, it's just about possible that _bt_killitems will set the wrong items LP_DEAD (on release branches). This could only happen with merge joins (the sole user of nbtree mark/restore support), when a concurrently inserted index tuple used a recently-recycled TID (and only when the new tuple was inserted onto the same page as a distinct concurrently-removed tuple with the same TID). This is exactly the scenario that _bt_killitems' check of the page's now-current LSN against the LSN stashed in currPos was supposed to prevent. A follow-up commit will make nbtree completely stop conditioning whether or not a position's pin needs to be dropped on whether the 'buf' field is set. All call sites that might need to drop a still-held pin will be taught to rely on the scan-level so->dropPin field recently introduced by commit e6eed40. That will make bugs of the same general nature as this one impossible (or make them much easier to detect, at least). Author: Peter Geoghegan <pg@bowt.ie> Reported-By: Alexander Lakhin <exclusion@gmail.com> Discussion: https://postgr.es/m/545be1e5-3786-439a-9257-a90d30f8b849@gmail.com Backpatch-through: 13
This addresses an oversight in commit 4ac2a9b, which introduced the REJECT_LIMIT option to the COPY command. Author: Atsushi Torikoshi <torikoshia@oss.nttdata.com> Reviewed-by: Yugo Nagata <nagata@sraoss.co.jp> Discussion: https://postgr.es/m/ac23e824d1d602f113a89c91ee56fb23@oss.nttdata.com
- 4c787a2 - 78bd364 - 7a6880f - 8898082 Suggested-by: Robert Haas <robertmhaas@gmail.com> Discussion: https://postgr.es/m/CA+TgmoZ=J=PVNZUNKaxULu+KUVSt3Y-aJ1DZ9Y3Co6mu0z62jA@mail.gmail.com Discussion: https://postgr.es/m/60e8c6d0a6c08e67f15dbbe9e53df0119c710065.camel@j-davis.com
This reverts commit 54c6ea8. Further analysis has shown that the forcenonrequired row compare behavior is in fact necessary, despite the new restrictions on RowCompares imposed by _bt_set_startikey following commit 5f4d98d. Discussion: https://postgr.es/m/CAH2-Wzm3bKcz3TbHGem3_+SinEyG=VZVPbApQghp7YiZj+MM3g@mail.gmail.com
This commit reverts the following two commits: 499edb0 (track more precisely query locations for nested statements) and 06450c7 (a follow-up fix of 499edb0 with query locations). The test introduced in this commit is not reverted. This is proving useful to track a problem that only pgaudit was able to detect. These commits prove to have issues with the tracking of SELECT statements when these use multiple levels of parentheses, which is something supported by the grammar. Incorrect locations and lengths cause pg_stat_statements to become confused, failing at query normalization with potential out-of-bounds writes because the location and the length may not match what can be handled. A lot of the query patterns discussed when this issue was reported have no test coverage in the main regression test suite, or the recovery test 027_stream_regress.pl would have caught the problems, as pg_stat_statements is loaded by the node running the regression tests. A first step would be to improve the test coverage to stress the query normalization logic more. A different portion of this work was done in 45e0ba3, with the addition of tests for nested queries. These can be left in the tree. They are useful to track the way inner queries are currently tracked by PGSS with non-top-level entries, and will be useful when reconsidering the work reverted here in the future. Reported-by: Alexander Kozhemyakin <a.kozhemyakin@postgrespro.ru> Discussion: https://postgr.es/m/18947-cdd2668beffe02bf@postgresql.org
…functions. Database object statistics manipulation functions were introduced in PostgreSQL 18 and are permitted under the MAINTAIN privilege. However, the documentation previously did not mention these functions in the list of allowed operations. This commit updates the MAINTAIN privilege documentation to explicitly include statistics manipulation functions, clarifying what the privilege covers. Author: Fujii Masao <masao.fujii@gmail.com> Reviewed-by: Robert Treat <rob@xzilla.net> Discussion: https://postgr.es/m/7c7e1ad5-fdf9-486f-bc63-40ac99b0461d@oss.nttdata.com
The algorithm to squash lists of constants added by commit 62d712e was a bit too simplistic; we wanted to avoid adding unnecessary complexity, but cases like direct function calls of typecasting functions (and others) were missed, and bogus SQL syntax was being shown in pg_stat_statements' normalized query text field. To fix normalization for those cases, we need the parser to transmit information about where each list of constant values starts and ends, so add that to a couple of nodes. Also add a few more test cases to make sure we're doing the right thing. The patch initially submitted by Sami added a new private struct in gram.y to carry the start/end information for A_Expr, but I (Álvaro) decided that a better fix was to remove the parser indirection via the in_expr production, and instead create separate components in the a_expr rule. I'm surprised that this works and doesn't require more changes, but I assume (without checking) that the grammar used to be more complex and got simplified at some point. Bump catversion. Author: Sami Imseih <samimseih@gmail.com> Author: Dmitry Dolgov <9erthalion6@gmail.com> Reviewed-by: Michael Paquier <michael@paquier.xyz> Discussion: https://postgr.es/m/CAA5RZ0tRXoPG2y6bMgBCWNDt0Tn=unRerbzYM=oW0syi1=C1OA@mail.gmail.com
… options. Commit bde2fb7 added the --with-schema, --with-data, and --with-statistics options to pg_restore. These options control whether to restore schema, data, or statistics if present in the archive. However, the help message and documentation incorrectly described them as affecting what gets dumped. This commit corrects those descriptions to clarify that the options control restoration, not dumping. Bug: #18952 Reported-by: TAKATSUKA Haruka <harukat@sraoss.co.jp> Author: Fujii Masao <masao.fujii@gmail.com> Reviewed-by: TAKATSUKA Haruka <harukat@sraoss.co.jp> Reviewed-by: Daniel Gustafsson <daniel@yesql.se> Discussion: https://postgr.es/m/18952-be40a620f8b1e755@postgresql.org
This is a continuation of 15a79c7, cleaning up the AIO io_uring code that was committed after it while still using %llu. The code changed here is new in v18, so cleaning things up now means fewer conflicts if this area of the code changes on backpatch once the 18 stable branch is created. Reviewed-by: Nathan Bossart <nathandbossart@gmail.com> Reviewed-by: Peter Eisentraut <peter@eisentraut.org> Discussion: https://postgr.es/m/aEZcGCnYFq642q8k@paquier.xyz
Running COPY within a pipeline can break protocol synchronization in multiple ways. psql is limited in terms of result processing when mixing COPY commands with normal queries while controlling a pipeline with the new meta-commands, for the following reasons:
- In COPY mode, the backend ignores additional Sync messages and will not send the matching ReadyForQuery expected by the frontend. Doing a \syncpipeline just after COPY leaves the frontend waiting for a ReadyForQuery message that won't be sent, leaving psql out-of-sync.
- libpq automatically sends a Sync with the Copy message, which is not tracked in the command queue, creating an unexpected synchronization point that psql cannot really know about. While it is possible to track such activity for a \copy, this cannot really be done sanely with plain COPY queries.
Backend failures during a COPY would leave the pipeline in an aborted state while the backend would be in a clean state, ready to process commands. In the end, fixing those issues would require modifications in how libpq handles pipelines and COPY. So, rather than implementing workarounds in psql to shortcut the libpq internals (with command queue handling for one), and because meta-commands for pipelines in psql are a new feature and COPY in a pipeline has a limited impact compared to other queries, this commit forbids the use of COPY within a pipeline to avoid possible breakage of protocol synchronization within psql. If there is a use case for COPY support within pipelines in libpq, it could always be added in the future, if necessary. Most of the changes in this commit affect the tests for psql pipelines, removing the tests related to COPY. Some TAP tests still exist for COPY TO/FROM and \copy to/from, to check that connections are aborted when this operation is attempted. Reported-by: Nikita Kalinin <n.kalinin@postgrespro.ru> Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com> Discussion: https://postgr.es/m/AC468509-06E8-4E2A-A4B1-63046A4AC6AB@postgrespro.ru
Reword the documentation around the default value to make interaction between WATCH_INTERVAL and the \watch command clearer. While there, also remove a stray parenthesis left over from a previous version of the patch. Reported-by: Peter Eisentraut <peter@eisentraut.org> Reviewed-by: David G. Johnston <david.g.johnston@gmail.com> Discussion: https://postgr.es/m/c34a650b-6f8b-4da7-9ebb-b6df03ce009d@eisentraut.org
…N TABLE. Commit 302cf15 added support for LIKE in CREATE FOREIGN TABLE. In this feature, since indexes are not created for foreign tables, comments on indexes are not copied either. However, the documentation incorrectly stated that index comments would be copied when using INCLUDING COMMENTS. This commit corrects that by removing the mention of index comments. Author: Fujii Masao <masao.fujii@gmail.com> Reviewed-by: Michael Paquier <michael@paquier.xyz> Discussion: https://postgr.es/m/f86cd84f-a6a3-4451-bae7-5cca9e63b06d@oss.nttdata.com
Commit 8492feb added support for parallel CREATE INDEX on GIN indexes. However, previously two places in the documentation and two in the source code comments still stated that only B-tree and BRIN indexes support parallel builds. This commit updates those references to correctly include GIN indexes. Author: Fujii Masao <masao.fujii@gmail.com> Reviewed-by: Robert Treat <rob@xzilla.net> Discussion: https://postgr.es/m/7d27d068-90e2-4022-9bd7-09b0fd3d4f47@oss.nttdata.com
Improve the clarity of LOG messages when a failover logical slot synchronization fails, making the reasons more explicit for easier debugging. Update the documentation to outline scenarios where slot synchronization can fail, especially during the initial sync, and emphasize that pg_sync_replication_slot() is primarily intended for testing and debugging purposes. We also discussed improving the functionality of pg_sync_replication_slot() so that it can be used reliably, but we would take up that work in the next version after some more discussion and review. Reported-by: Suraj Kharage <suraj.kharage@enterprisedb.com> Author: shveta malik <shveta.malik@gmail.com> Reviewed-by: Zhijie Hou <houzj.fnst@fujitsu.com> Reviewed-by: Peter Smith <smithpb2250@gmail.com> Reviewed-by: Amit Kapila <amit.kapila16@gmail.com> Backpatch-through: 17, where it was introduced Discussion: https://postgr.es/m/CAF1DzPWTcg+m+x+oVVB=y4q9=PYYsL_mujVp7uJr-_oUtWNGbA@mail.gmail.com
Increase consistency of --help and man page synopses between pg_dump and pg_dumpall. These should now be very similar, as pg_dumpall can now also produce non-text dump output. But actually, they had drifted further apart.
- Use verb "export" consistently, instead of "dump" or "extract".
- Use "SQL script" instead of just "script" or "text file".
- Maintain consistent distinction between SQL script and other formats/archives (which is relevant for pg_restore).
Reviewed-by: Robert Treat <rob@xzilla.net> Discussion: https://www.postgresql.org/message-id/flat/3f71d8a7-095b-4829-9b0b-fce09e9866b3%40eisentraut.org
to be used for PG 18 release notes
In version 17 we added support for cross-partition EXCLUDE constraints, as long as they included all partition key columns and compared them with equality (see 8c852ba). I updated the docs for exclusion constraints, but I missed that the docs for CREATE TABLE still said that they were not supported. This commit fixes that. Author: Paul A. Jungwirth <pj@illuminatedcomputing.com> Co-authored-by: Jeff Davis <pgsql@j-davis.com> Discussion: https://postgr.es/m/c955d292-b92d-42d1-a2a0-1ec6715a2546@illuminatedcomputing.com Backpatch-through: 17
The TAP tests that verify logical and physical replication slot behavior during checkpoints (046_checkpoint_logical_slot.pl and 047_checkpoint_physical_slot.pl) inserted two batches of 2 million rows each, generating approximately 520 MB of WAL. On slow machines, or when compiled with '-DRELCACHE_FORCE_RELEASE -DCATCACHE_FORCE_RELEASE', this caused the tests to run for 8-9 minutes and occasionally time out, as seen on the buildfarm animal prion. This commit modifies the mentioned tests to use the $node->advance_wal() function, thereby reducing runtime. Since the generated data itself is never used, advance_wal() is a good alternative that cuts the total wall-clock run time. While here, remove superfluous '\n' characters from several note() calls; these appeared literally in the buildfarm logs and looked odd. Also, remove the unnecessary 'shared_preload_libraries' GUC from the config and add a check for 'injection_points' extension availability. Reported-by: Alexander Lakhin <exclusion@gmail.com> Reported-by: Tom Lane <tgl@sss.pgh.pa.us> Author: Alexander Korotkov <aekorotkov@gmail.com> Author: Vitaly Davydov <v.davydov@postgrespro.ru> Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com> Discussion: https://postgr.es/m/fbc5d94e-6fbd-4a64-85d4-c9e284a58eb2%40gmail.com Backpatch-through: 17
We never create regress.def, and if we did this code would fail to delete it, because "win" is not the correct PORTNAME for Windows. This thinko seems to have originated in commit 7a6b562 from 1999, although it got moved around multiple times since then. Author: Christoph Berg <myon@debian.org> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/aFVR7R7VDX7y2ruc@msg.df7cb.de
While choosing an autogenerated name for an index, look for pre-existing relations using a SnapshotDirty snapshot, instead of the previous behavior that considered only committed-good pg_class rows. This allows us to detect and avoid conflicts against indexes that are still being built. It's still possible to fail due to a race condition, but the window is now just the amount of time that it takes DefineIndex to validate all its parameters, call smgrcreate(), and enter the index's pg_class row. Formerly the race window covered the entire time needed to create and fill an index, which could be very long if the table is large. Worse, if the conflicting index creation is part of a larger transaction, it wouldn't be visible till COMMIT. So this isn't a complete solution, but it should greatly ameliorate the problem, and the patch is simple enough to be back-patchable. It might at some point be useful to do the same for pg_constraint entries (cf. ChooseConstraintName, ConstraintNameExists, and related functions). However, in the absence of field complaints, I'll leave that alone for now. The relation-name test should be good enough for index-based constraints, while foreign-key constraints seem to be okay since they require exclusive locks to create. Bug: #18959 Reported-by: Maximilian Chrzan <maximilian.chrzan@here.com> Author: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com> Discussion: https://postgr.es/m/18959-f63b53b864bb1417@postgresql.org Backpatch-through: 13
Commit 85e5e22, which added (a forerunner of) this logic, argued that "Adding the necessary complexity to make this work doesn't seem like it would be repaid in significantly better plans, because in cases where such a PHV exists, there is probably a corresponding join order constraint that would allow a good plan to be found without using the star-schema exception." The flaw in this claim is that there may be other join-order restrictions that prevent us from finding a join order that doesn't involve a "dangerous" PHV. In particular we now recognize that small join_collapse_limit or from_collapse_limit could prevent it. Therefore, let's bite the bullet and make the case work. We don't have to extend the executor's support for nestloop parameters as I thought at the time, because we can instead push the evaluation of the placeholder's expression into the left-hand input of the NestLoop node. So there's not really a lot of downside to this solution, and giving the planner more join-order flexibility should have value beyond just avoiding failure. Having said that, there surely is a nonzero risk of introducing new bugs. Since this failure mode escaped detection for ten years, such cases don't seem common enough to justify a lot of risk. Therefore, let's put this fix into master but leave the back branches alone (for now anyway). Bug: #18953 Reported-by: Alexander Lakhin <exclusion@gmail.com> Diagnosed-by: Richard Guo <guofenglinux@gmail.com> Author: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/18953-1c9883a9d4afeb30@postgresql.org
Specify whether the bucket bounds are inclusive or exclusive, and improve some other vague language. Explain the behavior that occurs when the "low" bound is greater than the "high" bound. Make width_bucket_numeric's comment more like that for width_bucket_float8, in particular noting that infinite bounds are rejected (since they became possible in v14). Reported-by: Ben Peachey Higdon <bpeacheyhigdon@gmail.com> Author: Robert Treat <rob@xzilla.net> Co-authored-by: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com> Discussion: https://postgr.es/m/2BD74F86-5B89-4AC1-8F13-23CED3546AC1@gmail.com Backpatch-through: 13
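For reference, a sketch of the usual definition being documented here, shown for the low < high case only (the lower bound of each bucket is inclusive, the upper bound exclusive; the reversed-bounds behavior is what the commit additionally explains):

```latex
% width_bucket(x, l, h, n) for l < h, n equal-width buckets:
\[
\operatorname{width\_bucket}(x, l, h, n) =
\begin{cases}
0 & x < l \\
n + 1 & x \ge h \\
\left\lfloor \frac{n\,(x - l)}{h - l} \right\rfloor + 1 & l \le x < h
\end{cases}
\]
% Worked example: width_bucket(5.35, 0.024, 10.06, 5)
%   = floor(5 * 5.326 / 10.036) + 1 = 2 + 1 = 3.
```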
Address the following warning from meson: "WARNING: You should add the boolean check kwarg to the run_command call. It currently defaults to false, but it will default to true in meson 2.0." The warning was introduced by commit bc46104. (This only happens in the msvc branch. All the other run_command calls are ok.) Reviewed-by: Andres Freund <andres@anarazel.de> Discussion: https://www.postgresql.org/message-id/flat/42e13eb0-862a-441e-8d84-4f0fd5f6def0%40eisentraut.org
The problem that led to the workaround in f83f148 was not in fact a compiler bug, but a failure to zero the upper bits of the vector register containing the initial scalar CRC value. Fix that and revert the workaround. Diagnosed-by: Nathan Bossart <nathandbossart@gmail.com> Diagnosed-by: Raghuveer Devulapalli <raghuveer.devulapalli@intel.com> Tested-by: Andy Fan <zhihuifan1213@163.com> Tested-by: Soumyadeep Chakraborty <soumyadeep2007@gmail.com> Reviewed-by: Nathan Bossart <nathandbossart@gmail.com> Reviewed-by: Raghuveer Devulapalli <raghuveer.devulapalli@intel.com> Discussion: https://postgr.es/m/PH8PR11MB82866B07AA6758D12F699C00FB70A@PH8PR11MB8286.namprd11.prod.outlook.com
Commit 43da394 added a dependency on this intrinsic to our AVX-512 CRC code. It turns out this intrinsic was added to gcc later than the other ones we were using, so that there are platforms where the new code fails to compile. Since only relatively old (pre-gcc-10) compilers are affected, it doesn't seem worth trying to make the AVX-512 CRC code actually work on these platforms. Just add the new intrinsic to the configure probe, so that we'll conclude the code can't be built. Author: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Nathan Bossart <nathandbossart@gmail.com> Discussion: https://postgr.es/m/3350336.1750690281@sss.pgh.pa.us
ca307d5 introduced keeping WAL segments by slot's last saved restart LSN. It also added an assertion that the slot's restart LSN never goes backward. However, situations when the restart LSN goes backward have been spotted by buildfarm animals and investigated in the thread. When pg_receivewal starts the replication, it sets the last replayed LSN to the beginning of the segment, which is older than what ReplicationSlotReserveWal() set for the slot. A similar situation can happen to pg_basebackup. When standby reconnects to the primary, it sends the last replayed LSN, which might be older than the last confirmed flush LSN. In both these situations, a concurrent checkpoint may trigger an assert trap. Based on ideas from Vitaly Davydov <v.davydov@postgrespro.ru>, Hayato Kuroda (Fujitsu) <kuroda.hayato@fujitsu.com>, Vignesh C <vignesh21@gmail.com>, Amit Kapila <amit.kapila16@gmail.com>. Reported-by: Vignesh C <vignesh21@gmail.com> Reported-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/CALDaNm3s-jpQTe1MshsvQ8GO%3DTLj233JCdkQ7uZ6pwqRVpxAdw%40mail.gmail.com Reviewed-by: Vignesh C <vignesh21@gmail.com> Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
This new test was intended to check the handling of the replication slot's restart lsn fixed in ca307d5. However, it also reveals another issue related to logical decoding. This commit temporarily removes this test to keep the buildfarm and CFbot green and avoid distorting others' work. This test will be restored once we investigate and fix the issue. Discussion: https://postgr.es/m/CAAKRu_ZCOzQpEumLFgG_%2Biw3FTa%2BhJ4SRpxzaQBYxxM_ZAzWcA%40mail.gmail.com
\close was introduced in d55322b to allow closing a prepared statement using the extended protocol in psql. Per discussion, the name "close" is ambiguous: at the SQL level, CLOSE is used to close a cursor, while at the protocol level the Close message can be used to close either a statement or a portal. This patch renames \close to \close_prepared to avoid any ambiguity and make it clear that this command closes a prepared statement. This new name has been chosen based on the feedback from the author and the reviewers. Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com> Reviewed-by: Peter Eisentraut <peter@eisentraut.org> Reviewed-by: Jelte Fennema-Nio <postgres@jeltef.nl> Discussion: https://postgr.es/m/3e694442-0df5-4f92-a08f-c5d4c4346b85@eisentraut.org
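At the protocol level, the renamed meta-command corresponds to libpq's close-statement support (PQclosePrepared, available in recent libpq versions). A minimal sketch, assuming a reachable server and the illustrative connection string "dbname=postgres":

```c
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
	/* Connection string is an assumption for illustration. */
	PGconn	   *conn = PQconnectdb("dbname=postgres");
	PGresult   *res;

	if (PQstatus(conn) != CONNECTION_OK)
	{
		fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
		PQfinish(conn);
		return 1;
	}

	/* Prepare a named statement via the extended protocol... */
	res = PQprepare(conn, "stmt1", "SELECT $1::int", 1, NULL);
	PQclear(res);

	/* ...and close it: the same protocol-level Close that
	 * "\close_prepared stmt1" issues from psql. */
	res = PQclosePrepared(conn, "stmt1");
	if (PQresultStatus(res) != PGRES_COMMAND_OK)
		fprintf(stderr, "close failed: %s", PQerrorMessage(conn));
	PQclear(res);

	PQfinish(conn);
	return 0;
}
```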
Remove the part of the comment that says we don't allow toggling the two_phase option, as that is supported since commit 1462aad. Author: Hayato Kuroda <kuroda.hayato@fujitsu.com> Author: Amit Kapila <amit.kapila16@gmail.com> Discussion: https://postgr.es/m/OSCPR01MB1496656725F3951AEE8749EBDF579A@OSCPR01MB14966.jpnprd01.prod.outlook.com
Previously, the UUID functions documentation defined the "UUID" index entry to link to the UUID data type page, even though that entry already exists there. Instead, the UUID functions page should define its own index entry linking to itself. This commit updates the UUID index entry in the UUID functions documentation to point to the correct section, improving navigation and avoiding duplication. Back-patch to all supported versions. Author: Fujii Masao <masao.fujii@gmail.com> Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com> Reviewed-by: Daniel Gustafsson <daniel@yesql.se> Discussion: https://postgr.es/m/f33e0493-5773-4296-87c5-7ce459054cfe@oss.nttdata.com Backpatch-through: 13
Virtual generated columns have some special checks in CheckAttributeType(), mainly to check that domains are not used. But this check was only applied during CREATE TABLE, not during ALTER TABLE. This fixes that. Reported-by: jian he <jian.universality@gmail.com> Discussion: https://www.postgresql.org/message-id/CACJufxE0KHR__-h=zHXbhSNZXMMs4LYo4-dbj8H3YoStYBok1Q@mail.gmail.com
The link returns 404 and no replacement is available in the project on Sourceforge where the content once was. Since we already link to resources for both beginner and experienced docs hackers, remove the dead link. Backpatch to all supported versions as the link was added in 8.1. Author: Daniel Gustafsson <daniel@yesql.se> Reviewed-by: Magnus Hagander <magnus@hagander.net> Reviewed-by: Michael Paquier <michael@paquier.xyz> Reported-by: jian he <jian.universality@gmail.com> Discussion: https://postgr.es/m/CACJufxH=YzQPDOe+2WuYZ7seD-BOyjCBmP6JiErpoSiVZWDRnw@mail.gmail.com Backpatch-through: 13
If vacuum fails to prune a tuple killed before OldestXmin, it will decide to freeze its xmax and later error out in pre-freeze checks. Add a test reproducing this scenario to the recovery suite which creates a table on a primary, updates the table to generate dead tuples for vacuum, and then, during the vacuum, uses a replica to force GlobalVisState->maybe_needed on the primary to move backwards and precede the value of OldestXmin set at the beginning of vacuuming the table. This test is coverage for a case fixed in 83c39a1. The test was originally committed to master in aa60798 but later reverted in efcbb76 due to test instability. The test requires multiple index passes. In Postgres 17+, vacuum uses a TID store for the dead TIDs that is very space efficient. With the old minimum maintenance_work_mem of 1 MB, it required a large number of dead rows to generate enough dead TIDs to force multiple index vacuuming passes. Once the source code changes were made to allow a minimum maintenance_work_mem value of 64kB, the test could be made much faster and more stable. Author: Melanie Plageman <melanieplageman@gmail.com> Reviewed-by: John Naylor <johncnaylorls@gmail.com> Reviewed-by: Peter Geoghegan <pg@bowt.ie> Discussion: https://postgr.es/m/CAAKRu_ZJBkidusDut6i%3DbDCiXzJEp93GC1%2BNFaZt4eqanYF3Kw%40mail.gmail.com Backpatch-through: 17
In b0635bf, I added an early header check to the Meson OAuth support, which was intended to duplicate the later checks for HAVE_SYS_[EVENT|EPOLL]_H. However, I implemented the new test via check_header() -- which tries to compile -- rather than has_header(), which just looks for the file's existence. The distinction matters on OpenBSD, where <sys/event.h> can't be compiled without including prerequisite headers, so -Dlibcurl=enabled failed on that platform. Switch to has_header() to fix this.
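The distinction matters because a compile-style check feeds the header to the compiler on its own. On OpenBSD, <sys/event.h> needs prerequisite headers, so a standalone compile fails even though the file exists. A minimal kqueue program on that platform looks roughly like this (prerequisite set taken from the usual BSD kqueue synopsis):

```c
/*
 * Illustration only: on OpenBSD, <sys/event.h> requires <sys/types.h>
 * (and conventionally <sys/time.h>) to be included first, which is why a
 * bare check_header()-style compile test fails while has_header()
 * succeeds.
 */
#include <sys/types.h>
#include <sys/event.h>
#include <sys/time.h>

int
main(void)
{
	int		kq = kqueue();

	return (kq >= 0) ? 0 : 1;
}
```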