forked from postgres/postgres
Rel 13 stable changes #1
Open
davecramer wants to merge 2,253 commits into REL_12_STABLE from REL_13_STABLE
Conversation
If the restart_lsn of a logical replication slot falls more than max_slot_wal_keep_size behind the current LSN, the slot is invalidated and its restart_lsn is reset to an invalid LSN. If such a slot with an invalid restart_lsn was specified as the source slot in pg_copy_logical_replication_slot(), the function unexpectedly caused an assertion failure. This assertion was added because restart_lsn previously could not be invalid; in v13 it can be, thanks to max_slot_wal_keep_size. Since the assertion is no longer useful, this commit removes it. This commit also changes the errcode in the error message that pg_copy_logical_replication_slot() emits when a slot with an invalid restart_lsn is specified, to a more appropriate one. Back-patch to v13 where max_slot_wal_keep_size was added and the assertion was no longer valid. Author: Fujii Masao Reviewed-by: Alvaro Herrera, Kyotaro Horiguchi Discussion: https://postgr.es/m/f91de4fb-a7ab-b90e-8132-74796e049d51@oss.nttdata.com
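A minimal sketch of the user-visible change, assuming a hypothetical slot name; the exact error text may vary by minor version:

    -- 'old_slot' has been invalidated (restart_lsn reset to invalid)
    -- because it fell more than max_slot_wal_keep_size behind.
    SELECT pg_copy_logical_replication_slot('old_slot', 'new_slot');
    -- Before: assertion failure in assert-enabled builds.
    -- After: a clean error, along the lines of:
    -- ERROR:  cannot copy a replication slot that doesn't reserve WAL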
…ples. Daniel Gustafsson and Erik Rijkers, per report from nick@cleaton Discussion: https://postgr.es/m/159275646273.679.16940709892308114570@wrigleys.postgresql.org
Commit 0d861bb, which added deduplication to nbtree, had _bt_check_unique() pass a TID to table_index_fetch_tuple_check() that isn't safe to mutate. table_index_fetch_tuple_check()'s tid argument is modified when the TID in question is not the latest visible tuple in a hot chain, though this wasn't documented. To fix, go back to using a local copy of the TID in _bt_check_unique(), and update comments above table_index_fetch_tuple_check(). Backpatch: 13-, where B-Tree deduplication was introduced.
That is, those database permissions set by GRANT. Diagnosed-by: Joseph Nahmias Discussion: https://postgr.es/m/20200614072613.GA21852@nahmias.net Backpatch-through: 9.5
Reported-by: petermpallesen@gmail.com Discussion: https://postgr.es/m/159195294959.673.5752624528747900508@wrigleys.postgresql.org Backpatch-through: 9.5
Back-patch to v13; before that, there's not really space for this kind of detail. Discussion: https://postgr.es/m/c1696f68-fa8d-7759-6a9c-eb293ab1bbc9@gmx.net
Clarify --include-foreign-data option addition. Reported-by: Masahiko Sawada, Alvaro Herrera Discussion: https://postgr.es/m/CA+fd4k62hYtce8VrEMGm6Y+1c24QBgCksXvOaH5kE8PbY+68sA@mail.gmail.com Backpatch-through: 13 only
We failed to save the slot to disk after invalidating it, so the state was lost in case of server restart or crash. Fix by marking it dirty and flushing. Also, if the slot is known to be invalidated we don't need to reason about the LSN at all -- it's known to be invalidated. Only test the LSN if the slot is known not to be invalidated. Author: Fujii Masao <masao.fujii@oss.nttdata.com> Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com> Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org> Discussion: https://postgr.es/m/17a69cfe-f1c1-a416-ee25-ae15427c69eb@oss.nttdata.com
When we initially created this parameter, in commit ff8ca5f, we left the default as "allow any protocol version" on grounds of backwards compatibility. However, that's inconsistent with the backend's default since b1abfec; protocol versions prior to 1.2 are not considered very secure; and OpenSSL has had TLSv1.2 support since 2012, so the number of PG servers that need a lesser minimum is probably quite small. On top of those things, it emerges that some popular distros (including Debian and RHEL) set MinProtocol=TLSv1.2 in openssl.cnf. Thus, far from having "allow any protocol version" behavior in practice, what we actually have as things stand is a platform-dependent lower limit. So, change our minds and set the min version to TLSv1.2. Anybody wanting to connect with a new libpq to a pre-2012 server can either set ssl_min_protocol_version=TLSv1 or accept the fallback to non-SSL. Back-patch to v13 where the aforementioned patches appeared. Patch by me, reviewed by Daniel Gustafsson Discussion: https://postgr.es/m/a9408304-4381-a5af-d259-e55d349ae4ce@2ndquadrant.com
OpenSSL's native reports about problems related to protocol version restrictions are pretty opaque and inconsistent. When we get an SSL error that is plausibly due to this, emit a hint message that includes the range of SSL protocol versions we (think we) are allowing. This should at least get the user thinking in the right direction to resolve the problem, even if the hint isn't totally accurate, which it might not be for assorted reasons. Back-patch to v13 where we increased the default minimum protocol version, thereby increasing the risk of this class of failure. Patch by me, reviewed by Daniel Gustafsson Discussion: https://postgr.es/m/a9408304-4381-a5af-d259-e55d349ae4ce@2ndquadrant.com
Apparently 1.0.1 lacks SSL_R_VERSION_TOO_HIGH and SSL_R_VERSION_TOO_LOW. Per buildfarm.
Warnings start 10M transactions before xidStopLimit, which is 11M transactions before wraparound. The sample WARNING output showed a value greater than 11M, and its HINT message predated commit 25ec228. Hence, the sample was impossible. Back-patch to 9.5 (all supported versions).
The IANA tzcode library has a feature to read a time zone file named "posixrules" and apply the daylight-savings transition dates and times therein, when it is given a POSIX-style time zone specification that lacks an explicit transition rule. However, there's a problem with that code: it doesn't work for dates past the Y2038 time_t rollover. (Effectively, all times beyond that point are treated as standard time.) The IANA crew regard this feature as legacy, so their plan is to remove it not fix it. The time frame in which that will happen is unclear, but presumably it'll happen well before 2038. Moreover, effective with the next IANA data update (probably this fall), the recommended default will be to not install a "posixrules" file in the first place. The time frame in which tzdata packagers might adopt that suggestion is likewise unclear, but at least some platforms will probably do it in the next year or so. While we could ignore that recommendation so far as PG-supplied tzdata trees are concerned, builds using --with-system-tzdata will be subject to whatever the platform's tzdata packager decides to do. Thus, whether or not we do anything, some increasing fraction of Postgres users will be exposed to the behavior observed when there is no "posixrules" file; and if we do nothing, we'll have essentially no control over the timing of that change. The best thing to do to ameliorate the uncertainty seems to be to proactively remove the posixrules-reading feature. If we do that in a scheduled release then at least we can release-note the behavioral change, rather than having users be surprised by it after a routine tzdata update. The change in question is fairly minor anyway: to be affected, you have to be using a POSIX-style timezone spec, it has to not have an explicit rule, and it has to not be one of the four traditional continental-USA zone names (EST5EDT, CST6CDT, MST7MDT, or PST8PDT), as those are special-cased. Since the default "posixrules" file provides USA DST rules, the number of people who are likely to find such a zone spec useful is probably quite small. Moreover, the fallback behavior with no explicit rule and no "posixrules" file is to apply current USA rules, so the only thing that really breaks is the DST transitions in years before 2007 (and you get the countervailing fix that transitions after 2038 will be applied). Now, some installations might have replaced the "posixrules" file, allowing e.g. EU rules to be applied to a POSIX-style timezone spec. That won't work anymore. But it's not exactly clear why this solution would be preferable to using a regular named zone. In any case, given the Y2038 issue, we need to be pushing users to stop depending on this. Back-patch into v13; it hasn't been released yet, so it seems OK to change its behavior. (Personally I think we ought to back-patch further, but I've been outvoted.) Discussion: https://postgr.es/m/1390.1562258309@sss.pgh.pa.us Discussion: https://postgr.es/m/20200621211855.6211-1-eggert@cs.ucla.edu
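A hedged illustration of which timezone settings are affected; the zone specs below are examples, not recommendations:

    -- Affected: POSIX-style spec that names a DST zone but gives no
    -- transition rule; with no "posixrules" file, current USA DST
    -- rules are applied.
    SET TimeZone = 'CET-1CEST';

    -- Unaffected: the spec carries its own explicit transition rule.
    SET TimeZone = 'CET-1CEST,M3.5.0,M10.5.0/3';

    -- Unaffected: the four special-cased traditional USA names.
    SET TimeZone = 'EST5EDT';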
Previously it was specified to be only replica. Discussion: https://postgr.es/m/20200618180058.GK7349@momjian.us Backpatch-through: 9.5
In a few cases, the documented syntax specified storage parameter values as required. Reported-by: galiev_mr@taximaxim.ru Discussion: https://postgr.es/m/159283163235.684.4482737698910467437@wrigleys.postgresql.org Backpatch-through: 9.5
Author: Jehan-Guillaume de Rorthais <jgdr@dalibo.com>
The "Disk Usage" and "HashAgg Batches" properties in the EXPLAIN ANALYZE output for HashAgg were previously only shown if the number of batches was greater than 0. Here we change this so that these properties are always shown for EXPLAIN ANALYZE formats other than "text". The idea here is that since the HashAgg could have spilled to disk if there had been more data or groups to aggregate, then it's relevant that we're clear in the EXPLAIN ANALYZE output when no spilling occurred in this particular execution of the given plan. For the "text" EXPLAIN format, we still hide these properties when no spilling occurs. This EXPLAIN format is designed to be easy for humans to read. To maintain the readability we have a higher threshold for which properties we display for this format. Discussion: https://postgr.es/m/CAApHDvo_dmNozQQTmN-2jGp1vT%3Ddxx7Q0vd%2BMvD1cGpv2HU%3DSg%40mail.gmail.com Backpatch-through: 13, where the hashagg spilling code was added.
001_ssltests.pl and 002_scram.pl both generated an extra file for a client key used in the tests that was not removed. In Debian, this causes repeated builds to fail. The code refactoring done in 4dc6355 broke the cleanup done in 001_ssltests.pl, and the new tests added in 002_scram.pl via d6e612f forgot the removal of one file. While at it, fix a second issue introduced in 002_scram.pl where we used the same file name in 001 and 002 for the temporary client key whose permissions are changed in the test; using the same file name in both tests could cause failures with parallel jobs of src/test/ssl/ if one test removes a file still needed by the other. Reported-by: Felix Lechner Author: Daniel Gustafsson, Felix Lechner Reviewed-by: Tom Lane, Michael Paquier Discussion: https://postgr.es/m/CAFHYt543sjX=Cm_aEeoejStyP47C+Y3+Wh6WbirLXsgUMaw7iw@mail.gmail.com Backpatch-through: 13
Use separate functions to save and restore error context information, as that makes the code easier to understand. Also, make it clear that the index information required for the error context is sane. Author: Andres Freund, Justin Pryzby, Amit Kapila Backpatch-through: 13, where it was introduced Discussion: https://postgr.es/m/CAA4eK1LWo+v1OWu=Sky27GTGSCuOmr7iaURNbc5xz6jO+SaPeA@mail.gmail.com
Since v13, pg_stat_statements can track the planning time of statements when the track_planning option is enabled, and its default was on. But this feature could cause severe spinlock contention in pg_stat_statements. As a result, Robins Tharakan reported that v13 beta1 showed a ~45% performance drop at high DB connection counts (compared with v12.3) during a fully-cached SELECT-only pgbench test. To avoid this performance regression under the default settings, this commit changes the default of pg_stat_statements.track_planning to off. Back-patch to v13 where pg_stat_statements.track_planning was introduced. Reported-by: Robins Tharakan Author: Fujii Masao Reviewed-by: Julien Rouhaud Discussion: https://postgr.es/m/2895b53b033c47ccb22972b589050dd9@EX13D05UWC001.ant.amazon.com
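For installations that accept the contention trade-off, the old behavior can be restored explicitly; a sketch, assuming pg_stat_statements is already in shared_preload_libraries:

    ALTER SYSTEM SET pg_stat_statements.track_planning = on;
    SELECT pg_reload_conf();

    -- v13 then populates the planning counters:
    SELECT query, plans, total_plan_time
    FROM pg_stat_statements ORDER BY total_plan_time DESC LIMIT 5;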
Previously the documentation explained that restart_lsn indicates the LSN of the oldest WAL that won't be automatically removed during checkpoints. But since v13 this is no longer true, thanks to max_slot_wal_keep_size. Back-patch to v13 where max_slot_wal_keep_size was added. Author: Fujii Masao Discussion: https://postgr.es/m/6497f1e9-3148-c5da-7e49-b2fddad9a42f@oss.nttdata.com
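The v13 view exposes a slot's standing relative to max_slot_wal_keep_size directly, e.g.:

    SELECT slot_name, restart_lsn, wal_status, safe_wal_size
    FROM pg_replication_slots;
    -- wal_status is one of: reserved, extended, unreserved, lost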
A likely copy/paste error in 98e8b48 from back in 2004 would cause temp tablespace to be reset to InvalidOid if temp_tablespaces was set to the same value as the primary tablespace in the database. This would cause shared filesets (such as for parallel hash joins) to ignore them, putting the temporary files in the default tablespace instead of the configured one. The bug is in the old code, but it appears to have been exposed only once we had shared filesets. Reviewed-By: Daniel Gustafsson Discussion: https://postgr.es/m/CABUevExg5YEsOvqMxrjoNvb3ApVyH+9jggWGKwTDFyFCVWczGQ@mail.gmail.com Backpatch-through: 9.5
Commit ecd9e9f fixed the problem in the wrong place, causing unwanted side-effects on the behavior of GetNextTempTableSpace(). Instead, let's make SharedFileSetInit() responsible for subbing in the value of MyDatabaseTableSpace when the default tablespace is called for. The convention about what is in the tempTableSpaces[] array is evidently insufficiently documented, so try to improve that. It also looks like SharedFileSetInit() is doing the wrong thing in the case where temp_tablespaces is empty. It was hard-wiring use of the pg_default tablespace, but it seems like using MyDatabaseTableSpace is more consistent with what happens for other temp files. Back-patch the reversion of PrepareTempTablespaces()'s behavior to 9.5, as ecd9e9f was. The changes in SharedFileSetInit() go back to v11 where that was introduced. (Note there is net zero code change before v11 from these two patch sets, so nothing to release-note.) Magnus Hagander and Tom Lane Discussion: https://postgr.es/m/CABUevExg5YEsOvqMxrjoNvb3ApVyH+9jggWGKwTDFyFCVWczGQ@mail.gmail.com
…ity. After running GetForeignRelSize for a foreign table, adjust rel->tuples to be at least as large as rel->rows. This prevents bizarre behavior in estimate_num_groups() and perhaps other places, especially in the scenario where rel->tuples is zero because pg_class.reltuples is zero (suggesting that ANALYZE has never been run for the table). As things stood, we'd end up estimating one group out of any GROUP BY on such a table, whereas the default group-count estimate is more likely to result in a sane plan. Also, clarify in the documentation that GetForeignRelSize has the option to override the rel->tuples value if it has a better idea of what to use than what is in pg_class.reltuples. Per report from Jeff Janes. Back-patch to all supported branches. Patch by me; thanks to Etsuro Fujita for review Discussion: https://postgr.es/m/CAMkU=1xNo9cnan+Npxgz0eK7394xmjmKg-QEm8wYG9P5-CcaqQ@mail.gmail.com
read_binary_file(), used by SQL functions pg_read_file() and friends, uses stat to determine the file length to read when not passed an explicit length as an argument. This is problematic, for example, if the file being read is a virtual file with a stat-reported length of zero. Arrange to read until EOF, or until the StringInfo data string length limit is reached, instead. Original complaint and patch by me, with significant review, corrections, advice, and code optimizations by Tom Lane. Backpatched to v11. Prior to that only paths relative to the data and log dirs were allowed for files, so no "zero length" files were reachable anyway. Reviewed-By: Tom Lane Discussion: https://postgr.es/m/flat/969b8d82-5bb2-5fa8-4eb1-f0e685c5d736%40joeconway.com Backpatch-through: 11
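A sketch of the case this fixes, assuming sufficient privilege (e.g. membership in pg_read_server_files):

    -- Many /proc files stat as zero-length; the read now continues
    -- to EOF rather than stopping at the stat-reported length of zero.
    SELECT pg_read_file('/proc/self/status');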
The cfbot and some BF animals are complaining about the previous read_binary_file commit because of ignoring return value of ‘fread’. So let's make everyone happy by testing the return value even though not strictly needed. Reported by Justin Pryzby, and suggested patch by Tom Lane. Backpatched to v11 same as the previous commit. Reported-By: Justin Pryzby Reviewed-By: Tom Lane Discussion: https://postgr.es/m/flat/969b8d82-5bb2-5fa8-4eb1-f0e685c5d736%40joeconway.com Backpatch-through: 11
Author: James Coleman <jtc331@gmail.com> Discussion: https://www.postgresql.org/message-id/flat/df652910-e985-9547-152c-9d4357dc3979%402ndquadrant.com
This error survived for 22 years; it was introduced by da63386. Reported-by: Erwin Brandstetter Discussion: https://postgr.es/m/CAGHENJ57wogGOvGXo5LgWYcqswxafLck8ELqHDR+zrkTPgs_OQ@mail.gmail.com Backpatch-through: 9.5
Author: Vignesh C Reviewed-by: Sawada Masahiko Backpatch-through: 13, where it was introduced Discussion: https://postgr.es/m/CALDaNm3Ppt71NafGY5mk3V2i3Q+mm93pVibDq-0NpW7WU67Jcg@mail.gmail.com
Commit d114cc5 overlooked the fact that pg_upgrade'd B-Tree indexes have leaf page high keys whose offset numbers do not match the one from the copy of the tuple one level up (the copy stored with a downlink for leaf page's right sibling page). This led to false positive reports of corruption from bt_index_parent_check() when it was called to verify a pg_upgrade'd index. To fix, skip comparing the offset number on pg_upgrade'd B-Tree indexes. Author: Anastasia Lubennikova <a.lubennikova@postgrespro.ru> Author: Peter Geoghegan <pg@bowt.ie> Reported-By: Andrew Bille <andrewbille@gmail.com> Diagnosed-By: Anastasia Lubennikova <a.lubennikova@postgrespro.ru> Bug: #16619 Discussion: https://postgr.es/m/16619-aaba10f83fdc1c3c@postgresql.org Backpatch: 13-, where child check was enhanced.
Since commit fd5942c (2012, 9.3-era), walsender has been sending completion tags for certain replication commands twice -- and they're not even consistent. Apparently neither libpq nor JDBC have a problem with it, but it's not kosher. Fix by removing the EndCommand() call in the common code path for them all, and inserting calls to EndReplicationCommand() in those places where it's needed. EndReplicationCommand() is a new simple function to send the completion tag for replication commands. Do this instead of sending a generic SELECT completion tag for them all, which was also pretty bogus (if innocuous). While at it, change StartReplication() to use EndReplicationCommand() instead of pg_puttextmessage(). In commit 2f96613, I failed to realize that replication commands are not close-enough kin of regular SQL commands, so the DROP_REPLICATION_SLOT tag I added is undeserved and a type pun. Take it out. Backpatch to 13, where the latter commit appeared. The duplicate tag has been sent since 9.3, but since nothing is broken, it doesn't seem worth fixing. Per complaints from Tom Lane. Discussion: https://postgr.es/m/1347966.1600195735@sss.pgh.pa.us
For a parallel btree scan to work with an array of scan keys, it should reach the BTPARALLEL_DONE state once for every distinct combination of array keys. This is required to ensure that the parallel workers don't try to seize blocks at the same time for different scan keys. We failed to update this state when we discovered that the scan keys can't be satisfied. Author: James Hunter Reviewed-by: Amit Kapila Tested-by: Justin Pryzby Backpatch-through: 10, where it was introduced Discussion: https://postgr.es/m/4248CABC-25E3-4809-B4D0-128E1BAABC3C@amazon.com
After commits 85f6b49 and 3ba59cc, we can allow parallel inserts, which were not possible earlier; parallel group members no longer conflict over relation extension and page locks. In those commits, we forgot to update comments in a few places. Author: Amit Kapila Reviewed-by: Robert Haas and Dilip Kumar Backpatch-through: 13 Discussion: https://postgr.es/m/CAFiTN-tMrQh5FFMPx5aWJ+1gi1H6JxktEhq5mDwCHgnEO5oBkA@mail.gmail.com
These two SQL functions are aliases for the same C function, so this change has no semantic effect. However, because we dropped the numeric_fac alias in HEAD (commit 76f412a), operator definitions based on that one don't port forward, causing problems for cross-version upgrade tests based on the regression database. Patch all active back branches to dodge the problem. Discussion: https://postgr.es/m/449144.1600439950@sss.pgh.pa.us
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git Source-Git-Hash: cdd5cffbddac2869f3eed0a6a37cba71ce2332cd
99% of this is docs, but also a couple of comments. No code changes. Justin Pryzby Discussion: https://postgr.es/m/20200919175804.GE30557@telsasoft.com
The previous text was confusing, per off-list discussion with Bruce Momjian.
tsearch_readline() saves the string pointer it returns to the caller for possible use in the associated error context callback. However, the caller will usually pfree that string sometime before it next calls tsearch_readline(), so that there is a window where an ereport will try to print an already-freed string. The built-in users of tsearch_readline() happen to all do that pfree at the bottoms of their loops, so that the window is effectively empty for them. However, this is not documented as a requirement, and contrib/dict_xsyn doesn't do it like that, so it seems likely that third-party dictionaries might have live bugs here. The practical consequences of this seem pretty limited in any case, since production builds wouldn't clobber the freed string immediately, besides which you'd not expect syntax errors in dictionary files being used in production. Still, it's clearly a bug waiting to bite somebody. Fix by pstrdup'ing the string to be saved for the error callback, and then pfree'ing it next time through. It's been like this for a long time, so back-patch to all supported branches. Discussion: https://postgr.es/m/48A4FA71-524E-41B9-953A-FD04EF36E2E7@yesql.se
Harmonize behavior by moving responsibility for fsyncing directories down into slru.c. In 10 and later, only the multixact directories were missed (see commit 1b02be2), and in older branches all SLRUs were missed. Back-patch to all supported releases. Reviewed-by: Andres Freund <andres@anarazel.de> Reviewed-by: Michael Paquier <michael@paquier.xyz> Discussion: https://postgr.es/m/CA%2BhUKGLtsTUOScnNoSMZ-2ZLv%2BwGh01J6kAo_DM8mTRq1sKdSQ%40mail.gmail.com
Parallel pg_dump failed if its -d parameter was a connection string containing any essential information other than host, port, or username. The same was true for pg_restore with --create. The reason is that these scenarios failed to preserve the connection string from the command line; the code felt free to replace that with just the database name when reconnecting from a pg_dump parallel worker or after creating the target database. By chance, parallel pg_restore did not suffer this defect, as long as you didn't say --create. In practice it seems that the error would be obvious only if the connstring included essential, non-default SSL or GSS parameters. This may explain why it took us so long to notice. (It also makes it very difficult to craft a regression test case illustrating the problem, since the test would fail in builds without those options.) Fix by refactoring so that ConnectDatabase always receives all the relevant options directly from the command line, rather than reconstructed values. Inject a different database name, when necessary, by relying on libpq's rules for handling multiple "dbname" parameters. While here, let's get rid of the essentially duplicate _connectDB function, as well as some obsolete nearby cruft. Per bug #16604 from Zsolt Ero. Back-patch to all supported branches. Discussion: https://postgr.es/m/16604-933f4b8791227b15@postgresql.org
This function leaked some memory while loading qual clauses for an RLS policy. While ordinarily negligible, that could build up in some repeated-reload cases, as reported by Konstantin Knizhnik. We can improve matters by borrowing the coding long used in RelationBuildRuleLock: build stringToNode's result directly in the target context, and remember to explicitly pfree the input string. This patch by no means completely guarantees zero leaks within this function, since we have no real guarantee that the catalog- reading subroutines it calls don't leak anything. However, practical tests suggest that this is enough to resolve the issue. In any case, any remaining leaks are similar to those risked by RelationBuildRuleLock and other relcache-loading subroutines. If we need to fix them, we should adopt a more global approach such as that used by the RECOVER_RELATION_BUILD_MEMORY hack. While here, let's remove the need for an expensive PG_TRY block by using MemoryContextSetParent to reparent an initially-short-lived context for the RLS data. Back-patch to all supported branches. Discussion: https://postgr.es/m/21356c12-8917-8249-b35f-1c447231922b@postgrespro.ru
Failure to do this can result in errors during evaluation of the bound expression, as illustrated by the new regression test. Back-patch to v12 where the ability for partition bounds to be expressions was added. Discussion: https://postgr.es/m/CAJV4CdrZ5mKuaEsRSbLf2URQ3h6iMtKD=hik8MaF5WwdmC9uZw@mail.gmail.com
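For context, a minimal sketch of an expression-valued partition bound of the kind affected (table names hypothetical):

    CREATE TABLE events (d date) PARTITION BY RANGE (d);
    -- The bound is an expression, evaluated when the partition is created:
    CREATE TABLE events_2020 PARTITION OF events
        FOR VALUES FROM ('2020-01-01') TO (date '2020-01-01' + 366);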
We have a dozen or so places that need to iterate over all but the first cell of a List. Prior to v13 this was typically written as

    for_each_cell(lc, lnext(list_head(list)))

Commit 1cff1b9 changed these to

    for_each_cell(lc, list, list_second_cell(list))

This patch introduces a new macro for_each_from() which expresses the start point as a list index, allowing these to be written as

    for_each_from(lc, list, 1)

This is marginally more efficient, since ForEachState.i can be initialized directly instead of backing into it from a ListCell address. It also seems clearer and less typo-prone. Some of the remaining uses of for_each_cell() look like they could profitably be changed to for_each_from(), but here I confined myself to changing uses of list_second_cell(). Also, fix for_each_cell_setup() and for_both_cell_setup() to const-ify their arguments; that's a simple oversight in 1cff1b9. Back-patch into v13, on the grounds that (1) the const-ification is a minor bug fix, and (2) it's better for back-patching purposes if we only have two ways to write these loops rather than three. In HEAD, also remove list_third_cell() and list_fourth_cell(), which were also introduced in 1cff1b9, and are unused as of cc99baa. It seems unlikely that any third-party code would have started to use them already; anyone who has can be directed to list_nth_cell instead. Discussion: https://postgr.es/m/CAApHDvpo1zj9KhEpU2cCRZfSM3Q6XGdhzuAS2v79PH7WJBkYVA@mail.gmail.com
This addresses a couple of issues with the so-said subject: - Report the correct parent relation with the index actually being rebuilt or validated. Previously, the command status remained set to the last index created for the progress of the index build and validation, which would be incorrect when working on a table that has more than one index. - Use the correct phase when waiting before the drop of the old indexes. Previously, this was reported with the same status as when waiting before the old indexes are marked as dead. Author: Matthias van de Meent, Michael Paquier Discussion: https://postgr.es/m/CAEze2WhqFgcwe1_tv=sFYhLWV2AdpfukumotJ6JNcAOQs3jufg@mail.gmail.com Backpatch-through: 12
…always". Previously the standby server didn't archive timeline history files streamed from the primary even when archive_mode is set to "always", while it archives the streamed WAL files. This could cause the PITR to fail because there was no required timeline history file in the archive. The cause of this issue was that walreceiver didn't mark those files as ready for archiving. This commit makes walreceiver mark those streamed timeline history files as ready for archiving if archive_mode=always. Then the archiver process archives the marked timeline history files. Back-patch to all supported versions. Reported-by: Grigory Smolkin Author: Grigory Smolkin, Fujii Masao Reviewed-by: David Zhang, Anastasia Lubennikova Discussion: https://postgr.es/m/54b059d4-2b48-13a4-6f43-95a087c92367@postgrespro.ru
bffe1bd introduced the jsonpath .datetime() method, but the default formats for time and timestamp contained an excess space between the time and the timezone. This commit removes that excess space, making the behavior of the .datetime() method standard-compliant. Discussion: https://postgr.es/m/94321be0-cc96-1a81-b6df-796f437f7c66%40postgrespro.ru Author: Nikita Glukhov Backpatch-through: 13
The SQL standard doesn't require the jsonpath .datetime() method to support the ISO 8601 format. But our to_json[b]() functions convert timestamps to text in the ISO 8601 format for the sake of compatibility with JavaScript. So, we add support for ISO 8601 to the jsonpath .datetime() method, for the sake of compatibility with to_json[b](). The standard mode of datetime parsing currently supports just template patterns and separators in the format string. In order to implement ISO 8601, we have to add support for double quotes in the format string to the standard parsing mode. Discussion: https://postgr.es/m/94321be0-cc96-1a81-b6df-796f437f7c66%40postgrespro.ru Author: Nikita Glukhov, revised by me Backpatch-through: 13
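A minimal round-trip sketch, assuming the v13 behavior described above:

    -- to_jsonb() renders the timestamp in ISO 8601; .datetime() can
    -- now parse that representation back into a datetime value.
    SELECT jsonb_path_query(to_jsonb(now()), '$.datetime()');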
When executing a CALL or DO in a non-atomic context (i.e., not inside a function or query), plpgsql creates a new plan each time through, as a rather hacky solution to some resource management issues. But it failed to free this plan until exit of the current procedure or DO block, resulting in serious memory bloat in procedures that called other procedures many times. Fix by remembering to free the plan, and by being more honest about restoring the previous state (otherwise, recursive procedure calls have a problem). There was also a smaller leak associated with recalculation of the "target" list of output variables. Fix that by using the statement- lifespan context to hold non-permanent values. Back-patch to v11 where procedures were introduced. Pavel Stehule and Tom Lane Discussion: https://postgr.es/m/CAFj8pRDiiU1dqym+_P4_GuTWm76knJu7z9opWayBJTC0nQGUUA@mail.gmail.com
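A sketch of the pattern that formerly bloated memory (hypothetical procedure; run outside an explicit transaction block so the context is non-atomic):

    CREATE PROCEDURE noop() LANGUAGE plpgsql AS $$ BEGIN NULL; END $$;

    DO $$
    BEGIN
        FOR i IN 1 .. 1000000 LOOP
            CALL noop();  -- previously leaked one plan per iteration
        END LOOP;
    END $$;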
Explicitly mention that primary key constraints are also included in the limitation that the constraint columns must be a superset of the partition key columns. Wording suggestion from Tom Lane. Discussion: https://postgr.es/m/64062533.78364.1601415362244@mail.yahoo.com Backpatch-through: 11, where unique constraints on partitioned tables were added
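A sketch of the documented limitation (hypothetical table; error text approximate, per the reworded message a few entries below):

    CREATE TABLE measurements (
        city_id int,
        logdate date,
        PRIMARY KEY (city_id)        -- does not include logdate
    ) PARTITION BY RANGE (logdate);
    -- ERROR:  unique constraint on partitioned table must include all
    --         partitioning columns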
PostgresNode.pm set "max_wal_senders = 5" for replication testing, but this seems to be slightly too low for our current test suite. Slower buildfarm members frequently report "number of requested standby connections exceeds max_wal_senders" failures, due to old walsenders not exiting instantaneously. Usually, the test does not fail overall because of automatic walreceiver restart, but sometimes the failure becomes visible; and in any case such retries slow down the test. That value came in with commit 89ac700, but was soon obsoleted by f6d6d29, which raised the built-in default from zero to 10; so that PostgresNode.pm is actually setting it to less than the conservative built-in default. That seems pretty pointless, so let's remove the special setting and let the default prevail, in hopes of making the TAP tests more robust. Likewise, the setting "max_replication_slots = 5" is obsolete and can be removed. While here, reverse-engineer a comment about why we're choosing less-than-default values for some other settings. (Note: before v12, max_wal_senders counted against max_connections so that the latter setting also needs some fiddling with.) Back-patch to v10 where the subscription tests were added. It's likely that the older branches aren't pushing the boundaries of max_wal_senders, but I'm disinclined to spend time trying to figure out exactly when it started to be a problem. Discussion: https://postgr.es/m/723911.1601417626@sss.pgh.pa.us
Previously, a conversion such as to_date('-44-02-01','YYYY-MM-DD') would result in '0045-02-01 BC', as the code attempted to interpret the negative year as BC, but failed to apply the correction needed for our internal handling of BC years. Fix the off-by-one problem. Also, arrange for the combination of a negative year and an explicit "BC" marker to cancel out and produce AD. This is how the negative-century case works, so it seems sane to do likewise. Continue to read "year 0000" as 1 BC. Oracle would throw an error, but we've accepted that case for a long time so I'm hesitant to change it in a back-patch. Per bug #16419 from Saeed Hubaishan. Back-patch to all supported branches. Dar Alathar-Yemen and Tom Lane Discussion: https://postgr.es/m/16419-d8d9db0a7553f01b@postgresql.org
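A before/after sketch of the conversions discussed:

    SELECT to_date('-44-02-01', 'YYYY-MM-DD');
    -- was: 0045-02-01 BC (off by one); now: 0044-02-01 BC

    SELECT to_date('0000-06-01', 'YYYY-MM-DD');
    -- year 0000 is still read as 1 BC: 0001-06-01 BC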
The error message about columns in the primary key not including all of the partition key was unclear; reword it. Backpatch all the way to pg11, where it appeared. Reported-by: Nagaraj Raj <nagaraj.sf@yahoo.com> Discussion: https://postgr.es/m/64062533.78364.1601415362244@mail.yahoo.com
This has been wrong ever since the support for multi-dimensional arrays as PL/python function arguments and return values was introduced in commit 94aceed. Backpatch-through: 10 Discussion: https://www.postgresql.org/message-id/61647b8e-961c-0362-d5d3-c8a18f4a7ec6%40iki.fi
Commit 151c0c5 neglected the possibility that a TEMP_CONFIG file would explicitly set max_wal_senders=0; as indeed buildfarm member thorntail does, so that it can test wal_level=minimal in other test suites. Hence, rather than assuming that max_wal_senders=10 will prevail if we say nothing, set it explicitly. Set max_replication_slots=10 explicitly too, just to be safe. Back-patch to v10, like the previous patch. Discussion: https://postgr.es/m/723911.1601417626@sss.pgh.pa.us
Reported-by: karimelghazouly@gmail.com Discussion: https://postgr.es/m/159854511172.24991.4373145230066586863@wrigleys.postgresql.org Backpatch-through: 9.5
Reported-by: Alexander Lakhin Discussion: https://postgr.es/m/16486-b9c93d71c02c4907@postgresql.org Backpatch-through: 9.5
I noticed while trying to run the regression tests under a low geqo_threshold that one query on information_schema.columns had unstable (as in, variable from one run to the next) output order. This is pretty unsurprising given the complexity of the underlying plan. Interestingly, of this test's three nigh-identical queries on information_schema.columns, the other two already had ORDER BY clauses guaranteeing stable output. Let's make this one look the same. Back-patch to v10 where this test was added. We've not heard field reports of the test failing, but this experience shows that it can happen when testing under even slightly unusual conditions.