Andrew Gierth [Sat, 25 Apr 2020 04:10:19 +0000 (05:10 +0100)]
Fix error case for CREATE ROLE ... IN ROLE.
CreateRole() was passing a Value node, not a RoleSpec node, for the
newly-created role name when adding the role as a member of existing
roles for the IN ROLE syntax.
This mistake went unnoticed because the node in question is used only
for error messages and is not accessed on non-error paths.
In older pg versions (such as 9.5 where this was found), this results
in an "unexpected node type" error in place of the real error. That
node type check was removed at some point, after which the code would
accidentally fail to fail on 64-bit platforms (on which accessing the
Value node as if it were a RoleSpec would be mostly harmless) or give
an "unexpected role type" error on 32-bit platforms.
Fix the code to pass the correct node type, and add an lfirst_node
assertion just in case.
Per report on irc from user m1chelangelo.
Backpatch all the way, because this error has been around for a long
time.
Tom Lane [Fri, 24 Apr 2020 21:21:44 +0000 (17:21 -0400)]
Improve placement of "display name" comment in win32_tzmap[] entries.
Sticking this comment at the end of the last line was a bad idea: it's
not particularly readable, and it tempts pgindent to mess with line
breaks within the comment, which in turn reveals that win32tzlist.pl's
clean_displayname() does the wrong thing to clean up such line breaks.
While that's not hard to fix, there's basically no excuse for this
arrangement to begin with, especially since it makes the table layout
needlessly vary across back branches with different pgindent rules.
Let's just put the comment inside the braces, instead.
This commit just moves and reformats the comments, and updates
win32tzlist.pl to match; there's no actual data change.
Per odd-looking results from Juan José Santamaría Flecha.
Back-patch, since the point is to make win32_tzmap[] look the
same in all supported branches again.
Tom Lane [Fri, 24 Apr 2020 16:02:36 +0000 (12:02 -0400)]
Repair performance regression in information_schema.triggers view.
Commit 32ff26911 introduced use of rank() into the triggers view to
calculate the spec-mandated action_order column. As written, this
prevents query constraints on the table-name column from being pushed
below the window aggregate step. That's bad for performance of this
typical usage pattern, since the view now has to be evaluated for all
tables not just the one(s) the user wants to see. It's also the cause
of some recent buildfarm failures, in which trying to evaluate the view
rows for triggers in process of being dropped resulted in "cache lookup
failed for function NNN" errors. Those rows aren't of interest to the
test script doing the query, but the filter that would eliminate them
is being applied too late. None of this happened before the rank()
call was there, so it's a regression compared to v10 and before.
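For example, the constrained query whose pushdown matters here looks
something like this (table name invented):
SELECT trigger_name, action_order
FROM information_schema.triggers
WHERE event_object_table = 'my_table';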
We can improve matters by changing the rank() call so that instead of
partitioning by OIDs, it partitions by nspname and relname, casting
those to sql_identifier so that they match the respective view output
columns exactly. The planner has enough intelligence to know that
constraints on partitioning columns are safe to push down, so this
eliminates the performance problem and the regression test failure
risk. We could make the other partitioning columns match view outputs
as well, but it'd be more complicated and the performance benefits
are questionable.
Side note: as this stands, the planner will push down constraints on
event_object_table and trigger_schema, but not on event_object_schema,
because it checks for ressortgroupref matches not expression
equivalence. That might be worth improving someday, but it's not
necessary to fix the immediate concern.
Back-patch to v11 where the rank() call was added. Ordinarily we'd not
change information_schema in released branches, but the test failure has
been seen in v12 and presumably could happen in v11 as well, so we need
to do this to keep the buildfarm happy. The change is harmless so far
as users are concerned. Some might wish to apply it to existing
installations if performance of this type of query is of concern,
but those who don't are no worse off.
I bumped catversion in HEAD as a pro forma matter (there's no
catalog incompatibility that would really require a re-initdb).
Obviously that can't be done in the back branches.
Michael Paquier [Fri, 24 Apr 2020 02:34:03 +0000 (11:34 +0900)]
Remove some unstable parts from new TAP test for archive status check
The test is proving to have timing issues when looking at archive status
files on standbys after crash recovery, while other parts of the test
rely on pg_stat_archiver as a wait point to make sure that a given state
of the archiving is reached. The coverage is not heavily impacted by
the removal of those extra tests.
Per reports from several buildfarm animals, like crake, piculet,
culicidae and francolin.
Michael Paquier [Thu, 23 Apr 2020 23:48:40 +0000 (08:48 +0900)]
Fix handling of WAL segments ready to be archived during crash recovery
78ea8b5 has fixed an issue related to the recycling of WAL segments on
standbys depending on archive_mode. However, it has introduced a
regression with the handling of WAL segments ready to be archived during
crash recovery, causing those files to be recycled without getting
archived.
This commit fixes the regression by tracking in shared memory whether a
live cluster is in crash recovery or in archive recovery, as the handling
of WAL segments ready to be archived differs between the two cases (those
WAL segments should not be removed during crash recovery), and by using
this new shared-memory state to decide whether a segment can be recycled.
Previously it was not possible to tell whether a cluster was in crash
recovery or archive recovery, because the shared state could track only
whether recovery was happening at all, which led to the problem.
A set of TAP tests is added to close the gap here, making sure that WAL
segments ready to be archived are correctly handled when a cluster is in
archive or crash recovery with archive_mode set to "on" or "always", for
both standby and primary.
Reported-by: Benoît Lobréau
Author: Jehan-Guillaume de Rorthais
Reviewed-by: Kyotaro Horiguchi, Fujii Masao, Michael Paquier
Discussion: https://postgr.es/m/20200331172229.40ee00dc@firost
Backpatch-through: 9.5
Michael Paquier [Tue, 21 Apr 2020 22:27:49 +0000 (07:27 +0900)]
Fix memory leak in libpq when using sslmode=verify-full
Each check of whether a certificate's Subject Alternative Names (SANs)
match the hostname being connected to leaked memory after the lookup.
This is broken since acd08d7 that added support for SANs in SSL
certificates, so backpatch down to 9.5.
Author: Roman Peshkurov
Reviewed-by: Hamid Akhtar, Michael Paquier, David Steele
Discussion: https://postgr.es/m/CALLDf-pZ-E3mjxd5=bnHsDu9zHEOnpgPgdnO84E2RuwMCjjyPw@mail.gmail.com
Backpatch-through: 9.5
Tom Lane [Tue, 21 Apr 2020 19:58:42 +0000 (15:58 -0400)]
Fix possible crash during FATAL exit from reindexing.
index.c supposed that it could just use a PG_TRY block to clean up the
state associated with an active REINDEX operation. However, that code
doesn't run if we do a FATAL exit --- for example, due to a SIGTERM
shutdown signal --- while the REINDEX is happening. And that state does
get consulted during catalog accesses, which makes it problematic if we
do any catalog accesses during shutdown --- for example, to clean up any
temp tables created in the session.
If this combination of circumstances occurred, we could find ourselves
trying to access already-freed memory. In debug builds that'd fairly
reliably cause an assertion failure. In production we might often
get away with it, but with some bad luck it could cause a core dump.
Another possible bad outcome is an erroneous conclusion that an
index-to-be-accessed is being reindexed; but it looks like that would
be unlikely to have any consequences worse than failing to drop temp
tables right away. (They'd still get dropped by the next session that
uses that temp schema.)
To fix, get rid of the use of PG_TRY here, and instead hook into
the transaction abort mechanisms to clean up reindex state.
Per bug #16378 from Alexander Lakhin. This has been wrong for a
very long time, so back-patch to all supported branches.
Tom Lane [Tue, 21 Apr 2020 18:23:42 +0000 (14:23 -0400)]
Fix minor violations of FunctionCallInvoke usage protocol.
Working on commit 1c455078b led me to check through FunctionCallInvoke
call sites to see if every one was being honest about (a) making sure
that fcinfo.isnull is initially false, and (b) checking its state after
the call. Sure enough, I found some violations.
The main one is that finalize_partialaggregate re-used serialfn_fcinfo
without resetting isnull, even though it clearly intends to cater for
serialfns that return NULL. There would only be an issue with a
non-strict serialfn, since it's unlikely that a serialfn would return
NULL for non-null input. We have no non-strict serialfns in core, and
there may be none in the wild either, which would account for the lack
of complaints. Still, it's clearly wrong, so back-patch that fix to
9.6 where finalize_partialaggregate was introduced.
Also, arrayfuncs.c and rowtypes.c contained various callers that were
not bothering to check for result nulls. While what's being called is
a comparison or hash function that probably *shouldn't* return null,
that's a lousy excuse for not having any check at all. There are
existing places that just Assert(!fcinfo->isnull) in comparable
situations, so I added that to the places that were calling btree
comparison or hash support functions. In the places calling
boolean-returning equality functions, it's quite cheap to have them
treat isnull as FALSE, so make those places do that. Also remove some
"locfcinfo->isnull = false" assignments that are unnecessary given the
assumption that no previous call returned null. These changes seem like
mostly neatnik-ism or debugging support, so I didn't back-patch.
When a partition is detached, any triggers that had been cloned from its
parent were not properly disentangled from its parent triggers.
This resulted in triggers that could not be dropped because they
depended on the trigger in the no-longer-parent table:
ALTER TABLE t DETACH PARTITION t1;
DROP TRIGGER trig ON t1;
ERROR: cannot drop trigger trig on table t1 because trigger trig on table t requires it
HINT: You can drop trigger trig on table t instead.
Moreover the table can no longer be re-attached to its parent, because
the trigger name is already taken:
ALTER TABLE t ATTACH PARTITION t1 FOR VALUES FROM (1) TO (2);
ERROR: trigger "trig" for relation "t1" already exists
The former is a bug introduced in commit 86f575948c77. (The latter is
not necessarily a bug, but it makes the bug more uncomfortable.)
To avoid the complexity that would be needed to tell whether the trigger
has a local definition that has to be merged with the one coming from
the parent table, establish the behavior that the trigger is removed
when the table is detached.
Magnus Hagander [Mon, 20 Apr 2020 10:53:40 +0000 (12:53 +0200)]
Allow pg_read_all_stats to access all stats views again
The views pg_stat_progress_* had not gotten the memo that
pg_read_all_stats is supposed to be able to read all statistics. Also
make a pass over all text-returning pg_stat_xyz functions that could
return "insufficient privilege" and make sure they also respect
pg_read_all_stats.
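For example (role name invented), the access pattern these views should
honor is something like:
GRANT pg_read_all_stats TO monitor_user;
-- then, connected as monitor_user:
SELECT * FROM pg_stat_progress_vacuum;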
Reported-by: Andrey M. Borodin
Reviewed-by: Andrey M. Borodin, Kyotaro Horiguchi
Discussion: https://postgr.es/m/13145F2F-8458-4977-9D2D-7B2E862E5722@yandex-team.ru
Tom Lane [Sat, 18 Apr 2020 18:02:44 +0000 (14:02 -0400)]
Fix race conditions in synchronous standby management.
We have repeatedly seen the buildfarm reach the Assert(false) in
SyncRepGetSyncStandbysPriority. This apparently is due to failing to
consider the possibility that the sync_standby_priority values in
shared memory might be inconsistent; but they will be whenever only
some of the walsenders have updated their values after a change in
the synchronous_standby_names setting. That function is vastly too
complex for what it does, anyway, so rewriting it seems better than
trying to apply a band-aid fix.
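For example, the inconsistency window opens after a change such as
(standby names invented):
ALTER SYSTEM SET synchronous_standby_names = 'FIRST 2 (s1, s2, s3)';
SELECT pg_reload_conf();
until every walsender has recomputed its priority.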
Furthermore, the API of SyncRepGetSyncStandbys is broken by design:
it returns a list of WalSnd array indexes, but there is nothing
guaranteeing that the contents of the WalSnd array remain stable.
Thus, if some walsender exits and then a new walsender process
takes over that WalSnd array slot, a caller might make use of
WAL position data that it should not, potentially leading to
incorrect decisions about whether to release transactions that
are waiting for synchronous commit.
To fix, replace SyncRepGetSyncStandbys with a new function
SyncRepGetCandidateStandbys that copies all the required data
from shared memory while holding the relevant mutexes. If the
associated walsender process then exits, this data is still safe to
make release decisions with, since we know that that much WAL *was*
sent to a valid standby server. This incidentally means that we no
longer need to treat sync_standby_priority as protected by the
SyncRepLock rather than the per-walsender mutex.
SyncRepGetSyncStandbys is no longer used by the core code, so remove
it entirely in HEAD. However, it seems possible that external code is
relying on that function, so do not remove it from the back branches.
Instead, just remove the known-incorrect Assert. When the bug occurs,
the function will return a too-short list, which callers should treat
as meaning there are not enough sync standbys, which seems like a
reasonably safe fallback until the inconsistent state is resolved.
Moreover it's bug-compatible with what has been happening in non-assert
builds. We cannot do anything about the walsender-replacement race
condition without an API/ABI break.
The bogus assertion exists back to 9.6, but 9.6 is sufficiently
different from the later branches that the patch doesn't apply at all.
I chose to just remove the bogus assertion in 9.6, feeling that the
probability of a bad outcome from the walsender-replacement race
condition is too low to justify rewriting the whole patch for 9.6.
Andrew Dunstan [Fri, 17 Apr 2020 18:52:42 +0000 (14:52 -0400)]
Use a slightly more liberal regex to detect Visual Studio version
Apparently in some language versions of Visual Studio nmake outputs some
material after the version number and before the end of the line. This
has been seen in Chinese versions. Therefore, we no longer demand that
the version string comes at the end of a line.
Michael Paquier [Fri, 17 Apr 2020 01:45:20 +0000 (10:45 +0900)]
Fix minor memory leak in pg_basebackup and pg_receivewal
The result of the query used to retrieve the WAL segment size from the
backend was not getting freed in two code paths. Both pg_basebackup and
pg_receivewal exit immediately if a failure happened on this query, so
this was not an actual problem, but it could be an issue if this code
gets used for other tools in different ways, be they future tools in
this code tree or external, existing, ones.
Oversight in commit fc49e24, so backpatch down to 11.
Author: Jie Zhang
Discussion: https://postgr.es/m/970ad9508461469b9450b64027842331@G08CNEXMBPEKD06.g08.fujitsu.local
Backpatch-through: 11
Tom Lane [Thu, 16 Apr 2020 18:45:54 +0000 (14:45 -0400)]
Fix cache reference leak in contrib/sepgsql.
fixup_whole_row_references() did the wrong thing with a dropped column,
resulting in a commit-time warning about a cache reference leak.
I (tgl) added a test case exercising this, but back-patched the test
only as far as v10; the patch didn't apply cleanly to 9.6 and it
didn't seem worth the trouble to adapt it. The bug is pretty old
though, so apply the code change all the way back.
Tom Lane [Sat, 11 Apr 2020 16:29:06 +0000 (12:29 -0400)]
Clear dangling pointer to avoid bogus EXPLAIN printout in a corner case.
ExecReScanHashJoin will destroy the join's hash table if it expects
that the inner relation will produce different rows on rescan.
Up to now it's not bothered to clear the additional pointer to that
hash table that exists in the child HashState node. However, it's
possible for the query to terminate without building a fresh hash
table (this happens if the outer relation is found to be empty
during the final rescan). So we can end with a dangling pointer
to a deleted hash table. That was harmless originally, but since
9.0 EXPLAIN ANALYZE has used that pointer to print hash table
statistics. In debug builds this reproducibly results in garbage
statistics. In non-debug builds there's frequently no ill effects,
but in principle one could get wrong EXPLAIN ANALYZE output, or
perhaps even a crash if free() has released the hashtable memory
back to the OS.
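For example (table names invented), the statistics in question are the
Buckets/Batches/Memory Usage lines printed under the Hash node of a plan
such as:
EXPLAIN ANALYZE
SELECT * FROM orders o JOIN customers c ON o.customer_id = c.id;
The dangling-pointer case additionally requires the join to be rescanned
with an empty outer relation, as described above.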
To fix, just make sure we clear the additional pointer when destroying
the hash table. In problematic cases, EXPLAIN ANALYZE will then print
no hashtable statistics (reverting to its pre-9.0 behavior). This isn't
ideal, but since the problem manifests only in unusual corner cases,
it's hard to justify taking any risks to do better in the back
branches. A follow-on patch will improve matters in HEAD.
Konstantin Knizhnik and Tom Lane, per diagnosis by Thomas Munro
of a trouble report from Alvaro Herrera.
Tom Lane [Fri, 10 Apr 2020 17:12:58 +0000 (13:12 -0400)]
Doc: clarify locking requirements for ALTER TABLE ADD FOREIGN KEY.
The docs explained that a SHARE ROW EXCLUSIVE lock is needed on the
referenced table, but failed to say the same about the table being
altered. Since the page says that ACCESS EXCLUSIVE lock is taken
unless otherwise stated, this left readers with the wrong conclusion.
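For example (names invented), the command in question is:
ALTER TABLE orders
  ADD FOREIGN KEY (customer_id) REFERENCES customers (id);
which, per the clarified docs, takes SHARE ROW EXCLUSIVE on both the
altered table and the referenced table.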
Tom Lane [Fri, 10 Apr 2020 14:44:10 +0000 (10:44 -0400)]
Doc: sync CREATE GROUP syntax synopsis with CREATE ROLE.
CREATE GROUP is an exact alias for CREATE ROLE, and CREATE USER is
almost an exact alias, as can easily be confirmed by checking the
code. So the man page syntax descriptions ought to match up. The
last few additions of role options seem to have forgotten to update
create_group.sgml, though. Fix that, and add a naggy reminder to
create_role.sgml in hopes of not forgetting again.
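For example (role name invented), the alias relationship means that
CREATE GROUP staff;
behaves exactly like
CREATE ROLE staff;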
Tom Lane [Thu, 9 Apr 2020 19:38:43 +0000 (15:38 -0400)]
Further cleanup of ts_headline code.
Suppress a probably-meaningless uninitialized-variable warning
(induced by my previous patch, I'm sorry to say).
Improve mark_hl_fragments()'s test for overlapping cover strings:
it failed to consider the possibility that the current string is
strictly within another one. That's unlikely given the preceding
splitting into MaxWords fragments, but I don't think it's impossible.
Tom Lane [Thu, 9 Apr 2020 17:19:23 +0000 (13:19 -0400)]
Fix default text search parser's ts_headline code for phrase queries.
This code could produce very poor results when asked to highlight a
string based on a query using phrase-match operators. The root cause
is that hlCover(), which is supposed to find a minimal substring that
matches the query, was written assuming that word position is not
significant. I'm only 95% convinced that its algorithm was correct even
for plain AND/OR queries; but it definitely fails completely for phrase
matches, causing it to possibly not identify a cover string at all.
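For example (sample text invented), the affected case is a phrase query
passed to ts_headline, such as:
SELECT ts_headline('english',
  'the quick brown fox jumped over the lazy dog',
  to_tsquery('english', 'lazy <-> dog'));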
Hence, rewrite hlCover() with a less-tense algorithm that just tries
all the possible substrings, earlier and shorter ones first. (This is
not as bad as it sounds performance-wise, because all of the string
matching has been done already: the repeated tsquery match checks
boil down to pointer comparisons.)
Unfortunately, since that approach produces more candidate cover
strings than before, it also exposes that there were bugs in the
heuristics in mark_hl_words() for selecting a best cover string.
Fixes there include:
* Do not apply the ShortWord filter to words that appear in the query.
* Remove a misguided optimization for quickly rejecting a cover.
* Fix order-of-operation bug that could cause computation of a
wrong figure of merit (poslen) when shortening a cover.
* Change the preference rule so that candidate headlines that do not
include their whole cover string (after MaxWords trimming) are lowest
priority, since they may not actually satisfy the user's query.
This results in some changes in existing regression test cases,
but they all seem reasonable. Note in particular that the tests
involving strings like "1 2 3" were previously being affected by
the ShortWord filter, masking the normal matching behavior.
Per bug #16345 from Augustinas Jokubauskas; the new test cases are
based on that example. Back-patch to 9.6 where phrase search was
added to tsquery.
Tom Lane [Thu, 9 Apr 2020 16:37:00 +0000 (12:37 -0400)]
Cosmetic improvements for default text search parser's ts_headline code.
This code was woefully unreadable and under-commented. Try to improve
matters by adding comments, using some macros to make complicated
if-tests more readable, using boolean type where appropriate, etc.
There are a couple of tiny coding improvements too, but this commit
includes (I hope) no behavioral change.
Nonetheless, back-patch as far as 9.6, because a followup bug-fixing
commit depends on this.
Amit Kapila [Thu, 9 Apr 2020 03:55:57 +0000 (09:25 +0530)]
Allow parallel create index to accumulate buffer usage stats.
Currently, we don't account for buffer usage incurred by parallel workers
for parallel create index. This commit allows each worker to record the
buffer usage stats and leader backend to accumulate that stats at the
end of the operation. This will allow pg_stat_statements to display
correct buffer usage stats for (parallel) create index command.
Reported-by: Julien Rouhaud
Author: Sawada Masahiko
Reviewed-by: Dilip Kumar, Julien Rouhaud and Amit Kapila
Backpatch-through: 11, where this was introduced
Discussion: https://postgr.es/m/20200328151721.GB12854@nol
Tom Lane [Wed, 8 Apr 2020 15:23:39 +0000 (11:23 -0400)]
Fix pg_dump/pg_restore to restore event trigger comments later.
Repair an oversight in commit 8728b2c70: if we're postponing restore
of event triggers to the end, we must also postpone restoring any
comments on them, since of course we cannot create the comments first.
(This opens yet another opportunity for an event trigger to bollix
the restore, but there's no help for that.)
Per bug #16346 from Alexander Lakhin.
Like the previous commit, back-patch to all supported branches.
Tom Lane [Wed, 8 Apr 2020 00:50:02 +0000 (20:50 -0400)]
Fix circle_in to accept "(x,y),r" as it's advertised to do.
Our documentation describes four allowed input syntaxes for circles,
but the regression tests tried only three ... with predictable
consequences. Remarkably, this has been wrong since the circle
datatype was added in 1997, but nobody noticed till now.
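For reference, the previously untested syntax is the one accepted by:
SELECT '(1,2),3'::circle;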
Tom Lane [Tue, 7 Apr 2020 20:30:55 +0000 (16:30 -0400)]
Adjust bytea get_bit/set_bit to cope with bytea strings > 256MB.
Since the existing bit number argument can't exceed INT32_MAX, it's
not possible for these functions to manipulate bits beyond the first
256MB of a bytea value. However, it'd be good if they could do at
least that much, and not fall over entirely for longer bytea values.
Adjust the comparisons to be done in int64 arithmetic so that works.
Also tweak the error reports to show sane values in case of overflow.
Also add some test cases to improve the miserable code coverage
of these functions.
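For example, the bit-number argument now compared in int64 arithmetic is
the second one in:
SELECT get_bit('\x0f'::bytea, 3);
SELECT set_bit('\x0f'::bytea, 3, 0);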
Apply patch to back branches only; HEAD has a better solution
as of commit 26a944cf2.
Michael Paquier [Mon, 6 Apr 2020 02:05:54 +0000 (11:05 +0900)]
Preserve clustered index after rewrites with ALTER TABLE
A table rewritten by ALTER TABLE would lose tracking of an index usable
for CLUSTER. This setting is tracked by pg_index.indisclustered and is
controlled by ALTER TABLE, so some extra work was needed to restore it
properly. Note that ALTER TABLE only marks the index that can be used
for clustering, and does not do the actual operation.
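For example (names invented), the preserved property can be checked like
this:
ALTER TABLE measurements CLUSTER ON measurements_pkey;
-- after an ALTER TABLE rewrite, this should still return true:
SELECT indisclustered FROM pg_index
WHERE indexrelid = 'measurements_pkey'::regclass;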
Author: Amit Langote, Justin Pryzby
Reviewed-by: Ibrar Ahmed, Michael Paquier
Discussion: https://postgr.es/m/20200202161718.GI13621@telsasoft.com
Backpatch-through: 9.5
Andres Freund [Mon, 6 Apr 2020 00:47:30 +0000 (17:47 -0700)]
Use TransactionXmin instead of RecentGlobalXmin in heap_abort_speculative().
There's a very low risk that RecentGlobalXmin could be far enough in
the past to be older than relfrozenxid, or even wrapped
around. Luckily the consequences of that having happened wouldn't be
too bad - the page wouldn't be pruned for a while.
Avoid that risk by using TransactionXmin instead. As that's announced
via MyPgXact->xmin, it is protected against wrapping around (see code
comments for details around relfrozenxid).
Author: Andres Freund
Discussion: https://postgr.es/m/20200328213023.s4eyijhdosuc4vcj@alap3.anarazel.de
Backpatch: 9.5-
Tom Lane [Fri, 3 Apr 2020 17:15:30 +0000 (13:15 -0400)]
Fix bugs in gin_fuzzy_search_limit processing.
entryGetItem()'s three code paths each contained bugs associated
with filtering the entries for gin_fuzzy_search_limit.
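For context (table and column names invented), gin_fuzzy_search_limit
puts a soft cap on the number of items a GIN index scan returns, e.g.:
SET gin_fuzzy_search_limit = 1000;
SELECT count(*) FROM docs
WHERE body_tsv @@ to_tsquery('english', 'postgres');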
The posting-tree path failed to advance "advancePast" after having
decided to filter an item. If we ran out of items on the current
page and needed to advance to the next, what would actually happen
is that entryLoadMoreItems() would re-load the same page. Eventually,
the random dropItem() test would accept one of the same items it'd
previously rejected, and we'd move on --- but it could take awhile
with small gin_fuzzy_search_limit. To add insult to injury, this
case would inevitably cause entryLoadMoreItems() to decide it needed
to re-descend from the root, making things even slower.
The posting-list path failed to implement gin_fuzzy_search_limit
filtering at all, so that all entries in the posting list would
be returned.
The bitmap-result path used a "gotitem" variable that it failed to
update in the one place where it'd actually make a difference, ie
at the one "continue" statement. I think this was unreachable in
practice, because if we'd looped around then it shouldn't be the
case that the entries on the new page are before advancePast.
Still, the "gotitem" variable was contributing nothing to either
clarity or correctness, so get rid of it.
Refactor all three loops so that the termination conditions are
more alike and less unreadable.
The code coverage report showed that we had no coverage at all for
the re-descend-from-root code path in entryLoadMoreItems(), which
seems like a very bad thing, so add a test case that exercises it.
We also had exactly no coverage for gin_fuzzy_search_limit, so add a
simplistic test case that at least hits those code paths a little bit.
Tom Lane [Fri, 3 Apr 2020 15:24:56 +0000 (11:24 -0400)]
Fix bogus CALLED_AS_TRIGGER() defenses.
contrib/lo's lo_manage() thought it could use
trigdata->tg_trigger->tgname in its error message about
not being called as a trigger. That naturally led to a core dump.
unique_key_recheck() figured it could Assert that fcinfo->context
is a TriggerData node in advance of having checked that it's
being called as a trigger. That's harmless in production builds,
and perhaps not that easy to reach in any case, but it's logically
wrong.
The first of these per bug #16340 from William Crowell;
the second from manual inspection of other CALLED_AS_TRIGGER
call sites.
Back-patch the lo.c change to all supported branches, the
other to v10 where the thinko crept in.
Bruce Momjian [Thu, 2 Apr 2020 21:42:09 +0000 (17:42 -0400)]
doc: remove unnecessary INNER keyword
A join added in commit 9b2009c4cf did not use the INNER keyword, while
the existing query used it. It was cleaner to remove the existing INNER
keyword.
Reported-by: Peter Eisentraut
Discussion: https://postgr.es/m/a1ffbfda-59d2-5732-e5fb-3df8582b6434@2ndquadrant.com
Tom Lane [Wed, 1 Apr 2020 18:49:49 +0000 (14:49 -0400)]
Check equality semantics for unique indexes on partitioned tables.
We require the partition key to be a subset of the set of columns
being made unique, so that physically-separate indexes on the different
partitions are sufficient to enforce the uniqueness constraint.
The existing code checked that the listed columns appear, but did not
inquire into the index semantics, which is a serious oversight given
that different index opclasses might enforce completely different
notions of uniqueness.
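For example (names invented), the rule in the previous paragraph accepts
the first index below and rejects the second:
CREATE TABLE events (id int, created date)
  PARTITION BY RANGE (created);
CREATE UNIQUE INDEX ON events (id, created);  -- partition key included
CREATE UNIQUE INDEX ON events (id);           -- rejected: created missing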
Ideally, perhaps, we'd just match the partition key opfamily to the
index opfamily. But hash partitioning uses hash opfamilies which we
can't directly match to btree opfamilies. Hence, look up the equality
operator in each family, and accept if it's the same operator. This
should be okay in a fairly general sense, since the equality operator
ought to precisely represent the opfamily's notion of uniqueness.
A remaining weak spot is that we don't have a cross-index-AM notion of
which opfamily member is "equality". But we know which one to use for
hash and btree AMs, and those are the only two that are relevant here
at present. (Any non-core AMs that know how to enforce equality are
out of luck, for now.)
Back-patch to v11 where this feature was introduced.
Bruce Momjian [Tue, 31 Mar 2020 21:07:43 +0000 (17:07 -0400)]
doc: clarify which table creation is used for inheritance part.
Previously people might assume that the partition syntax version of
CREATE TABLE is to be used for the inheritance partition table example;
mention that the non-partitioned version should be used.
Tom Lane [Tue, 31 Mar 2020 16:57:55 +0000 (12:57 -0400)]
Teach pg_ls_dir_files() to ignore ENOENT failures from stat().
Buildfarm experience shows that this function can fail with ENOENT
if some other process unlinks a file between when we read the directory
entry and when we try to stat() it. The problem is old but we had
not noticed it until 085b6b667 added regression test coverage.
To fix, just ignore ENOENT failures. There is one other case that
this might hide: a symlink that points to nowhere. That seems okay
though, at least better than erroring.
Back-patch to v10 where this function was added, since the regression
test cases were too.
Tom Lane [Tue, 31 Mar 2020 15:37:44 +0000 (11:37 -0400)]
Back-patch addition of stack overflow and interrupt checks for lquery.
Experimentation shows that it's not hard at all to drive the
old implementation of "ltree ~ lquery" match to stack overflow,
so throw in a check_stack_depth() call, as I just did in HEAD.
I wasn't able to make it take a long time, because all the
pathological cases I tried hit stack overflow first; but
I bet there are some others that do take a long time, so add
CHECK_FOR_INTERRUPTS() too.
Tom Lane [Mon, 30 Mar 2020 15:14:58 +0000 (11:14 -0400)]
Be more careful about extracting encoding from locale strings on Windows.
GetLocaleInfoEx() can fail on strings that setlocale() was perfectly
happy with. A common way for that to happen is if the locale string
is actually a Unix-style string, say "et_EE.UTF-8". In that case,
what's after the dot is an encoding name, not a Windows codepage number;
blindly treating it as a codepage number led to failure, with a fairly
silly error message. Hence, check to see if what's after the dot is
all digits, and if not, treat it as a literal encoding name rather than
a codepage number. This will do the right thing with many Unix-style
locale strings, and produce a more sensible error message otherwise.
Somewhat independently of that, treat a zero (CP_ACP) result from
GetLocaleInfoEx() as meaning that we must use UTF-8 encoding.
Tom Lane [Sun, 29 Mar 2020 22:54:19 +0000 (18:54 -0400)]
Doc: correct misstatement about ltree label maximum length.
The documentation says that the max length is 255 bytes, but
code inspection says it's actually 255 characters; and relevant
lengths are stored as uint16 so that that works.
Tom Lane [Sat, 28 Mar 2020 21:09:51 +0000 (17:09 -0400)]
Protect against overflow of ltree.numlevel and lquery.numlevel.
These uint16 fields could be overflowed by excessively long input,
producing strange results. Complain for invalid input.
Likewise check for out-of-range values of the repeat counts in lquery.
(We don't try too hard on that one, notably not bothering to detect
if atoi's result has overflowed.)
Also detect length overflow in ltree_concat.
In passing, be more consistent about whether "syntax error" messages
include the type name. Also, clarify the documentation about what
the size limit is.
This has been broken for a long time, so back-patch to all supported
branches.
Nikita Glukhov, reviewed by Benjie Gillam and Tomas Vondra
Andres Freund [Sat, 28 Mar 2020 18:52:11 +0000 (11:52 -0700)]
Ensure snapshot is registered within ScanPgRelation().
In 9.4 I added support to use a historical snapshot in
ScanPgRelation(), while adding logical decoding. Unfortunately a
conflict with the concurrent removal of SnapshotNow was incorrectly
resolved, leading to an unregistered snapshot being used.
It is not correct to use an unregistered (or non-active) snapshot for
anything non-trivial, because catalog invalidations can cause the
snapshot to be invalidated.
Luckily it seems unlikely to actively cause problems in practice, as
ScanPgRelation() requires that we already have a lock on the relation,
we only look for a single row, and we don't appear to rely on the
result's tid to be correct. It however is clearly wrong and potential
negative consequences would likely be hard to find. So it seems worth
backpatching the fix, even without a concrete hazard.
Tom Lane [Thu, 26 Mar 2020 22:06:55 +0000 (18:06 -0400)]
Ensure that plpgsql cleans up cleanly during parallel-worker exit.
plpgsql_xact_cb ought to treat events XACT_EVENT_PARALLEL_COMMIT and
XACT_EVENT_PARALLEL_ABORT like XACT_EVENT_COMMIT and XACT_EVENT_ABORT
respectively, since its goal is to do process-local cleanup. This
oversight caused plpgsql's end-of-transaction cleanup to not get done
in parallel workers. Since a parallel worker will exit just after the
transaction cleanup, the effects of this are limited. I couldn't find
any case in the core code with user-visible effects, but perhaps there
are some in extensions. In any case it's wrong, so let's fix it before
it bites us not after.
In passing, add some comments around the handling of expression
evaluation resources in DO blocks. There's no live bug there, but it's
quite unobvious what's happening; at least I thought so. This isn't
related to the other issue, except that I found both things while poking
at expression-evaluation performance.
Back-patch the plpgsql_xact_cb fix to 9.5 where those event types
were introduced, and the DO-block commentary to v11 where DO blocks
gained the ability to issue COMMIT/ROLLBACK.
Peter Eisentraut [Thu, 26 Mar 2020 10:51:39 +0000 (11:51 +0100)]
Drop slot's LWLock before returning from SaveSlotToPath()
When SaveSlotToPath() is called with elevel=LOG, the early exits didn't
release the slot's io_in_progress_lock.
This could result in a walsender being stuck on the lock forever. A
possible way to get into this situation is if the offending code paths
are triggered in a low disk space situation.
Andres Freund [Mon, 23 Mar 2020 20:57:43 +0000 (13:57 -0700)]
Fix potential crash after constraint violation errors in partitioned tables.
During the reporting of constraint violations for partitioned tables,
ExecPartitionCheckEmitError(), ExecConstraints(),
ExecWithCheckOptions() set the slot descriptor of the input slot to
the root partition's tuple desc. That's generally problematic when
the slot could be used by other routines, but can cause crashes after
the introduction of slots with "fixed" tuple descriptors in ad7dbee368a.
The problem likely escaped detection so far for two reasons: First,
currently the only known way that these routines are used with a
partitioned table that is not "owned" by partitioning code is when
"fast defaults" are used for the child partition. Second, as an error
is raised afterwards, an "external" slot that had its descriptor
changed is very unlikely to continue being used.
Even though the issue currently is only known to cause a crash for
11 (as that has both fast defaults and "fixed" slot descriptors), it
seems worth applying the fix to 10 too. Potentially changing random
slots is hazardous.
Regression tests will be added in a separate commit, as it seems best
to add them for master and 12 too.
Reported-By: Daniel WM
Author: Andres Freund
Bug: #16293
Discussion: https://postgr.es/m/16293-26f5777d10143a66@postgresql.org
Backpatch: 11, 10 only
Tom Lane [Mon, 23 Mar 2020 16:42:15 +0000 (12:42 -0400)]
Doc: explain that LIKE et al can be used in ANY (sub-select) etc.
This wasn't stated anywhere, and it's perhaps not that obvious,
since we get questions about it from time to time. Also undocumented
was that the parser actually translates these into operators.
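For example (names invented), the construct now documented is:
SELECT * FROM products
WHERE name LIKE ANY (SELECT pattern FROM filters);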
Tom Lane [Mon, 23 Mar 2020 15:58:01 +0000 (11:58 -0400)]
Fix our getopt_long's behavior for a command line argument of just "-".
src/port/getopt_long.c failed on such an argument, always seeing it
as an unrecognized switch. This is unhelpful; better is to treat such
an item as a non-switch argument. That behavior is what we find in
GNU's getopt_long(); it's what src/port/getopt.c does; and it is
required by POSIX for getopt(), which getopt_long() ought to be
generally a superset of. Moreover, it's expected by ecpg, which
intends an argument of "-" to mean "read from stdin". So fix it.
Also add some documentation about ecpg's behavior in this area, since
that was miserably underdocumented. I had to reverse-engineer it
from the code.
Per bug #16304 from James Gray. Back-patch to all supported branches,
since this has been broken forever.
Michael Paquier [Mon, 23 Mar 2020 04:38:14 +0000 (13:38 +0900)]
Doc: Fix type of some storage parameters in CREATE TABLE page
autovacuum_vacuum_scale_factor and autovacuum_analyze_scale_factor have
been documented as "float4", but "floating type" is used in this case
for GUCs and relation options in the documentation.
Noah Misch [Sun, 22 Mar 2020 16:24:09 +0000 (09:24 -0700)]
Revert "Skip WAL for new relfilenodes, under wal_level=minimal."
This reverts commit cb2fd7eac285b1b0a24eeb2b8ed4456b66c5a09f. Per
numerous buildfarm members, it was incompatible with parallel query, and
a test case assumed LP64. Back-patch to 9.5 (all supported versions).
Noah Misch [Sat, 21 Mar 2020 16:38:26 +0000 (09:38 -0700)]
Skip WAL for new relfilenodes, under wal_level=minimal.
Until now, only selected bulk operations (e.g. COPY) did this. If a
given relfilenode received both a WAL-skipping COPY and a WAL-logged
operation (e.g. INSERT), recovery could lose tuples from the COPY. See
src/backend/access/transam/README section "Skipping WAL for New
RelFileNode" for the new coding rules. Maintainers of table access
methods should examine that section.
To maintain data durability, just before commit, we choose between an
fsync of the relfilenode and copying its contents to WAL. A new GUC,
wal_skip_threshold, guides that choice. If this change slows a workload
that creates small, permanent relfilenodes under wal_level=minimal, try
adjusting wal_skip_threshold. Users setting a timeout on COMMIT may
need to adjust that timeout, and log_min_duration_statement analysis
will reflect time consumption moving to COMMIT from commands like COPY.
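For example (the size value is only illustrative), the knob can be
adjusted with:
ALTER SYSTEM SET wal_skip_threshold = '4MB';
followed by a configuration reload; it only matters under
wal_level=minimal.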
Internally, this requires a reliable determination of whether
RollbackAndReleaseCurrentSubTransaction() would unlink a relation's
current relfilenode. Introduce rd_firstRelfilenodeSubid. Amend the
specification of rd_createSubid such that the field is zero when a new
rel has an old rd_node. Make relcache.c retain entries for certain
dropped relations until end of transaction.
Back-patch to 9.5 (all supported versions). This introduces a new WAL
record type, XLOG_GIST_ASSIGN_LSN, without bumping XLOG_PAGE_MAGIC. As
always, update standby systems before master systems. This changes
sizeof(RelationData) and sizeof(IndexStmt), breaking binary
compatibility for affected extensions. (The most recent commit to
affect the same class of extensions was 089e4d405d0f3b94c74a2c6a54357a84a681754b.)
Kyotaro Horiguchi, reviewed (in earlier, similar versions) by Robert
Haas. Heikki Linnakangas and Michael Paquier implemented earlier
designs that materially clarified the problem. Reviewed, in earlier
designs, by Andrew Dunstan, Andres Freund, Alvaro Herrera, Tom Lane,
Fujii Masao, and Simon Riggs. Reported by Martijn van Oosterhout.
Noah Misch [Sat, 21 Mar 2020 16:38:33 +0000 (09:38 -0700)]
Back-patch log_newpage_range().
Back-patch a subset of commit 9155580fd5fc2a0cbb23376dfca7cd21f59c2c7b
to v11, v10, 9.6, and 9.5. Include the latest repairs to this function.
Use a new XLOG_FPI_MULTI value instead of reusing XLOG_FPI. That way,
if an older server reads WAL from this function, that server will PANIC
instead of applying just one page of the record. The next commit adds a
call to this function.
Noah Misch [Sat, 21 Mar 2020 16:38:26 +0000 (09:38 -0700)]
During heap rebuild, lock any TOAST index until end of transaction.
swap_relation_files() calls toast_get_valid_index() to find and lock
this index, just before swapping with the rebuilt TOAST index. The
latter function releases the lock before returning. Potential for
mischief is low; a concurrent session can issue ALTER INDEX ... SET
(fillfactor = ...), which is not alarming. Nonetheless, changing
pg_class.relfilenode without a lock is unconventional. Back-patch to
9.5 (all supported versions), because another fix needs this.
Noah Misch [Sat, 21 Mar 2020 16:38:26 +0000 (09:38 -0700)]
Fix cosmetic blemishes involving rd_createSubid.
Remove an obsolete comment from AtEOXact_cleanup(). Restore formatting
of a comment in struct RelationData, mangled by the pgindent run in
commit 9af4159fce6654aa0e081b00d02bca40b978745c. Back-patch to 9.5 (all
supported versions), because another fix stacks on this.
Bruce Momjian [Thu, 19 Mar 2020 19:20:55 +0000 (15:20 -0400)]
pg_upgrade: make get_major_server_version() err msg consistent
This patch fixes the error message in get_major_server_version() to be
"could not parse version file", and uses the full file path name, rather
than just the data directory path.
Also, commit 4109bb5de4 added the cause of the failure to the "could
not open" error message, and improved quoting. This patch backpatches
the "could not open" cause to PG 12, where it was first widely used, and
backpatches the quoting fix in that patch to all supported releases.
Reported-by: Tom Lane
Discussion: https://postgr.es/m/87pne2w98h.fsf@wibble.ilmari.org
Tom Lane [Tue, 17 Mar 2020 19:05:17 +0000 (15:05 -0400)]
Doc: clarify behavior of "anyrange" pseudo-type.
I noticed that we completely failed to document the restriction
that an "anyrange" result type has to be inferred from an "anyrange"
input. The docs also were less clear than they could be about the
relationship between "anyrange" and "anyarray".
Tom Lane [Tue, 17 Mar 2020 16:09:27 +0000 (12:09 -0400)]
Use pkg-config, if available, to locate libxml2 during configure.
If pkg-config is installed and knows about libxml2, use its information
rather than asking xml2-config. Otherwise proceed as before. This
patch allows "configure --with-libxml" to succeed on platforms that
have pkg-config but not xml2-config, which is likely to soon become
a typical situation.
The old mechanism can be forced by setting XML2_CONFIG explicitly
(hence, build processes that were already doing so will certainly
not need adjustment). Also, it's now possible to set XML2_CFLAGS
and XML2_LIBS explicitly to override both programs.
There is a small risk of this breaking existing build processes,
if there are multiple libxml2 installations on the machine and
pkg-config disagrees with xml2-config about which to use. The
only case where that seems really likely is if a builder has tried
to select a non-default xml2-config by putting it early in his PATH
rather than setting XML2_CONFIG. Plan to warn against that in the
minor release notes.
Back-patch to v10; before that we had no pkg-config infrastructure,
and it doesn't seem worth adding it for this.
Hugh McMaster and Tom Lane; Peter Eisentraut also made an earlier
attempt at this, from which I lifted most of the docs changes.
Tom Lane [Tue, 17 Mar 2020 01:05:29 +0000 (21:05 -0400)]
Avoid holding a directory FD open across assorted SRF calls.
This extends the fixes made in commit 085b6b667 to other SRFs with the
same bug, namely pg_logdir_ls(), pgrowlocks(), pg_timezone_names(),
pg_ls_dir(), and pg_tablespace_databases().
Also adjust various comments and documentation to warn against
expecting to clean up resources during a ValuePerCall SRF's final
call.
Back-patch to all supported branches, since these functions were
all born broken.
Backpatch as far back as it applies cleanly (and as far back as each
function exists). REL_10_STABLE uses different wording, but hopefully
people are not reading docs that old when writing new apps anyway.
Tom Lane [Sat, 14 Mar 2020 18:42:22 +0000 (14:42 -0400)]
Restructure polymorphic-type resolution in funcapi.c.
resolve_polymorphic_tupdesc() and resolve_polymorphic_argtypes() failed to
cover the case of having to resolve anyarray given only an anyrange input.
The bug was masked if anyelement was also used (as either input or
output), which probably helps account for our not having noticed.
While looking at this I noticed that resolve_generic_type() would produce
the wrong answer if asked to make that same resolution. ISTM that
resolve_generic_type() is confusingly defined and overly complex, so
rather than fix it, let's just make funcapi.c do the specific lookups
it requires for itself.
With this change, resolve_generic_type() is not used anywhere, so remove
it in HEAD. In the back branches, leave it alone (complete with bug)
just in case any external code is using it.
While we're here, make some other refactoring adjustments in funcapi.c
with an eye to upcoming future expansion of the set of polymorphic types:
* Simplify quick-exit tests by adding an overall have_polymorphic_result
flag. This is about a wash now but will be a win when there are more
flags.
* Reduce duplication of code between resolve_polymorphic_tupdesc() and
resolve_polymorphic_argtypes().
* Don't bother to validate correct matching of anynonarray or anyenum;
the parser should have done that, and even if it didn't, just doing
"return false" here would lead to a very confusing, off-point error
message. (Really, "return false" in these two functions should only
occur if the call_expr isn't supplied or we can't obtain data type
info from it.)
* For the same reason, throw an elog rather than "return false" if
we fail to resolve a polymorphic type.
The bug's been there since we added anyrange, so back-patch to
all supported branches.
Peter Eisentraut [Fri, 13 Mar 2020 10:28:11 +0000 (11:28 +0100)]
Preserve replica identity index across ALTER TABLE rewrite
If an index was explicitly set as replica identity index, this setting
was lost when a table was rewritten by ALTER TABLE. Because this
setting is part of pg_index but actually controlled by ALTER
TABLE (not part of CREATE INDEX, say), we have to do some extra work
to restore it.
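For example (names invented), the setting that was being lost is the one
established by:
ALTER TABLE t REPLICA IDENTITY USING INDEX t_replid_idx;
and reflected in pg_index.indisreplident.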
Based-on-patch-by: Quan Zongliang <quanzongliang@gmail.com>
Reviewed-by: Euler Taveira <euler.taveira@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/flat/c70fcab2-4866-0d9f-1d01-e75e189db342@gmail.com
Thomas Munro [Thu, 12 Mar 2020 05:06:54 +0000 (18:06 +1300)]
Fix nextXid tracking bug on standbys (9.5-11 only).
RecordKnownAssignedTransactionIds() should never move
nextXid backwards. Before this commit, that could happen
if some other code path had advanced it without advancing
latestObservedXid.
One consequence is that a well timed XLOG_CHECKPOINT_ONLINE
could cause hot standby feedback messages to get confused
and report an xmin from a future epoch, potentially allowing
vacuum to run too soon on the primary.
Repair, by making sure RecordKnownAssignedTransactionIds()
can only move nextXid forwards.
In release 12 and master, this was already done by commit 2fc7af5e,
which consolidated similar code and straightened
out this bug. Back-patch to supported releases before that.
Tom Lane [Wed, 11 Mar 2020 22:23:57 +0000 (18:23 -0400)]
Fix test case instability introduced in 085b6b667.
I forgot that the WAL directory might hold other files besides WAL
segments, notably including new segments still being filled.
That means a blind test for the first file's size being 16MB can
fail. Restrict based on file name length to make it more robust.
Peter Geoghegan [Wed, 11 Mar 2020 21:15:00 +0000 (14:15 -0700)]
Paper over bt_metap() oldest_xact bug in backbranches.
The data types that contrib/pageinspect's bt_metap() function were
declared to return as OUT arguments were wrong in some cases. In
particular, the oldest_xact column (a TransactionId/xid field) was
declared integer/int4 within the pageinspect extension's sql file. This
led to errors when an oldest_xact value that exceeded 2^31-1 was
encountered.
We cannot fix the declaration on Postgres 11 or 12. All we can do is
ameliorate the problem. Use "%d" instead of "%u" to format the output
of the oldest_xact value. This makes the C code match the declaration,
suppressing unhelpful error messages that might otherwise make
bt_metap() totally unusable. A bogus negative oldest_xact value will be
displayed instead of raising an error.
This commit addresses the same issue as master branch commit 691e8b2e18,
which actually fixed the problem. Backpatch to the 11 and 12 branches
only, since they are the only branches (other than master) that have
oldest_xact. All of the other problematic columns already display bogus
output for out of range values.
Reported-By: Victor Yegorov
Bug: #16285
Discussion: https://postgr.es/m/20200309223557.aip5n6ewln4ixbbi@alap3.anarazel.de
Backpatch: 11 and 12 only
Alvaro Herrera [Wed, 11 Mar 2020 19:54:54 +0000 (16:54 -0300)]
Add pg_dump support for ALTER obj DEPENDS ON EXTENSION
pg_dump is oblivious to this kind of dependency, so they're lost on
dump/restores (and pg_upgrade). Have pg_dump emit ALTER lines so that
they're preserved. Add some pg_dump tests for the whole thing, also.
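For example (names invented), the kind of dependency now dumped is
recorded by:
ALTER FUNCTION my_audit_fn() DEPENDS ON EXTENSION my_ext;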
Reviewed-by: Tom Lane (offlist)
Reviewed-by: Ibrar Ahmed
Reviewed-by: Ahsan Hadi (who also reviewed commit 899a04f5ed61)
Discussion: https://postgr.es/m/20200217225333.GA30974@alvherre.pgsql
Tom Lane [Wed, 11 Mar 2020 19:27:59 +0000 (15:27 -0400)]
Avoid holding a directory FD open across pg_ls_dir_files() calls.
This coding technique is undesirable because (a) it leaks the FD for
the rest of the transaction if the SRF is not run to completion, and
(b) allocated FDs are a scarce resource, but multiple interleaved
uses of the relevant functions could eat many such FDs.
In v11 and later, a query such as "SELECT pg_ls_waldir() LIMIT 1"
yields a warning about the leaked FD, and the only reason there's
no warning in earlier branches is that fd.c didn't whine about such
leaks before commit 9cb7db3f0. Even disregarding the warning, it
wouldn't be too hard to run a backend out of FDs with careless use
of these SQL functions.
Hence, rewrite the function so that it reads the directory within
a single call, returning the results as a tuplestore rather than
via value-per-call mode.
There are half a dozen other built-in SRFs with similar problems,
but let's fix this one to start with, just to see if the buildfarm
finds anything wrong with the code.
In passing, fix bogus error report for stat() failure: it was
whining about the directory when it should be fingering the
individual file. Doubtless a copy-and-paste error.
Back-patch to v10 where this function was added.
Justin Pryzby, with cosmetic tweaks and test cases by me
Alvaro Herrera [Wed, 11 Mar 2020 14:04:59 +0000 (11:04 -0300)]
Avoid duplicates in ALTER ... DEPENDS ON EXTENSION
If the command is attempted for an extension that the object already
depends on, silently do nothing.
In particular, this means that if a database containing multiple such
entries is dumped, the restore will silently do the right thing and
record just the first one. (At least, in a world where pg_dump does
dump such entries -- which it doesn't currently, but it will.)
Backpatch to 9.6, where this kind of dependency was introduced.
Reviewed-by: Ibrar Ahmed, Tom Lane (offlist)
Discussion: https://postgr.es/m/20200217225333.GA30974@alvherre.pgsql
Tom Lane [Mon, 9 Mar 2020 18:58:11 +0000 (14:58 -0400)]
Fix pg_dump/pg_restore to restore event triggers later.
Previously, event triggers were restored just after regular triggers
(and FK constraints, which are basically triggers). This is risky
since an event trigger, once installed, could interfere with subsequent
restore commands. Worse, because event triggers don't have any
particular dependencies on any post-data objects, a parallel restore
would consider them eligible to be restored the moment the post-data
phase starts, allowing them to also interfere with restoration of a
whole bunch of objects that would have been restored before them in
a serial restore. There's no way to completely remove the risk of a
misguided event trigger breaking the restore, since if nothing else
it could break other event triggers. But we can certainly push them
to later in the process to minimize the hazard.
To fix, tweak the RestorePass mechanism introduced by commit 3eb9a5e7c
so that event triggers are handled as part of the post-ACL processing
pass (renaming the "REFRESH" pass to "POST_ACL" to reflect its more
general use). This will cause them to restore after everything except
matview refreshes, which seems OK since matview refreshes really ought
to run in the post-restore state of the database. In a parallel
restore, event triggers and matview refreshes might be intermixed,
but that seems all right as well.
Also update the code and comments in pg_dump_sort.c so that its idea
of how things are sorted agrees with what actually happens due to
the RestorePass mechanism. This is mostly cosmetic: it'll affect the
order of objects in a dump's TOC, but not the actual restore order.
But not changing that would be quite confusing to somebody reading
the code.
Fujii Masao [Mon, 9 Mar 2020 15:14:43 +0000 (00:14 +0900)]
Fix bug that causes to report waiting in PS display twice, in hot standby.
Previously "waiting" could appear twice via PS in case of lock conflict
in hot standby mode. Specifically, this issue happened when the delay
in WAL application determined by max_standby_archive_delay and
max_standby_streaming_delay had passed but it took more than 500 msec
to cancel all the conflicting transactions. In particular, this can
easily be observed by setting those delay parameters to -1.
The cause of this issue was that WaitOnLock() and
ResolveRecoveryConflictWithVirtualXIDs() added "waiting" to
the process title in that case. This commit prevents
ResolveRecoveryConflictWithVirtualXIDs() from reporting waiting
in case of lock conflict, to fix the bug.
Fujii Masao [Mon, 9 Mar 2020 06:31:31 +0000 (15:31 +0900)]
Avoid assertion failure with targeted recovery in standby mode.
At the end of recovery, standby mode is turned off to re-fetch the last
valid record from archive or pg_wal. Previously, if recovery target was
reached and standby mode was turned off while the current WAL source
was stream, recovery could unexpectedly try to retrieve the WAL file
containing the last valid record from stream even though it was no
longer in standby mode. This caused an assertion failure: the assertion
checks that a WAL file must not be retrieved from stream when standby
mode is not enabled.
This commit moves back the current WAL source to archive if it's stream
even though not in standby mode, to avoid that assertion failure.
This issue doesn't cause the server to crash when built with assertions
disabled. In that case, the attempt to retrieve a WAL file from stream
while not in standby mode just fails, and recovery then tries to
retrieve the WAL file from archive or pg_wal.
Michael Paquier [Tue, 3 Mar 2020 04:56:34 +0000 (13:56 +0900)]
Fix assertion failure with ALTER TABLE ATTACH PARTITION and indexes
Using ALTER TABLE ATTACH PARTITION causes an assertion failure when
attempting to work on a partitioned index, because partitioned indexes
cannot have partition bounds.
The grammar of ALTER TABLE ATTACH PARTITION requires partition bounds,
but not ALTER INDEX, so mixing ALTER TABLE with partitioned indexes is
confusing. Hence, on HEAD, prevent ALTER TABLE to attach a partition if
the relation involved is a partitioned index. On back-branches, as
applications may rely on the existing behavior, just remove the
culprit assertion.
Reported-by: Alexander Lakhin
Author: Amit Langote, Michael Paquier
Discussion: https://postgr.es/m/16276-5cd1dcc8fb8be7b5@postgresql.org
Backpatch-through: 11
Tom Lane [Sat, 29 Feb 2020 18:48:10 +0000 (13:48 -0500)]
Correctly re-use hash tables in buildSubPlanHash().
Commit 356687bd8 omitted to remove leftover code for destroying
a hashed subplan's hash tables, with the result that the tables
were always rebuilt not reused; this leads to severe memory
leakage if a hashed subplan is re-executed enough times.
Moreover, the code for reusing the hashnulls table had a typo
that would have made it do the wrong thing if it were reached.
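For example (names invented), a query such as
SELECT * FROM orders
WHERE customer_id NOT IN (SELECT id FROM blocked_customers);
is typically planned with a hashed SubPlan, whose hash tables were being
rebuilt instead of reused.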
Looking at the code coverage report shows severe under-coverage
of the potential callers of ResetTupleHashTable, so add some test
cases that exercise them.
Andreas Karlsson and Tom Lane, per reports from Ranier Vilela
and Justin Pryzby.
Tom Lane [Sat, 29 Feb 2020 01:28:34 +0000 (20:28 -0500)]
Avoid failure if autovacuum tries to access a just-dropped temp namespace.
Such an access became possible when commit 246a6c8f7 added more
aggressive cleanup of orphaned temp relations by autovacuum.
Since autovacuum's snapshot might be slightly stale, it could
attempt to access an already-dropped temp namespace, resulting in
an assertion failure or null-pointer dereference. (In practice,
since we don't drop temp namespaces automatically but merely
recycle them, this situation could only arise if a superuser does
a manual drop of a temp namespace. Still, that should be allowed.)
The core of the bug, IMO, is that isTempNamespaceInUse and its callers
failed to think hard about whether to treat "temp namespace isn't there"
differently from "temp namespace isn't in use". In hopes of forestalling
future mistakes of the same ilk, replace that function with a new one
checkTempNamespaceStatus, which makes the same tests but returns a
three-way enum rather than just a bool. isTempNamespaceInUse is gone
entirely in HEAD; but just in case some external code is relying on it,
keep it in the back branches, as a bug-compatible wrapper around the
new function.
Per report originally from Prabhat Kumar Sahu, investigated by Mahendra
Singh and Michael Paquier; the final form of the patch is my fault.
This replaces the failed fix attempt in a052f6cbb.
Tom Lane [Fri, 28 Feb 2020 16:29:59 +0000 (11:29 -0500)]
Doc: correct thinko in pg_buffercache documentation.
Access to this module is granted to the pg_monitor role, not
pg_read_all_stats. (Given the view's performance impact,
it seems wise to be restrictive, so I think this was the
correct decision --- and anyway it was clearly intentional.)
Michael Paquier [Thu, 27 Feb 2020 12:58:50 +0000 (21:58 +0900)]
Remove TAP test for createdb --lc-ctype
OpenBSD falls back to "C" when using an incorrect input with setlocale()
and LC_CTYPE, causing this test, introduced by 008cf04, to fail. This
removes the culprit test to avoid the portability issue.
Per report from Robert Haas, via buildfarm member curculio.
Michael Paquier [Thu, 27 Feb 2020 06:31:52 +0000 (15:31 +0900)]
Skip foreign tablespaces when running pg_checksums/pg_verify_checksums
Attempting to use pg_checksums (pg_verify_checksums in 11) on a data
folder which includes tablespace paths used across multiple major
versions would cause pg_checksums to scan all directories present in
pg_tblspc, and not only those marked with TABLESPACE_VERSION_DIRECTORY. This
could lead to failures when for example running sanity checks on an
upgraded instance with --check. Even worse, it was possible to rewrite
on-disk pages with --enable for a cluster potentially online.
This commit makes pg_checksums skip any directories not named
TABLESPACE_VERSION_DIRECTORY, similarly to what is done for base
backups.
Reported-by: Michael Banck
Author: Michael Banck, Bernd Helmle
Discussion: https://postgr.es/m/62031974fd8e941dd8351fbc8c7eff60d59c5338.camel@credativ.de
backpatch-through: 11
Michael Paquier [Thu, 27 Feb 2020 02:21:07 +0000 (11:21 +0900)]
createdb: Fix quoting of --encoding, --lc-ctype and --lc-collate
The original coding failed to properly quote those arguments, leading to
failures when using quotes in the values used. As the quoting can be
encoding-sensitive, the connection to the backend needs to be taken
before applying the correct quoting.
Author: Michael Paquier
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/20200214041004.GB1998@paquier.xyz
Backpatch-through: 9.5
Alvaro Herrera [Wed, 26 Feb 2020 22:57:14 +0000 (19:57 -0300)]
Fix docs regarding AFTER triggers on partitioned tables
In commit 86f575948c77 I forgot to update the trigger.sgml paragraph
that needs to explain that AFTER triggers are allowed in partitioned
tables. Do so now.
Michael Paquier [Mon, 24 Feb 2020 09:14:22 +0000 (18:14 +0900)]
Add prefix checks in exclude lists for pg_rewind, pg_checksums and base backups
An instance of PostgreSQL crashing with a bad timing could leave behind
temporary pg_internal.init files, potentially causing failures when
verifying checksums. As the same exclusion lists are used between
pg_rewind, pg_checksums and basebackup.c, all those tools are extended
with prefix checks to keep everything in sync, with dedicated checks
added for pg_internal.init.
Backpatch down to 11, where pg_checksums (pg_verify_checksums in 11) and
checksum verification for base backups have been introduced.
Reported-by: Michael Banck
Author: Michael Paquier
Reviewed-by: Kyotaro Horiguchi, David Steele
Discussion: https://postgr.es/m/62031974fd8e941dd8351fbc8c7eff60d59c5338.camel@credativ.de
Backpatch-through: 11