Tom Lane [Fri, 29 Sep 2023 18:07:30 +0000 (14:07 -0400)]
Suppress macOS warnings about duplicate libraries in link commands.
As of Xcode 15 (macOS Sonoma), the linker complains about duplicate
references to the same library. We see warnings about libpgport and
libpgcommon being duplicated in many client executables. This is a
consequence of the hack introduced in commit 6b7ef076b to list
libpgport before libpq while not removing it from $(LIBS).
(Commit 8396447cd later applied the same rule to libpgcommon.)
The concern in 6b7ef076b was to ensure that the client executable
wouldn't unintentionally depend on pgport functions from libpq.
That concern is obsolete on any platform for which we can do symbol
export control, because if we can then the pgport functions in libpq
won't be exposed anyway. Hence, we can fix this problem by just
removing libpgport and libpgcommon from $(libpq_pgport), and letting
clients depend on the occurrences in $(LIBS).
In the back branches, do that only on macOS (which we know has
symbol export control). In HEAD, let's be more aggressive and
remove the extra libraries everywhere. The only still-supported
platforms that lack export control are MinGW/Cygwin, and it
doesn't seem worth sweating over ABI stability details for those
(or if somebody does care, it'd probably be possible to perform
symbol export control for those too). As well as being simpler,
this might give some microscopic improvement in build time.
The meson build system is not changed here, as it doesn't have
this particular disease, though it does have some related issues
that we'll fix separately.
doc: Change statistics function xref to the right target
Commit 7d3b7011b added a link to the statistics functions, which at the
time were anchored under the section for statistics views. aebe989477a
added a separate section for statistics functions, but the link was not
updated to point to the new anchor. Fix by changing the xref.
Backpatch to all supported branches.
Author: Peter Smith <peter.b.smith@fujitsu.com>
Discussion: https://postgr.es/m/CAHut+Ptr0jKzNNtWnssLq+3jNhbyaBseqf6NPrWHk08mQFRoTg@mail.gmail.com
Backpatch-through: 11
nbtree's mark/restore processing failed to correctly handle an edge case
involving array key advancement and related search-type scan key state.
Scans with ScalarArrayOpExpr quals requiring mark/restore
processing (for a merge join) could incorrectly conclude that an
affected array/scan key must not have advanced during the time between
marking and restoring the scan's position.
As a result of all this, array key handling within btrestrpos could skip
a required call to _bt_preprocess_keys(). This confusion allowed later
primitive index scans to overlook tuples matching the true current array
keys. The scan's search-type scan keys would still have spurious values
corresponding to the final array element(s) -- not values matching the
first/now-current array element(s).
To fix, remember that "array key wraparound" has taken place during the
ongoing btrescan in a flag variable stored in the scan's state, and use
that information at the point where btrestrpos decides if another call
to _bt_preprocess_keys is required.
Oversight in commit 70bc5833, which taught nbtree to handle array keys
during mark/restore processing, but missed this subtlety. That commit
was itself a bug fix for an issue in commit 9e8da0f7, which taught
nbtree to handle ScalarArrayOpExpr quals natively.
Tom Lane [Thu, 28 Sep 2023 18:05:26 +0000 (14:05 -0400)]
Fix checking of index expressions in CompareIndexInfo().
This code was sloppy about comparison of index columns that
are expressions. It didn't reliably reject cases where one
index has an expression where the other has a plain column,
and it could index off the start of the attmap array, leading
to a Valgrind complaint (though an actual crash seems unlikely).
I'm not sure that the expression-vs-column sloppiness leads
to any visible problem in practice, because the subsequent
comparison of the two expression lists would reject cases
where the indexes have different numbers of expressions
overall. Maybe we could falsely match indexes having the
same expressions in different column positions, but it'd
require unlucky contents of the word before the attmap array.
It's not too surprising that no problem has been reported
from the field. Nonetheless, this code is clearly wrong.
Per bug #18135 from Alexander Lakhin. Back-patch to all
supported branches.
Tom Lane [Wed, 27 Sep 2023 01:06:21 +0000 (21:06 -0400)]
Stop using "-multiply_defined suppress" on macOS.
We started to use this linker switch in commit 9df308697 of
2004-07-13, which was in the OS X 10.3 era. Apparently it's been a
no-op since around OS X 10.9. Apple's most recent toolchain version
actively complains about it, so it's time to get rid of it.
Andres Freund [Mon, 25 Sep 2023 18:50:02 +0000 (11:50 -0700)]
pg_dump: tests: Correct test condition for invalid databases
For some reason I used not_like = { pg_dumpall_dbprivs => 1, } in the test
condition of one of the tests added in c66a7d75e65. That doesn't make sense
for two reasons: 1) not_like isn't a valid test condition; 2) the database
should not be dumped in any of the tests. Due to 1), the test achieved its
goal, but clearly the formulation is confusing. Instead use like => {}, with
a comment explaining why.
Reported-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/3ddf79f2-8b7b-a093-11d2-5c739bc64f86@eisentraut.org
Backpatch: 11-, like c66a7d75e65
Tom Lane [Mon, 25 Sep 2023 18:41:57 +0000 (14:41 -0400)]
Collect dependency information for parsed CallStmts.
Parse analysis of a CallStmt will inject mutable information,
for instance the OID of the called procedure, so that subsequent
DDL may create a need to re-parse the CALL. We failed to detect
this for CALLs in plpgsql routines, because no dependency information
was collected when putting a CallStmt into the plan cache. That
could lead to misbehavior or strange errors such as "cache lookup
failed".
Before commit ee895a655, the issue would only manifest for CALLs
appearing in atomic contexts, because we re-planned non-atomic
CALLs every time through anyway.
It is now apparent that extract_query_dependencies() probably
needs a special case for every utility statement type for which
stmt_requires_parse_analysis() returns true. I wanted to add
something like Assert(!stmt_requires_parse_analysis(...)) when
falling out of extract_query_dependencies_walker without doing
anything, but there are API issues as well as a more fundamental
point: stmt_requires_parse_analysis is supposed to be applied to
raw parser output, so it'd be cheating to assume it will give the
correct answer for post-parse-analysis trees. I contented myself
with adding a comment.
Per bug #18131 from Christian Stork. Back-patch to all supported
branches.
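As an illustrative sketch (object names hypothetical), the failure
mode looked roughly like this within a single session before the fix:
    CREATE PROCEDURE p() LANGUAGE plpgsql AS $$ BEGIN NULL; END $$;
    CREATE FUNCTION f() RETURNS void LANGUAGE plpgsql
        AS $$ BEGIN CALL p(); END $$;
    SELECT f();          -- plpgsql caches a plan containing the CALL
    DROP PROCEDURE p();
    CREATE PROCEDURE p() LANGUAGE plpgsql AS $$ BEGIN NULL; END $$;
    SELECT f();          -- could fail with "cache lookup failed"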
Tom Lane [Mon, 25 Sep 2023 15:50:28 +0000 (11:50 -0400)]
Limit to_tsvector_byid's initial array allocation to something sane.
The initial estimate of the number of distinct ParsedWords is just
that: an estimate. Don't let it exceed what palloc is willing to
allocate. If in fact we need more entries, we'll eventually fail
trying to enlarge the array. But if we don't, this allows success on
inputs that currently draw "invalid memory alloc request size".
Per bug #18080 from Uwe Binder. Back-patch to all supported branches.
Tom Lane [Fri, 22 Sep 2023 18:52:36 +0000 (14:52 -0400)]
Doc: copy-edit the introductory para for the pg_class catalog.
The previous wording had a faint archaic whiff to it, and more
importantly used "catalogs" as a verb, which while cutely
self-referential seems likely to provoke confusion in this
particular context. Also consistently use "kind" not "type" to
refer to the different kinds of relations distinguished by relkind.
Per gripe from Martin Nash. Back-patch to supported versions.
The comment introduced by commit e7cb7ee14 was a bit too terse, which
could lead to extensions doing different things within the hook function
than we intend to allow. Extend the comment to explain what they can do
within the hook function.
Back-patch to all supported branches.
In passing, I rephrased a nearby comment that I recently added to the
back branches.
Michael Paquier [Mon, 18 Sep 2023 23:31:31 +0000 (08:31 +0900)]
Fix assertion failure with PL/Python exceptions
PLy_elog() was not able to correctly handle cases where a SPI call
failed, which would fill in a DETAIL string able to trigger an
assertion. We may want to improve this infrastructure so that it is able
to provide any extra detail information provided by an error stack, but
this is left as a future improvement as it could impact existing error
stacks and any applications that depend on them. For now, the assertion
is removed and a regression test is added to cover the case of a failure
with a detail string.
This problem exists since 2bd78eb8d51c, so backpatch all the way down
with tweaks to the regression tests output added where required.
Author: Alexander Lakhin
Discussion: https://postgr.es/m/18070-ab9c171cbf4ebb0f@postgresql.org
Backpatch-through: 11
Tom Lane [Mon, 18 Sep 2023 18:27:47 +0000 (14:27 -0400)]
Don't crash if cursor_to_xmlschema is used on a non-data-returning Portal.
cursor_to_xmlschema() assumed that any Portal must have a tupDesc,
which is not so. Add a defensive check.
It's plausible that this mistake occurred because of the rather
poorly chosen name of the lookup function SPI_cursor_find(),
which in such cases is returning something that isn't very much
like a cursor. Add some documentation to try to forestall future
errors of the same ilk.
Report and patch by Boyu Yang (docs changes by me). Back-patch
to all supported branches.
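For reference, ordinary usage looks like the sketch below (cursor name
hypothetical); with the fix, naming a Portal that returns no data draws
a clean error rather than a crash:
    BEGIN;
    DECLARE c CURSOR FOR SELECT 1 AS x;
    SELECT cursor_to_xmlschema('c', true, false, '');
    COMMIT;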
Tom Lane [Fri, 15 Sep 2023 21:01:26 +0000 (17:01 -0400)]
Track nesting depth correctly when drilling down into RECORD Vars.
expandRecordVariable() failed to adjust the parse nesting structure
correctly when recursing to inspect an outer-level Var. This could
result in assertion failures or core dumps in corner cases.
Likewise, get_name_for_var_field() failed to adjust the deparse
namespace stack correctly when recursing to inspect an outer-level
Var. In this case the likely result was a "bogus varno" error
while deparsing a view.
Per bug #18077 from Jingzhou Fu. Back-patch to all supported
branches.
Tom Lane [Fri, 15 Sep 2023 20:39:27 +0000 (16:39 -0400)]
Fix get_expr_result_type() to find field names for RECORD Consts.
This is a back-patch of commit d57534740 ("Fix EXPLAIN of SEARCH
BREADTH FIRST with a constant initial value") into pre-v14 branches.
At the time I'd thought it was not needed in branches that lack the
SEARCH/CYCLE feature, but that was just a failure of imagination.
It's possible to demonstrate "record type has not been registered"
failures in older branches too, during deparsing of views that contain
references to fields of composite constants.
Back-patch only the code changes, as the test cases added by d57534740
all require SEARCH/CYCLE syntax. A suitable test case will be added
in the upcoming fix for bug #18077.
Tom Lane [Fri, 15 Sep 2023 20:20:08 +0000 (16:20 -0400)]
Allow extracting fields from a ROW() expression in more cases.
Teach get_expr_result_type() to manufacture a tuple descriptor directly
from a RowExpr node. If the RowExpr has type RECORD, this is the only
way to get a tupdesc for its result, since even if the rowtype has been
blessed, we don't have its typmod available at this point. (If the
RowExpr has some named composite type, we continue to let the existing
code handle it, since the RowExpr might well not have the correct column
names embedded in it.)
This fixes assorted corner cases illustrated by the added regression
tests.
This is a back-patch of the v13-era commit 8b7a0f1d1 into previous
branches. At the time I'd judged it not important enough to back-patch,
but the upcoming fix for bug #18077 includes a test case that depends
on this working correctly; and 8b7a0f1d1 has now aged long enough to
have good confidence that it won't break anything.
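A minimal sketch of the kind of case this enables; an anonymous ROW()
has type RECORD, and its columns are named f1, f2, and so on:
    SELECT (ROW(1, 2.5)).f1;    -- tupdesc is now built from the RowExpr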
Michael Paquier [Thu, 14 Sep 2023 07:00:47 +0000 (16:00 +0900)]
Revert "Improve error message on snapshot import in snapmgr.c"
This reverts commit a0d87bcd9b57, following a remark from Andres Freund
that the new error can be triggered with an incorrect SET TRANSACTION
SNAPSHOT command without being really helpful for the user as it uses
the internal file name.
Michael Paquier [Thu, 14 Sep 2023 01:30:37 +0000 (10:30 +0900)]
Improve error message on snapshot import in snapmgr.c
When a snapshot file fails to be read in ImportSnapshot(), it would
issue an ERROR as "invalid snapshot identifier" when opening a stream
for it in read-only mode. This error message is reworded to be the same
as all the other messages used in this case on failure, which is useful
when debugging this area.
Thinko introduced by bb446b689b66 where snapshot imports have been
added. A backpatch down to 11 is done as this can improve any work
related to snapshot imports in older branches.
Author: Bharath Rupireddy
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/CALj2ACWmr=3KdxDkm8h7Zn1XxBoF6hdzq8WQyMn2y1OL5RYFrg@mail.gmail.com
Backpatch-through: 11
Thomas Munro [Wed, 13 Sep 2023 02:32:24 +0000 (14:32 +1200)]
Fix exception safety bug in typcache.c.
If an out-of-memory error was thrown at an unfortunate time,
ensure_record_cache_typmod_slot_exists() could leak memory and leave
behind a global state that produced an infinite loop on the next call.
Fix by merging RecordCacheArray and RecordIdentifierArray into a single
array. With only one allocation or re-allocation, there is no
intermediate state.
Back-patch to all supported releases.
Reported-by: "James Pang (chaolpan)" <chaolpan@cisco.com> Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/PH0PR11MB519113E738814BDDA702EDADD6EFA%40PH0PR11MB5191.namprd11.prod.outlook.com
Amit Kapila [Tue, 12 Sep 2023 04:06:56 +0000 (09:36 +0530)]
Fix uninitialized access to InitialRunningXacts during decoding after ERROR.
The transactions and subtransactions array that was allocated under
snapshot builder memory context and recorded during decoding was not
cleared in case of errors. This can result in an assertion failure if we
attempt to retry logical decoding within the same session. To address this
issue, we register a callback function under the snapshot builder memory
context to clear the recorded transactions and subtransactions array along
with the context.
This problem doesn't exist in PG16 and HEAD as instead of using
InitialRunningXacts, we added the list of transaction IDs and
sub-transaction IDs, that have modified catalogs and are running during
snapshot serialization, to the serialized snapshot (see commit 7f13ac8123).
Author: Hou Zhijie
Reviewed-by: Amit Kapila
Backpatch-through: 11
Discussion: http://postgr.es/m/18055-ab3beed9f4b7b7d6@postgresql.org
Michael Paquier [Thu, 7 Sep 2023 05:12:36 +0000 (14:12 +0900)]
pg_basebackup: Generate valid temporary slot names under PQbackendPID()
pgbouncer can cause PQbackendPID() to return negative values due to it
filling be_pid with random bytes (even these days pid_max can only be
set up to 2^22 on 64b machines on Linux, for example, so this cannot
happen with normal PID numbers). When this happens, pg_basebackup may
generate a temporary slot name that may not be accepted by the parser,
leading to spurious failures, like:
pg_basebackup: error: could not send replication command
ERROR: replication slot name "pg_basebackup_-1201966863" contains
invalid character
This commit fixes that problem by formatting the result from
PQbackendPID() as an unsigned integer when creating the temporary
replication slot name, so as the invalid character is gone and the
command can be parsed.
Michael Paquier [Mon, 4 Sep 2023 05:55:58 +0000 (14:55 +0900)]
Fix out-of-bound read in gtsvector_picksplit()
This could lead to an imprecise choice when splitting an index page of a
GiST index on a tsvector, deciding which entries should remain on the
old page and which entries should move to a new page.
This has been wrong since tsearch2 was moved into core with commit
140d4ebcb46e, so backpatch all the way down. This error has been
spotted by valgrind.
Author: Alexander Lakhin
Discussion: https://postgr.es/m/17950-6c80a8d2b94ec695@postgresql.org
Backpatch-through: 11
Tom Lane [Thu, 24 Aug 2023 16:02:40 +0000 (12:02 -0400)]
Avoid unnecessary plancache revalidation of utility statements.
Revalidation of a plancache entry (after a cache invalidation event)
requires acquiring a snapshot. Normally that is harmless, but not
if the cached statement is one that needs to run without acquiring a
snapshot. We were already aware of that for TransactionStmts,
but for some reason hadn't extrapolated to the other statements that
PlannedStmtRequiresSnapshot() knows mustn't set a snapshot. This can
lead to unexpected failures of commands such as SET TRANSACTION
ISOLATION LEVEL. We can fix it in the same way, by excluding those
command types from revalidation.
However, we can do even better than that: there is no need to
revalidate for any statement type for which parse analysis, rewrite,
and plan steps do nothing interesting, which is nearly all utility
commands. To mechanize this, invent a parser function
stmt_requires_parse_analysis() that tells whether parse analysis does
anything beyond wrapping a CMD_UTILITY Query around the raw parse
tree. If that's what it does, then rewrite and plan will just
skip the Query, so that it is not possible for the same raw parse
tree to produce a different plan tree after cache invalidation.
stmt_requires_parse_analysis() is basically equivalent to the
existing function analyze_requires_snapshot(), except that for
obscure reasons that function omits ReturnStmt and CallStmt.
It is unclear whether those were oversights or intentional.
I have not been able to demonstrate a bug from not acquiring a
snapshot while analyzing these commands, but at best it seems mighty
fragile. It seems safer to acquire a snapshot for parse analysis of
these commands too, which allows making stmt_requires_parse_analysis
and analyze_requires_snapshot equivalent.
In passing this fixes a second bug, which is that ResetPlanCache
would exclude ReturnStmts and CallStmts from revalidation.
That's surely *not* safe, since they contain parsable expressions.
Per bug #18059 from Pavel Kulakov. Back-patch to all supported
branches.
Andrew Dunstan [Tue, 22 Aug 2023 15:57:08 +0000 (11:57 -0400)]
Cache by-reference missing values in a long lived context
Attribute missing values might be needed past the lifetime of the tuple
descriptors from which they are extracted. To avoid possibly using
pointers for by-reference values which might thus be left dangling, we
cache a datumCopy'd version of the datum in the TopMemoryContext. Since
we first search for the value, this only needs to be done once per
session for any such value.
Original complaint from Tom Lane, idea for mitigation by Andrew Dunstan,
tweaked by Tom Lane.
Backpatch to version 11 where missing values were introduced.
doc: Replace list of drivers and PLs with wiki link
The list of external language drivers and procedural languages was
never complete or exhaustive, and rather than attempting to manage
it the content has migrated to the wiki. This replaces the tables
altogether with links to the wiki as we regularly get requests for
adding various projects, which we reject without any clear policy
for why or how the content should be managed.
The threads linked to below are the most recent discussions about
this; the archives contain many more.
Backpatch to all supported branches since the list on the wiki
applies to all branches.
The fix itself is fine, but the test revealed other problems related
to parallel query that are not easily fixable. Remove the test for
now to fix the buildfarm.
Noah Misch [Mon, 7 Aug 2023 13:05:56 +0000 (06:05 -0700)]
Reject substituting extension schemas or owners matching ["$'\].
Substituting such values in extension scripts facilitated SQL injection
when @extowner@, @extschema@, or @extschema:...@ appeared inside a
quoting construct (dollar quoting, '', or ""). No bundled extension was
vulnerable. Vulnerable uses do appear in a documentation example and in
non-bundled extensions. Hence, the attack prerequisite was an
administrator having installed files of a vulnerable, trusted,
non-bundled extension. Subject to that prerequisite, this enabled an
attacker having database-level CREATE privilege to execute arbitrary
code as the bootstrap superuser. By blocking this attack in the core
server, there's no need to modify individual extensions. Back-patch to
v11 (all supported versions).
Reported by Micah Gate, Valerie Woolard, Tim Carey-Smith, and Christoph
Berg.
Disallow replacing joins with scans in problematic cases.
Commit e7cb7ee14, which introduced the infrastructure for FDWs and
custom scan providers to replace joins with scans, failed to add
handling of pseudoconstant quals assigned to replaced joins in
createplan.c, leading to an incorrect plan without a gating Result node
when postgres_fdw replaced a join with such a qual.
To fix, we could add the support by 1) modifying the ForeignPath and
CustomPath structs to store the list of RestrictInfo nodes to apply to
the join, as in JoinPaths, if they represent foreign and custom scans
replacing a join with a scan, and by 2) modifying create_scan_plan() in
createplan.c to use that list in that case, instead of the
baserestrictinfo list, to get pseudoconstant quals assigned to the join;
but #1 would cause an ABI break. So fix by modifying the infrastructure
to just disallow replacing joins with such quals.
Back-patch to all supported branches.
Reported by Nishant Sharma. Patch by me, reviewed by Nishant Sharma and
Richard Guo.
Tom Lane [Thu, 27 Jul 2023 16:07:48 +0000 (12:07 -0400)]
Raise fixed token-length limit in hba.c.
Historically, hba.c limited tokens in the authentication configuration
files (pg_hba.conf and pg_ident.conf) to less than 256 bytes. We have
seen a few reports of this limit causing problems; notably, for
moderately-complex LDAP configurations. Increase the limit to 10240
bytes as a low-risk stop-gap solution.
In v13 and earlier, this also requires raising MAX_LINE, the limit
on overall line length. I'm hesitant to make this code consume
too much stack space, so I only raised that to 20480 bytes.
Tom Lane [Wed, 19 Jul 2023 15:00:34 +0000 (11:00 -0400)]
Doc: improve description of IN and row-constructor comparisons.
IN and NOT IN work fine on records and arrays, so just say that
they accept "expressions" not "scalar expressions". I think that
that phrasing was meant to say that they don't work on set-returning
expressions, but that's not the common meaning of "scalar".
Revise the description of row-constructor comparisons to make it
perhaps a bit less confusing. (This partially reverts some
dubious wording changes made by commit f56651519.)
Per gripe from Ilya Nenashev. Back-patch to supported branches.
In HEAD and v16, also drop a NOTE about pre-8.2 behavior, which
is hopefully no longer of interest to anybody.
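For instance, all of the following are accepted, row constructors being
compared field by field from left to right:
    SELECT (1, 2) IN ((1, 2), (3, 4));                 -- true
    SELECT ARRAY[1, 2] IN (ARRAY[3, 4], ARRAY[1, 2]);  -- true
    SELECT ROW(1, 2) < ROW(1, 3);                      -- true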
Tom Lane [Tue, 18 Jul 2023 15:59:39 +0000 (11:59 -0400)]
Doc: fix out-of-date example of SPI usage.
The "count" argument of SPI_exec() only limits execution when
the query is actually returning rows. This was not the case
before PG 9.0, so this example was correct when written; but
we missed updating it in commit 2ddc600f8. Extend the example
to show the behavior both with and without RETURNING.
While here, improve the commentary and markup for the rest
of the example.
David G. Johnston and Tom Lane, per report from Curt Kolovson.
Back-patch to all supported branches.
Michael Paquier [Tue, 18 Jul 2023 05:04:54 +0000 (14:04 +0900)]
Fix indentation in twophase.c
This has been missed in cb0cca1, noticed before buildfarm member koel
was able to complain while poking at a different patch. Like the
other commit, backpatch all the way down to limit the odds of merge
conflicts.
Michael Paquier [Tue, 18 Jul 2023 04:44:35 +0000 (13:44 +0900)]
Fix recovery of 2PC transaction during crash recovery
A crash in the middle of a checkpoint with some two-phase state data
already flushed to disk by this checkpoint could cause a follow-up crash
recovery to recover twice the same transaction, once from what has been
found in pg_twophase/ at the beginning of recovery and a second time
when replaying its corresponding record.
This would lead to FATAL failures in the startup process during
recovery, where the same transaction would have a state recovered twice
instead of once:
LOG: recovering prepared transaction 731 from shared memory
LOG: recovering prepared transaction 731 from shared memory
FATAL: lock ExclusiveLock on object 731/0/0 is already held
This issue is fixed by skipping the addition of any 2PC state coming
from a record whose equivalent 2PC state file has already been loaded in
TwoPhaseState at the beginning of recovery by restoreTwoPhaseData(),
which is OK as long as the system has not reached a consistent state.
The timing required to mess up recovery processing is very narrow, and
would be very unlikely to happen in practice. The thread that has reported the issue has
demonstrated the bug using injection points to force a PANIC in the
middle of a checkpoint.
Issue introduced in 728bd99, so backpatch all the way down.
Michael Paquier [Fri, 14 Jul 2023 02:16:13 +0000 (11:16 +0900)]
Add indisreplident to fields refreshed by RelationReloadIndexInfo()
RelationReloadIndexInfo() is a fast-path used for index reloads in the
relation cache, and it has always forgotten about updating
indisreplident, which is something that would happen after an index is
selected for a replica identity. This can lead to incorrect cache
information provided when executing a command in a transaction context
that updates indisreplident.
None of the code paths currently on HEAD that need to check
pg_index.indisreplident fetch its value from the relation cache; they
always rely on a fresh copy from the syscache. Unfortunately, this may
not be the case for out-of-core code, which could see an out-of-date value.
Author: Shruthi Gowda
Reviewed-by: Robert Haas, Dilip Kumar, Michael Paquier
Discussion: https://postgr.es/m/CAASxf_PBcxax0wW-3gErUyftZ0XrCs3Lrpuhq4-Z3Fak1DoW7Q@mail.gmail.com
Backpatch-through: 11
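A hedged sketch of a sequence that updates indisreplident inside a
transaction (table and index names hypothetical):
    CREATE TABLE t (id int NOT NULL);
    CREATE UNIQUE INDEX t_id_idx ON t (id);
    BEGIN;
    ALTER TABLE t REPLICA IDENTITY USING INDEX t_id_idx;
    SELECT indexrelid::regclass, indisreplident
        FROM pg_index WHERE indrelid = 't'::regclass;
    COMMIT;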
Michael Paquier [Fri, 14 Jul 2023 01:13:22 +0000 (10:13 +0900)]
Fix updates of indisvalid for partitioned indexes
indisvalid is switched to true for a partitioned index when all of its
partitions have valid indexes at the time a new partition is attached,
up to the top-most parent if all its leaves are themselves valid when
dealing with multiple layers of partitions.
The copy of the tuple from pg_index used to switch indisvalid to true
came from the relation cache, which is incorrect. Particularly, in the
case reported by Shruthi Gowda, executing a series of commands in a
single transaction would cause the validation of partitioned indexes to
use an incorrect version of a pg_index tuple, as indexes are reloaded
after an invalidation request with RelationReloadIndexInfo(), a much
faster version than a full index cache rebuild. In this case, the
limited information updated in the cache leads to an incorrect version
of the tuple used. One of the symptoms reported was the following
error, with a replica identity update, for instance:
"ERROR: attempted to update invisible tuple"
This is incorrect since 8b08f7d, so backpatch all the way down.
Andres Freund [Thu, 13 Jul 2023 20:03:37 +0000 (13:03 -0700)]
Handle DROP DATABASE getting interrupted
Until now, when DROP DATABASE got interrupted at the wrong moment, the removal
of the pg_database row would also roll back, even though some irreversible
steps have already been taken. E.g. DropDatabaseBuffers() might have thrown
out dirty buffers, or files could have been unlinked. But we continued to
allow connections to such a corrupted database.
To fix this, mark databases invalid with an in-place update, just before
starting to perform irreversible steps. As we can't add a new column in the
back branches, we use pg_database.datconnlimit = -2 for this purpose.
An invalid database cannot be connected to anymore, but can still be
dropped.
Unfortunately we can't easily add output to psql's \l to indicate that some
database is invalid, it doesn't fit in any of the existing columns.
Add tests verifying that an interrupted DROP DATABASE is handled correctly in
the backend and in various tools.
Reported-by: Evgeny Morozov <postgresql3@realityexists.net>
Author: Andres Freund <andres@anarazel.de>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/20230509004637.cgvmfwrbht7xm7p6@awork3.anarazel.de
Discussion: https://postgr.es/m/20230314174521.74jl6ffqsee5mtug@awork3.anarazel.de
Backpatch: 11-, bug present in all supported versions
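In the back branches, a database left invalid by an interrupted DROP
DATABASE can therefore be spotted with a query along these lines:
    SELECT datname FROM pg_database WHERE datconnlimit = -2;
    -- any such database cannot be connected to, but can be dropped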
Andres Freund [Thu, 13 Jul 2023 20:03:37 +0000 (13:03 -0700)]
Release lock after encountering bogus row in vac_truncate_clog()
When vac_truncate_clog() encounters bogus datfrozenxid / datminmxid values, it
returns early. Unfortunately, until now, it did not release
WrapLimitsVacuumLock. If the backend later tries to acquire
WrapLimitsVacuumLock, the session / autovacuum worker hangs in an
uncancellable way. Similarly, other sessions will hang waiting for the
lock. However, if the backend holding the lock exited or errored out for some
reason, the lock was released.
The bug was introduced as a side effect of 566372b3d643.
It is interesting that there are no production reports of this problem. That
is likely due to a mix of factors: bugs leading to bogus values have gotten
less common, process exit releases the lock, and hangs are hard to
debug for "normal" users.
Tom Lane [Thu, 13 Jul 2023 17:07:51 +0000 (13:07 -0400)]
Remove unnecessary pfree() in g_intbig_compress().
GiST compress functions (like all GiST opclass functions) are
supposed to be called in short-lived memory contexts, so that
minor memory leaks in them are not of concern, and indeed
explicit pfree's are likely slightly counterproductive.
But this one in g_intbig_compress() is more than
slightly counterproductive, because it's guarded by
"if (in != DatumGetArrayTypeP(entry->key))" which means
that if this test succeeds, we've detoasted the datum twice.
(And to add insult to injury, the extra detoast result is
leaked.) Let's just drop the whole stanza, relying on the
GiST temporary context mechanism to clean up in good time.
The analogous bit in g_int_compress() is
if (r != (ArrayType *) DatumGetPointer(entry->key))
pfree(r);
which doesn't have the gratuitous-detoast problem so
I left it alone. Perhaps there is a case for removing
unnecessary pfree's more widely, but I'm not sure if it's
worth the code churn.
The potential extra decompress seems expensive enough to
justify calling this a (minor) performance bug and
back-patching.
Konstantin Knizhnik, Matthias van de Meent, Tom Lane
Tom Lane [Mon, 10 Jul 2023 16:14:34 +0000 (12:14 -0400)]
Be more rigorous about local variables in PostgresMain().
Since PostgresMain calls sigsetjmp, any local variables that are not
marked "volatile" have a risk of unspecified behavior. In practice
this means that when control returns via longjmp, such variables might
get reset to their values as of the time of sigsetjmp, depending on
whether the compiler chose to put them in registers or on the stack.
We were careful about this for "send_ready_for_query", but not the
other local variables.
In the case of the timeout_enabled flags, resetting them to
their initial "false" states is actually good, since we do
"disable_all_timeouts()" in the longjmp cleanup code path. If that
does not happen, we risk uselessly calling "disable_timeout()" later,
which is harmless but a little bit expensive. Let's explicitly reset
these flags so that the behavior is correct and platform-independent.
(This change means that we really don't need the new "volatile"
markings after all, but let's install them anyway since any change
in this logic could re-introduce a problem.)
There is no issue for "firstchar" and "input_message" because those
are explicitly reinitialized each time through the query processing
loop. To make that clearer, move them to be declared inside the loop.
That leaves us with all the function-lifespan locals except the
sigjmp_buf itself marked as volatile, which seems like a good policy
to have going forward.
Because of the possibility of extra disable_timeout() calls, this
seems worth back-patching.
Michael Paquier [Mon, 10 Jul 2023 00:40:24 +0000 (09:40 +0900)]
Fix ALTER EXTENSION SET SCHEMA with objects outside an extension's schema
As coded, the command would take the namespace OID from the first
object scanned in pg_depend when switching its namespace dependency
entry to the new one, and use it as the base of comparison for any
follow-up checks. It would also be used as the old namespace OID to
switch *from* for the extension's pg_depend entry. Hence, if the first
object scanned has a namespace different from the one stored in the
extension, we would finish by:
- Not checking that the extension objects map with the extension's
schema.
- Not switching the extension -> namespace dependency entry to the new
namespace provided by the user, making ALTER EXTENSION ineffective.
This issue has existed since this command was introduced in d9572c4 for
relocatable extensions, so backpatch all the way down to 11. The test
case has been provided by Heikki, which I have tweaked a bit to show the
effects on pg_depend for the extension.
Reported-by: Heikki Linnakangas
Author: Michael Paquier, Heikki Linnakangas
Discussion: https://postgr.es/m/20eea594-a05b-4c31-491b-007b6fceef28@iki.fi
Backpatch-through: 11
Andrew Dunstan [Thu, 6 Jul 2023 16:27:40 +0000 (12:27 -0400)]
Skip pg_basebackup long filename test if path too long on Windows
On Windows, it's sometimes difficult to create a file with a path longer
than 255 chars, and if it can be created it might not be seen by the
archiver. This can be triggered by the test for tar backups with
filenames greater than 100 bytes. So we skip that test if the path would
exceed 255.
WAL-log the creation of the init fork of unlogged indexes.
We create a file, so we better WAL-log it. In practice, all the
built-in index AMs and all extensions that I'm aware of write a
metapage to the init fork, which is WAL-logged, and replay of the
metapage implicitly creates the fork too. But if ambuildempty() didn't
write any page, we would miss it.
This can be seen with dummy_index_am. Set up replication, create a
'dummy_index_am' index on an unlogged table, and look at the files
created in the replica: the init fork is not created on the
replica. Dummy_index_am doesn't do anything with the relation files,
however, so it doesn't lead to any user-visible errors.
Backpatch to all supported versions.
Reviewed-by: Robert Haas
Discussion: https://www.postgresql.org/message-id/6e5bbc08-cdfc-b2b3-9e23-1a914b9850a9%40iki.fi
llvm_release_context() called llvm_enter_fatal_on_oom(), but was missing
the corresponding llvm_leave_fatal_on_oom() call. As a result, if JIT was
used at all, we were almost always in the "fatal-on-oom" state.
It only makes a difference if you use an extension written in C++, and
run out of memory in a C++ 'new' call. In that case, you would get a
PostgreSQL FATAL error, instead of the default behavior of throwing a
C++ exception.
Back-patch to all supported versions.
Reviewed-by: Daniel Gustafsson
Discussion: https://www.postgresql.org/message-id/54b78cca-bc84-dad8-4a7e-5b56f764fab5@iki.fi
Ensure that creation of an empty relfile is fsync'd at checkpoint.
If you create a table and don't insert any data into it, the relation file
is never fsync'd. You don't lose data, because an empty table doesn't have
any data to begin with, but if you crash and lose the file, subsequent
operations on the table will fail with "could not open file" error.
To fix, register an fsync request in mdcreate(), like we do for mdwrite().
Per discussion, we probably should also fsync the containing directory
after creating a new file. But that's a separate and much wider issue.
Backpatch to all supported versions.
Reviewed-by: Andres Freund, Thomas Munro
Discussion: https://www.postgresql.org/message-id/d47d8122-415e-425c-d0a2-e0160829702d%40iki.fi
Thomas Munro [Tue, 4 Jul 2023 03:16:34 +0000 (15:16 +1200)]
Re-bin segment when memory pages are freed.
It's OK to be lazy about re-binning memory segments when allocating,
because that can only leave segments in a bin that's too high. We'll
search higher bins if necessary while allocating next time, and
also eventually re-bin, so no memory can become unreachable that way.
However, when freeing memory, the largest contiguous range of free pages
might go up, so we should re-bin eagerly to make sure we don't leave the
segment in a bin that is too low for get_best_segment() to find.
The re-binning code is moved into a function of its own, so it can be
called whenever free pages are returned to the segment's free page map.
Back-patch to all supported releases.
Author: Dongming Liu <ldming101@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com> (earlier version)
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/CAL1p7e8LzB2LSeAXo2pXCW4%2BRya9s0sJ3G_ReKOU%3DAjSUWjHWQ%40mail.gmail.com
Thomas Munro [Mon, 3 Jul 2023 04:20:01 +0000 (16:20 +1200)]
Fix race in SSI interaction with gin fast path.
The ginfast.c code previously checked for conflicts before locking
the relevant buffer, leaving a window where a RW conflict could be
missed. Re-order.
There was also a place where buffer ID and block number were confused
while trying to predicate-lock a page, noted by visual inspection.
Back-patch to all supported releases. Fixes one more problem discovered
with the reproducer from bug #17949, in this case when Dmitry tried
other index types.
Thomas Munro [Mon, 3 Jul 2023 04:18:20 +0000 (16:18 +1200)]
Fix race in SSI interaction with bitmap heap scan.
When performing a bitmap heap scan, we don't want to miss concurrent
writes that occurred after we observed the heap's rs_nblocks, but before
we took predicate locks on index pages. Therefore, we can't skip
fetching any heap tuples that are referenced by the index, because we
need to test them all with CheckForSerializableConflictOut(). The
old optimization that would ignore any references to blocks >=
rs_nblocks gets in the way of that requirement, because it means that
concurrent writes in that window are ignored.
Removing that optimization shouldn't affect correctness at any isolation
level, because any new tuples shouldn't be visible to an MVCC snapshot.
There also shouldn't be any error-causing references to heap blocks past
the end, because we should have held at least an AccessShareLock on the
table before the index scan. It can't get smaller while our transaction
is running. For now, though, we'll keep the optimization at lower
levels to avoid making unnecessary changes in a bug fix.
Back-patch to all supported releases. In release 11, the code is in a
different place but not fundamentally different. Fixes one aspect of
bug #17949.
Thomas Munro [Mon, 3 Jul 2023 04:16:27 +0000 (16:16 +1200)]
Fix race in SSI interaction with empty btrees.
When predicate-locking btrees, we have a special case for completely
empty btrees, since there is no page to lock. This was racy, because,
without buffer lock held, a matching key could be inserted between the
_bt_search() and the PredicateLockRelation() calls.
Fix, by rechecking _bt_search() after taking the relation-level SIREAD
lock, if using SERIALIZABLE isolation and an empty btree is discovered.
Back-patch to all supported releases. Fixes one aspect of bug #17949.
Andrew Dunstan [Mon, 3 Jul 2023 14:06:26 +0000 (10:06 -0400)]
Improve pg_basebackup long file name test Windows robustness
Creation of a file with a very long name can create problems on Windows
due to its file path limits. Work around that by creating the file via a
symlink with a shorter name.
Michael Paquier [Mon, 3 Jul 2023 01:06:20 +0000 (10:06 +0900)]
Make PG_TEST_NOCLEAN work for temporary directories in TAP tests
When set, this environment variable was only effective for data
directories but not for all the other temporary files created by
PostgreSQL::Test::Utils. Keeping the temporary files after a successful
run can be useful for debugging purposes.
The documentation is updated to reflect the new behavior, with contents
available in doc/ since v16 and in src/test/perl/README since v15.
Author: Jacob Champion
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/CAAWbhmgHtDH1SGZ+Fw05CsXtE0mzTmjbuUxLB9mY9iPKgM6cUw@mail.gmail.com
Discussion: https://postgr.es/m/YyPd9unV14SX2bLF@paquier.xyz
Backpatch-through: 11
Michael Paquier [Fri, 30 Jun 2023 04:55:07 +0000 (13:55 +0900)]
Fix marking of indisvalid for partitioned indexes at creation
The logic that introduced partitioned indexes missed a few things when
invalidating a partitioned index at creation time, even though the code
is written to handle recursion:
1) If created from scratch because a mapping index could not be found,
the new index created could be itself invalid, if for example it was a
partitioned index with one of its leaves invalid.
2) A CCI was missing when indisvalid is set for a parent index, leading
to inconsistent trees when recursing across more than one level for a
partitioned index creation if an invalidation of the parent was
required.
This could lead to the creation of a partition index tree where some of
the partitioned indexes are marked as invalid, but some of the parents
are marked valid, which is not something that should happen (as
validatePartitionedIndex() defines, indisvalid is switched to true for a
partitioned index iff all its partitions are themselves valid).
This patch makes sure that indisvalid is set to false on a partitioned
index if at least one of its partition is invalid. The flag is set to
true if *all* its partitions are valid.
The regression test added in this commit abuses a failed concurrent
index creation, marked as invalid, that maps to an index created on
its partitioned table afterwards.
Reported-by: Alexander Lakhin
Reviewed-by: Alexander Lakhin
Discussion: https://postgr.es/m/14987634-43c0-0cb3-e075-94d423607e08@gmail.com
Backpatch-through: 11
Tom Lane [Thu, 29 Jun 2023 14:19:10 +0000 (10:19 -0400)]
Fix order of operations in ExecEvalFieldStoreDeForm().
If the given composite datum is toasted out-of-line,
DatumGetHeapTupleHeader will perform database accesses to detoast it.
That can invalidate the result of get_cached_rowtype, as documented
(perhaps not plainly enough) in that function's API spec; which leads
to strange errors or crashes when we try to use the TupleDesc to read
the tuple. In short then, trying to update a field of a composite
column could fail intermittently if the overall column value is wide
enough to require toasting.
We can fix the bug at no cost by just changing the order of
operations, since we don't need the TupleDesc until after detoasting.
(Other callers of get_cached_rowtype appear to get this right already,
so there's only one bug.)
Note that the added regression test case reveals this bug reliably
only with debug_discard_caches/CLOBBER_CACHE_ALWAYS.
Per bug #17994 from Alexander Lakhin. Sadly, this patch does not fix
the missing-values issue revealed in the bug discussion; we'll need
some more work to cover that.
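A hedged sketch of the affected pattern (names hypothetical); the
composite value must be wide enough to be toasted out-of-line for the
intermittent failure to be possible:
    CREATE TYPE comp AS (a int, b text);
    CREATE TABLE t (id int, c comp);
    INSERT INTO t VALUES (1, ROW(1, repeat('x', 100000))::comp);
    UPDATE t SET c.a = 2 WHERE id = 1;  -- field store on a toasted value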
Michael Paquier [Wed, 28 Jun 2023 06:57:55 +0000 (15:57 +0900)]
Ignore invalid indexes when enforcing index rules in ALTER TABLE ATTACH PARTITION
A portion of ALTER TABLE .. ATTACH PARTITION is to ensure that the
partition being attached to the partitioned table has a correct set of
indexes, so that there is a consistent index mapping between the
partitioned table and its soon-to-be partition. However, as introduced
in 8b08f7d, the current logic could choose an invalid index as a match,
which is something that can exist when dealing with more than two levels
of partitioning, like attaching a partitioned table (that has
partitions, with an index created by CREATE INDEX ON ONLY) to another
partitioned table.
A partitioned index with indisvalid set to false is equivalent to an
incomplete partition tree, meaning that an invalid partitioned index
does not have indexes defined in all its partitions. Hence, choosing an
invalid partitioned index can create inconsistent partition index trees,
where the parent being attached to is valid, but its partition may be
invalid.
In the report from Alexander Lakhin, this showed up as an assertion
failure when validating an index. Without assertions enabled, the
partition index tree would be actually broken, as indisvalid should
be switched to true for a partitioned index once all its partitions are
themselves valid. With two levels of partitioning, the top partitioned
table used a valid index and was able to link to an invalid index stored
on its partition, itself a partitioned table.
I have studied a few options here (like the possibility to switch
indisvalid to false for the parent), but came down to the conclusion
that we'd better rely on a simple rule: invalid indexes had better never
be chosen, so that the attached partition uses and creates indexes that
the parent expects. Some regression tests are added to provide some
coverage. Note that the existing coverage is not impacted.
This is a problem since partitioned indexes exist, so backpatch all the
way down to v11.
Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/14987634-43c0-0cb3-e075-94d423607e08@gmail.com
Backpatch-through: 11
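A hedged sketch of the shape of the problematic setup (names
hypothetical), using more than two levels of partitioning and an index
created with CREATE INDEX ON ONLY:
    CREATE TABLE parent (a int) PARTITION BY LIST (a);
    CREATE INDEX parent_idx ON parent (a);
    CREATE TABLE child (a int) PARTITION BY LIST (a);
    CREATE TABLE child1 PARTITION OF child FOR VALUES IN (1);
    CREATE INDEX child_idx ON ONLY child (a);  -- stays invalid
    ALTER TABLE parent ATTACH PARTITION child FOR VALUES IN (1);
    -- previously, the invalid child_idx could be chosen as the match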
Tom Lane [Sat, 24 Jun 2023 21:18:08 +0000 (17:18 -0400)]
Check for interrupts and stack overflow in TParserGet().
TParserGet() recurses for some token types, meaning it's possible
to drive it to stack overflow. Since this is a minority behavior,
I chose to add the check_stack_depth() call to the two places that
recurse rather than doing it during every single call.
While at it, add CHECK_FOR_INTERRUPTS(), because this can run
unpleasantly long for long inputs.
Per bug #17995 from Zuming Jiang. This is old, so back-patch
to all supported branches.
Peter Eisentraut [Sun, 19 Jul 2020 10:14:42 +0000 (12:14 +0200)]
Define OPENSSL_API_COMPAT
This avoids deprecation warnings from newer OpenSSL versions (3.0.0 in
particular).
This was originally applied as 4d3db13 for v14 and newer versions,
but not to the older branches out of caution; this commit closes the
gap to remove all these deprecation warnings on all the branches still
supported.
OPENSSL_API_COMPAT's value is set based on the oldest version of OpenSSL
supported on a branch: 1.0.1 for Postgres 13 and 0.9.8 for Postgres 11
and 12.
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/FEF81714-D479-4512-839B-C769D2605F8A@yesql.se
Discussion: https://postgr.es/m/ZJJmOH+hIOSoesux@paquier.xyz
Backpatch-through: 11
Amit Kapila [Thu, 22 Jun 2023 06:19:10 +0000 (11:49 +0530)]
Doc: Clarify the behavior of triggers/rules in a logical subscriber.
By default, triggers and rules do not fire on a logical replication
subscriber based on the "session_replication_role" GUC being set to
"replica". However, the docs in the logical replication section assumed
that the reader understood how this GUC worked. This modifies the docs to
be more explicit and links back to the GUC itself.
Author: Jonathan Katz, Peter Smith
Reviewed-by: Vignesh C, Euler Taveira
Backpatch-through: 11
Discussion: https://postgr.es/m/5bb2c9a2-499f-e1a2-6e33-5ce96b35cc4a@postgresql.org
David Rowley [Thu, 22 Jun 2023 00:51:36 +0000 (12:51 +1200)]
Doc: mention that extended stats aren't used for joins
Statistics defined by the CREATE STATISTICS command are only used to
assist with the selectivity estimations of base relations, never for
joins. Here we mention this fact in the notes section of the CREATE
STATISTICS command.
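For example (table names hypothetical), statistics such as the
following assist the first query's selectivity estimate but not the
second query's join clause:
    CREATE TABLE t (a int, b int);
    CREATE TABLE u (a int, b int);
    CREATE STATISTICS t_dep (dependencies) ON a, b FROM t;
    ANALYZE t;
    -- assisted:     SELECT * FROM t WHERE a = 1 AND b = 1;
    -- not assisted: SELECT * FROM t JOIN u ON t.a = u.a AND t.b = u.b;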
Peter Geoghegan [Thu, 22 Jun 2023 00:41:48 +0000 (17:41 -0700)]
nbtree VACUUM: cope with topparent inconsistencies.
Avoid "right sibling %u of block %u is not next child" errors when
vacuuming a corrupt nbtree index. Just LOG the issue and press on.
That way VACUUM will have a decent chance of finishing off all required
processing for the index (and for the table as a whole).
This is similar to recent work from commit 5abff197, as well as work
from commit 5b861baa (later backpatched as commit 43e409ce), which
taught nbtree VACUUM to keep going when its "re-find" check fails. The
hardening added by this commit takes place directly after the "re-find"
check, right before the critical section for the first stage of page
deletion.
Tom Lane [Wed, 21 Jun 2023 15:07:11 +0000 (11:07 -0400)]
Avoid Assert failure when processing empty statement in aborted xact.
exec_parse_message() wants to create a cached plan in all cases,
including for empty input. The empty-input path does not have
a test for being in an aborted transaction, making it possible
that plancache.c will fail due to trying to do database lookups
even though there's no real work to do.
One solution would be to throw an aborted-transaction error in
this path too, but it's not entirely clear whether the lack of
such an error was intentional or whether some clients might be
relying on non-error behavior. Instead, let's hack plancache.c
so that it treats empty statements with the same logic it
already had for transaction control commands, ensuring that it
can soldier through even in an already-aborted transaction.
Per bug #17983 from Alexander Lakhin. Back-patch to all
supported branches.
Amit Kapila [Wed, 21 Jun 2023 04:01:32 +0000 (09:31 +0530)]
Fix the errhint message and docs for drop subscription failure.
The existing errhint message and docs were missing the fact that we can't
disassociate from the slot unless the subscription is disabled.
Author: Robert Sjöblom, Peter Smith
Reviewed-by: Peter Eisentraut, Amit Kapila
Backpatch-through: 11
Discussion: https://postgr.es/m/807bdf85-61ea-88e2-5712-6d9fcd4eabff@fortnox.se
David Rowley [Mon, 19 Jun 2023 01:03:17 +0000 (13:03 +1200)]
Don't use partial unique indexes for unique proofs in the planner
Here we adjust relation_has_unique_index_for() so that it no longer makes
use of partial unique indexes as uniqueness proofs. It is incorrect to
use these because check_index_predicates(), when setting predOK, makes
use of not only baserestrictinfo quals as proofs, but also quals from
join conditions. For relation_has_unique_index_for()'s case, we
need to know the relation is unique for a given set of columns before any
joins are evaluated, so if predOK was only set to true due to some join
qual, then it's unsafe to use such indexes in
relation_has_unique_index_for(). The final plan may not even make use
of that index, which could result in reading tuples that are not as
unique as the planner previously expected them to be.
Bug: #17975
Reported-by: Tor Erik Linnerud
Backpatch-through: 11, all supported versions
Discussion: https://postgr.es/m/17975-98a90c156f25c952%40postgresql.org
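A minimal sketch of why this is unsafe (names hypothetical): the index
proves uniqueness only for rows satisfying its predicate, which cannot
be established before the join is evaluated:
    CREATE TABLE t (a int, b int);
    CREATE TABLE s (a int);
    CREATE UNIQUE INDEX t_a_partial ON t (a) WHERE b = 1;
    -- t.a is unique only among rows with b = 1, so the planner must
    -- not treat t as unique on (a) for:
    --     SELECT * FROM s JOIN t ON s.a = t.a;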
Michael Paquier [Thu, 15 Jun 2023 04:45:44 +0000 (13:45 +0900)]
intarray: Prevent out-of-bound memory reads with gist__int_ops
As gist__int_ops stands in intarray, it is possible to store GiST
entries for leaf pages that can cause corruptions when decompressed.
Leaf nodes are stored as decompressed all the time by the compression
method, and the decompression method should map with that, retrieving
the contents of the page without doing any decompression. However, the
code authorized the insertion of leaf page data with a higher number of
array items than what can be supported, generating a NOTICE message to
inform about this matter (199 for an 8k page, for reference). When
calling the decompression method, a decompression would be attempted on
this leaf node item but the contents should be retrieved as they are.
The NOTICE message generated when dealing with the compression of a leaf
page and too many elements in the input array for gist__int_ops has been
introduced by 08ee64e, removing the marker stored in the array to track
if this is actually a leaf node. However, it also missed the fact that
the decompression path should do nothing for a leaf page. Hence, as the
code stands, a too-large array would be stored as uncompressed but the
decompression path would attempt a decompression rather than retrieving
the contents as they are.
This leads to various problems. First, even if 08ee64e tried to address
that, it is possible to do out-of-bound chunk writes with a large input
array, with the backend informing about that with WARNINGs. On
decompression, retrieving the stored leaf data would lead to incorrect
memory reads, leading to crashes or even worse.
Perhaps somebody would be interested in expanding the number of array
items that can be handled in a leaf page for this operator in the
future, which would require revisiting the choice done in 08ee64e, but
based on the lack of reports about this problem since 2005, that seems
unlikely. For now, this commit prevents the insertion of data for leaf
pages when using more array items than the code can handle on
decompression, switching the NOTICE message to an ERROR. If one wishes
to use more array items, gist__intbig_ops is an optional choice.
While on it, use ERRCODE_PROGRAM_LIMIT_EXCEEDED as error code when a
limit is reached, because that's what the module is facing in such
cases.
Author: Ankit Kumar Pandey, Alexander Lakhin
Reviewed-by: Richard Guo, Michael Paquier
Discussion: https://postgr.es/m/796b65c3-57b7-bddf-b0d5-a8afafb8b627@gmail.com
Discussion: https://postgr.es/m/17888-f72930e6b5ce8c14@postgresql.org
Backpatch-through: 11
Tom Lane [Tue, 13 Jun 2023 19:58:37 +0000 (15:58 -0400)]
Correctly update hasSubLinks while mutating a rule action.
rewriteRuleAction neglected to check for SubLink nodes in the
securityQuals of range table entries. This could lead to failing
to convert such a SubLink to a SubPlan, resulting in assertion
crashes or weird errors later in planning.
In passing, fix some poor coding in rewriteTargetView:
we should not pass the source parsetree's hasSubLinks
field to ReplaceVarsFromTargetList's outer_hasSubLinks.
ReplaceVarsFromTargetList knows enough to ignore that
when a Query node is passed, but it's still confusing
and bad precedent: if we did try to update that flag
we'd be updating a stale copy of the parsetree.
Per bug #17972 from Alexander Lakhin. This has been broken since
we added RangeTblEntry.securityQuals (although the presented test
case only fails back to 215b43cdc), so back-patch all the way.
Michael Paquier [Mon, 12 Jun 2023 00:14:20 +0000 (09:14 +0900)]
hstore: Tighten key/value parsing check for whitespaces
isspace() can be locale-sensitive depending on the platform, causing
hstore to treat as whitespace some characters that it should not.
For example, U+0105, being decoded as 0xC4 0x85 in UTF-8, would be
discarded from the input given.
This problem is similar to 9ae2661, though it was missed that hstore
can also manipulate non-ASCII inputs, so replace the existing isspace()
calls with scanner_isspace().
This problem has existed for a long time, so backpatch all the way down.
Author: Evan Jones
Discussion: https://postgr.es/m/CA+HWA9awUW0+RV_gO9r1ABZwGoZxPztcJxPy8vMFSTbTfi4jig@mail.gmail.com
Backpatch-through: 11
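For example, with a key containing U+0105 ("ą", encoded as 0xC4 0x85 in
UTF-8), a locale-sensitive isspace() could previously discard those
bytes from the input:
    CREATE EXTENSION IF NOT EXISTS hstore;
    SELECT 'ą=>1'::hstore;   -- expected result: "ą"=>"1"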
Michael Paquier [Sun, 11 Jun 2023 01:34:00 +0000 (10:34 +0900)]
Fix missing initializations of MyProc.delayChkptEnd
This commit fixes an oversight introduced in 10520f4, that has added
delayChkptEnd to PGPROC to avoid ABI breakages on stable branches, where
two spots missed initializing this variable (delayChkpt was
switched back from int to bool, and it was initialized as 0 so there were
no consequences for it):
- InitProcess(), where the per-process data structures of a backend are
initialized.
- InitAuxiliaryProcess(), same but for auxiliary processes.
An interruption during relation truncation while this flag is set could
cause an assertion failure when a follow-up process does a relation
truncation while reusing the same PGPROC entry. A second effect could
be incorrect checkpoint end delays.
While on it, add an assertion in ProcArrayClearTransaction() for
delayChkptEnd to be in line with 5788e25. This is needed only for v14.
This issue affects v11~v14, but not v15~, as we use a single field
called delayChkptFlags to delay checkpoints there.
Michael Paquier [Fri, 9 Jun 2023 02:56:48 +0000 (11:56 +0900)]
Refactor routine to find single log content pattern in TAP tests
The same routine to check if a specific pattern can be found in the
server logs was copied across four different test scripts. This
refactors the code to use a single routine located in
PostgreSQL::Test::Cluster,
named log_contains, to grab the contents of the server logs and check
for a specific pattern.
On HEAD, the code previously used assumed that slurp_file() could not
handle an undefined offset, setting it to zero, but slurp_file() does
do an extra fseek() before retrieving the log contents only if an offset
is defined. In two places, the test was retrieving the full log
contents with slurp_file() after calling substr() to apply an offset,
ignoring that slurp_file() would be able to handle that.
Backpatch all the way down to ease the introduction of new tests that
could rely on the new routine.
Author: Vignesh C
Reviewed-by: Andrew Dunstan, Dagfinn Ilmari Mannsåker, Michael Paquier
Discussion: https://postgr.es/m/CALDaNm0YSiLpjCmajwLfidQrFOrLNKPQir7s__PeVvh9U3uoTQ@mail.gmail.com
Backpatch-through: 11
Fujii Masao [Thu, 8 Jun 2023 11:11:52 +0000 (20:11 +0900)]
doc: Fix example command for ALTER FOREIGN TABLE ... OPTIONS.
Previously, the documentation's example command for
ALTER FOREIGN TABLE ... OPTIONS incorrectly included both the option
name and value with the DROP operation. The correct syntax for the
DROP operation requires only the name of the option, so this commit
fixes the example by removing the option value from the DROP
operation.
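The corrected example takes roughly this shape (option names are
placeholders, as in the documentation):

    ALTER FOREIGN TABLE myschema.distributors
        OPTIONS (ADD opt1 'value', SET opt2 'value2', DROP opt3);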
Back-patch to all supported versions.
Author: Mehmet Emin KARAKAS <emin100@gmail.com>
Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/CANQrdXAHzbcEYhjGoe5A42OmfvdQhHFJzyKj9gJvHuDKyOF5Ng@mail.gmail.com
Peter Geoghegan [Thu, 25 May 2023 22:32:45 +0000 (15:32 -0700)]
nbtree VACUUM: cope with right sibling link corruption.
Avoid "right sibling's left-link doesn't match" errors when vacuuming a
corrupt nbtree index. Just LOG the issue and press on. That way VACUUM
will have a decent chance of finishing off all required processing for
the index (and for the table as a whole).
This error was seen in the field from time to time (it's more than a
theoretical risk), so giving VACUUM the ability to press on like this
has real value. Nothing short of a REINDEX is expected to fix the
underlying index corruption, so giving up (by throwing an error) risks
making a bad situation far worse. Anything that blocks forward progress
by VACUUM like this might go unnoticed for a long time. This could
eventually lead to a wraparound/xidStopLimit outage.
Note that _bt_unlink_halfdead_page() has always been able to bail on
page deletion when the target page's left sibling page was in an
inconsistent state. It now does the same thing (returns false to back
out of the second phase of deletion) when it notices sibling link
corruption in the target page's right sibling page.
This is similar to the work from commit 5b861baa (later backpatched as
commit 43e409ce), which taught nbtree to press on with vacuuming an
index when page deletion fails to "re-find" a downlink in the target
page's parent page. The "re-find" check seems to make VACUUM bail on
page deletion more often in practice, but there is no reason to take any
chances here.
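Note that pressing on does not repair the index; once these LOG
messages appear, the expected response is a rebuild, along the lines
of (index name hypothetical):

    -- VACUUM now gets past the corrupt sibling link, but only a
    -- rebuild fixes the underlying damage.
    REINDEX INDEX my_corrupt_idx;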
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/CAH2-Wzko2q2kP1+UvgJyP9g0mF4hopK0NtQZcxwvMv9_ytGhkQ@mail.gmail.com
Backpatch: 11- (all supported versions).
Tom Lane [Fri, 19 May 2023 14:57:46 +0000 (10:57 -0400)]
Avoid naming conflict between transactions.sql and namespace.sql.
Commits 681d9e462 et al added a test case in namespace.sql that
implicitly relied on there not being a table "public.abc".
However, the concurrently-run transactions.sql test creates precisely
such a table, so with the right timing you'd get a failure.
Creating a table named as generically as "abc" in a common schema
seems like bad practice, so fix this by changing the name of
transactions.sql's table. (Compare 2cf8c7aa4.)
Tomas Vondra [Thu, 18 May 2023 22:00:22 +0000 (00:00 +0200)]
Fix handling of empty ranges and NULLs in BRIN
BRIN indexes did not properly distinguish between summaries for empty
(no rows) and all-NULL ranges, treating them as essentially the same
thing. Summaries were initialized with allnulls=true, and opclasses
simply reset allnulls to false when processing the first non-NULL value.
This however produces incorrect results if the range starts with a NULL
value (or a sequence of NULL values), in which case we forget the range
contains NULL values when adding the first non-NULL value.
This happens because the allnulls flag is used for two separate
purposes - to mark empty ranges (not representing any rows yet) and
ranges containing only NULL values.
Opclasses don't know which of these cases it is, and so don't know
whether to set hasnulls=true. Setting the flag in both cases would make
it correct, but it would also make BRIN indexes useless for queries with
IS NULL clauses. All ranges start empty (and thus allnulls=true), so all
ranges would end up with either allnulls=true or hasnulls=true.
The severity of the issue is somewhat reduced by the fact that it only
happens when adding values to an existing summary with allnulls=true.
This can happen e.g. for small tables (because a summary for the first
range exists for all BRIN indexes), or for tables with a large fraction of
NULL values in the indexed columns.
Bulk summarization (e.g. during CREATE INDEX or automatic summarization)
that processes all values at once is not affected by this issue. In this
case the flags were updated in a slightly different way, not forgetting
the NULL values.
To identify empty ranges we use a new flag, stored in an unused bit in
the BRIN tuple header so the on-disk format remains the same. A matching
flag is added to BrinMemTuple, into a 3B gap after bt_placeholder.
That means there's no risk of ABI breakage, although we don't actually
pass the BrinMemTuple to any public API.
We could also skip storing index tuples for empty summaries, but then
we'd have to always process such ranges - even if there are no rows in
large parts of the table (e.g. after a bulk DELETE), it would still
require reading the pages etc. So we store them, but ignore them when
building the bitmap.
Backpatch to 11. The issue exists since BRIN indexes were introduced in
9.5, but older releases are already EOL.
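A minimal sketch of the affected sequence (object names are
illustrative; the enable_seqscan toggle just forces the bitmap scan):

    CREATE TABLE brin_demo (a int);
    CREATE INDEX brin_demo_idx ON brin_demo USING brin (a);
    INSERT INTO brin_demo VALUES (NULL);  -- summary stays allnulls=true
    INSERT INTO brin_demo VALUES (1);     -- pre-fix: clears allnulls
                                          -- without setting hasnulls
    SET enable_seqscan = off;
    SELECT * FROM brin_demo WHERE a IS NULL;  -- pre-fix: could miss
                                              -- the NULL row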
Reviewed-by: Justin Pryzby, Matthias van de Meent, Alvaro Herrera
Discussion: https://postgr.es/m/402430e4-7d9d-6cf1-09ef-464d80afff3b@enterprisedb.com
Backpatch-through: 11
Tomas Vondra [Thu, 18 May 2023 11:00:31 +0000 (13:00 +0200)]
Fix handling of NULLs when merging BRIN summaries
When merging BRIN summaries, union_tuples() did not correctly update the
target hasnulls/allnulls flags. When merging an all-NULL summary into a
summary without any NULL values, the result had both flags set to false
(instead of having hasnulls=true).
This happened because the code only considered the hasnulls flags,
ignoring the possibility that the source summary has allnulls=true.
Discovered while investigating issues with handling empty BRIN ranges
and handling of NULL values, but it's a separate problem (has nothing to
do with empty ranges).
Fixed by considering both flags on the source summary, and updating the
hasnulls flag on the target summary.
Backpatch to 11. The bug has existed since 9.5 (where BRIN indexes
were introduced), but those releases are already EOL.
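To observe the summary flags directly, pageinspect can be used (a
hedged sketch, reusing the hypothetical brin_demo_idx index from the
previous commit's example):

    CREATE EXTENSION IF NOT EXISTS pageinspect;
    -- Block 0 is the metapage and block 1 the revmap; regular BRIN
    -- data pages start at block 2.
    SELECT itemoffset, blknum, attnum, allnulls, hasnulls, placeholder
    FROM brin_page_items(get_raw_page('brin_demo_idx', 2),
                         'brin_demo_idx');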