Nathan Bossart [Mon, 7 Oct 2024 21:49:20 +0000 (16:49 -0500)]
vacuumdb: Schema-qualify operator in catalog query's WHERE clause.
Commit 1ab67c9dfa, which modified this catalog query so that it
doesn't return temporary relations, forgot to schema-qualify the
operator. A comment earlier in the function implores us to fully
qualify everything in the query:
* Since we execute the constructed query with the default search_path
* (which could be unsafe), everything in this query MUST be fully
* qualified.
This commit fixes that. While at it, add a newline for consistency
with surrounding code.
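As a hedged illustration (the clause text here is paraphrased, not the
committed hunk), the qualification looks roughly like this where the
query string is assembled with libpq's PQExpBuffer API:

    /* Qualify the operator so an unsafe search_path cannot capture it */
    appendPQExpBufferStr(&catalog_query,
                         " AND c.relpersistence OPERATOR(pg_catalog.!=) 't'\n");

OPERATOR(pg_catalog.!=) forces resolution of != through pg_catalog
instead of the caller's search_path.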
Nathan Bossart [Mon, 7 Oct 2024 18:51:03 +0000 (13:51 -0500)]
Fix Y2038 issues with MyStartTime.
Several places treat MyStartTime as a "long", which is only 32 bits
wide on some platforms. In reality, MyStartTime is a pg_time_t,
i.e., a signed 64-bit integer. This will lead to interesting bugs
on the aforementioned systems in 2038 when signed 32-bit integers
are no longer sufficient to store Unix time (e.g., "pg_ctl start"
hanging). To fix, ensure that MyStartTime is handled as a 64-bit
value everywhere. (Of course, users will need to ensure that
time_t is 64 bits wide on their system, too.)
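As a hedged sketch of the class of fix (not the exact committed hunks),
the pattern is to stop formatting the value through "long":

    /* Before: value truncated where long is 32 bits (e.g. 64-bit Windows) */
    snprintf(strtime, sizeof(strtime), "%ld", (long) MyStartTime);

    /* After: handle it as int64; INT64_FORMAT is PostgreSQL's portable
     * printf format specifier for 64-bit integers */
    snprintf(strtime, sizeof(strtime), INT64_FORMAT, (int64) MyStartTime);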
Co-authored-by: Max Johnson
Discussion: https://postgr.es/m/CO1PR07MB905262E8AC270FAAACED66008D682%40CO1PR07MB9052.namprd07.prod.outlook.com
Backpatch-through: 12
Amit Kapila [Mon, 7 Oct 2024 09:34:05 +0000 (15:04 +0530)]
Fix fetching default toast value during decoding of in-progress transactions.
During logical decoding of in-progress transactions, we perform the toast
table scan while fetching the default toast value for an attribute. We
forgot to initialize the flag during this scan to indicate that the system
table scan is in progress. We need this flag to ensure that during logical
decoding we never directly access the tableam or heap APIs because we check
for concurrent aborts only in systable_* APIs.
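A hedged sketch of the general pattern (bsysscan and CheckXidAlive are
real flags in the decoding machinery; the exact placement within the
toast-scan path is illustrative only):

    /* Flag that a system-table scan is in progress, so the
     * concurrent-abort checks tied to CheckXidAlive still apply while
     * the toast table is scanned for the attribute's default value. */
    bsysscan = true;
    /* ... scan the toast table and fetch the default toast value ... */
    bsysscan = false;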
Reported-by: Alexander Lakhin
Author: Takeshi Ideriha, Hou Zhijie
Reviewed-by: Amit Kapila, Hou Zhijie
Backpatch-through: 14
Discussion: https://postgr.es/m/18641-6687273b7f15269d@postgresql.org
Tom Lane [Sun, 6 Oct 2024 20:03:48 +0000 (16:03 -0400)]
Ignore not-yet-defined Portals in pg_cursors view.
pg_cursor() supposed that any Portal it finds in the hash table must
have sourceText set up, but there's an edge case where that is not so.
A newly-created Portal has sourceText = NULL, and that doesn't change
until PortalDefineQuery is called. In SPI_cursor_open_internal,
we perform GetCachedPlan between CreatePortal and PortalDefineQuery,
and it's possible for user-defined code to execute during that
planning and cause a fetch from the pg_cursors view, resulting in a
null-pointer-dereference crash. (It looks like the same could happen
in exec_bind_message, but I've not tried to provoke a failure there.)
I considered trying to fix this by setting sourceText sooner, but
there may be instances of this same calling pattern in extensions,
and we couldn't be sure they'd get the memo promptly. It seems
better to redefine pg_cursor as not showing Portals that have
not yet had PortalDefineQuery called on them, which we can do by
just skipping them if sourceText is still NULL.
(Before a1c692358, pg_cursor would instead return a row with NULL
in the statement column. We could revert to that behavior but it
doesn't really seem like a better definition, especially since our
documentation doesn't suggest that the column could be NULL.)
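A minimal sketch of the new check in pg_cursor()'s loop over the portal
hash table (surrounding code elided):

    /* Skip Portals that have not yet had PortalDefineQuery() called */
    if (portal->sourceText == NULL)
        continue;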
Per report from PetSerAl. Back-patch to all supported branches.
Thomas Munro [Sat, 5 Oct 2024 00:48:33 +0000 (13:48 +1300)]
Reject non-ASCII locale names.
Commit bf03cfd1 started scanning all available BCP 47 locale names on
Windows. This caused an abort/crash in the Windows runtime library if
the default locale name contained non-ASCII characters, because of our
use of the setlocale() save/restore pattern with "char" strings. After
switching to another locale with a different encoding, the saved name
could no longer be understood, and setlocale() would abort.
"Turkish_Türkiye.1254" is the example from recent reports, but there are
other examples of countries and languages with non-ASCII characters in
their names, and they appear in Windows' (old style) locale names.
To defend against this:
1. In initdb, reject non-ASCII locale names given explicitly on the
command line, or returned by the operating system environment with
setlocale(..., ""), or "canonicalized" by the operating system when we
set it.
2. In initdb only, perform the save-and-restore with Windows'
non-standard wchar_t variant of setlocale(), so that it is not subject
to round trip failures stemming from char string encoding confusion.
3. In the backend, we don't have to worry about the save-and-restore
problem because we have already vetted the defaults, so we just have to
make sure that CREATE DATABASE also rejects non-ASCII names in any new
databases. SET lc_XXX doesn't suffer from the problem, but the ban
applies to it too because it uses check_locale(). CREATE COLLATION
doesn't suffer from the problem either, but it doesn't use
check_locale() so it is not included in the new ban for now, to minimize
the change.
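As a hedged sketch, the rejection described in point 1 boils down to
refusing any locale name containing a byte with the high bit set
(locale_name_is_ascii is a hypothetical helper name, not necessarily
what the patch uses):

    #include <stdbool.h>

    static bool
    locale_name_is_ascii(const char *name)
    {
        for (const unsigned char *p = (const unsigned char *) name; *p; p++)
        {
            if (*p >= 128)
                return false;   /* high-bit-set byte: non-ASCII */
        }
        return true;
    }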
Anyone who encounters the new error message should either create a new
duplicated locale with an ASCII-only name using Windows Locale Builder,
or consider using BCP 47 names like "tr-TR". Users already couldn't
initialize a cluster with "Turkish_Türkiye.1254" on PostgreSQL 16+, but
the new failure mode is an error message that explains why, instead of a
crash.
Back-patch to 16, where bf03cfd1 landed. Older versions are affected
in theory too, but only 16 and later are causing crash reports.
Reviewed-by: Andrew Dunstan <andrew@dunslane.net> (the idea, not the patch)
Reported-by: Haifang Wang (Centific Technologies Inc) <v-haiwang@microsoft.com>
Discussion: https://postgr.es/m/PH8PR21MB3902F334A3174C54058F792CE5182%40PH8PR21MB3902.namprd21.prod.outlook.com
Tom Lane [Wed, 2 Oct 2024 21:30:36 +0000 (17:30 -0400)]
Parse libpq's "keepalives" option more like other integer options.
Use pqParseIntParam (nee parse_int_param) instead of using strtol
directly. This allows trailing whitespace, which the previous coding
didn't, and makes the spelling of the error message consistent with
other similar cases.
This seems to be an oversight in commit e7a221797, which introduced
parse_int_param. That fixed places that were using atoi(), but missed
this place which was randomly using strtol() instead.
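A hedged sketch of the call-site change (surrounding code paraphrased):

    int         val;

    /* pqParseIntParam permits trailing whitespace and produces an error
     * message worded consistently with other integer options */
    if (!pqParseIntParam(conn->keepalives, &val, conn, "keepalives"))
        return -1;              /* error text already stored in conn */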
Ordinarily I'd consider this minor cleanup not worth back-patching.
However, it seems that ecpg assumes it can add trailing whitespace
to URL parameters, so that use of the keepalives option fails in
that context. Perhaps that's worth improving as a separate matter.
In the meantime, back-patch this to all supported branches.
Michael Paquier [Wed, 2 Oct 2024 02:12:48 +0000 (11:12 +0900)]
doc: Clarify name of files generated by pg_waldump --save-fullpage
The fork name is always separated from the block number by an underscore
in the names of the generated files, but the docs stuck them together
without a separator, which was confusing.
Author: Christoph Berg
Discussion: https://postgr.es/m/ZvxtSLiix9eceMRM@msg.df7cb.de
Backpatch-through: 16
Michael Paquier [Tue, 1 Oct 2024 06:44:09 +0000 (15:44 +0900)]
Fix race condition in COMMIT PREPARED causing orphaned 2PC files
COMMIT PREPARED removes on-disk 2PC files near its end, but the state
tracking whether a file is on disk was read from shared memory without
holding the two-phase state lock.
Because of that, there was a small window where a second backend doing a
PREPARE TRANSACTION could reuse the GlobalTransaction put back into the
2PC free list by the COMMIT PREPARED, overwriting the "ondisk" flag read
afterwards by the COMMIT PREPARED to decide if its on-disk two-phase
state file should be removed, preventing the file deletion.
This commit fixes the issue by reading the "ondisk" flag in the
GlobalTransaction while holding the two-phase state lock, rather than
from shared memory after its entry has been added to the free list.
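A hedged sketch of the fixed ordering (the lock and flag names are real;
the surrounding code is paraphrased):

    LWLockAcquire(TwoPhaseStateLock, LW_EXCLUSIVE);
    ondisk = gxact->ondisk;     /* capture before the entry is recycled */
    /* ... return gxact to the 2PC free list ... */
    LWLockRelease(TwoPhaseStateLock);

    if (ondisk)
        RemoveTwoPhaseFile(xid, true);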
Orphaned two-phase state files flushed to disk after a checkpoint are
discarded at the beginning of recovery. However, a truncation of
pg_xact/ would make the startup process issue a FATAL when it cannot
read the SLRU page holding the state of the transaction whose 2PC file
was orphaned, a step necessary to decide whether the 2PC file should be
removed. Manually removing the file is necessary in this case.
Issue introduced by effe7d9552dd, so backpatch all the way down.
reindexdb: Skip reindexing temporary tables and indexes.
Reindexing temp tables or indexes of other sessions is not allowed.
However, reindexdb in parallel mode previously listed them as
the objects to process, leading to failures.
This commit ensures reindexdb in parallel mode skips temporary tables
and indexes by adding a condition based on the relpersistence column
in pg_class to the object listing queries, preventing these issues.
Note that this commit does not affect reindexdb when temporary tables
or indexes are explicitly specified using the -t or -i options;
reindexdb in that case still does not skip them and can cause an error.
Back-patch to v13 where parallel mode was introduced in reindexdb.
Author: Fujii Masao
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/5f37ee56-14fb-44fe-9150-9eb97e10538b@oss.nttdata.com
Remove NULL dereference from RenameRelationInternal().
Defect in last week's commit aac2c9b4fde889d13f859c233c2523345e72d32b,
per Coverity. Reaching this would need catalog corruption. Back-patch
to v12, like that commit.
Michael Paquier [Fri, 27 Sep 2024 00:40:16 +0000 (09:40 +0900)]
Fix incorrect memory access in VACUUM FULL with invalid toast indexes
Invalid toast indexes are skipped in reindex_relation(). These would be
remnants of a failed REINDEX CONCURRENTLY and should never be rebuilt,
as there can only be one valid toast index at a time.
REINDEX_REL_SUPPRESS_INDEX_USE, used by CLUSTER and VACUUM FULL, needs
to maintain a list of the indexes being processed. The list of indexes
is retrieved from the relation cache, and includes invalid indexes. The
code missed that invalid toast indexes are ignored in
reindex_relation(), as rebuilding them leads to a hard failure in
reindex_index(); they were left in the reindex pending list, making the
list inconsistent when rechecked. The incorrect memory access happened
during the scan of pg_class performed to refresh
pg_database.datfrozenxid.
This issue exists since REINDEX CONCURRENTLY exists, where invalid toast
indexes can exist, so backpatch all the way down.
Reported-by: Alexander Lakhin
Author: Tender Wang
Discussion: https://postgr.es/m/18630-9aed99c38830657d@postgresql.org
Backpatch-through: 12
Michael Paquier [Wed, 25 Sep 2024 05:44:53 +0000 (14:44 +0900)]
vacuumdb: Skip temporary tables in query to build list of relations
Running vacuumdb with a non-superuser while another user has created a
temporary table would lead to a mid-flight permission failure,
interrupting the operation. vacuum_rel() skips temporary relations of
other backends, and it makes no sense for vacuumdb to know about these
relations, so let's switch it to ignore temporary relations entirely.
Adding a qual based on relpersistence to the query simplifies the
generation of its WHERE clause in vacuum_one_database(), allowing the
removal of "has_where".
For inplace update durability, make heap_update() callers wait.
The previous commit fixed some ways of losing an inplace update. It
remained possible to lose one when a backend working toward a
heap_update() copied a tuple into memory just before inplace update of
that tuple. In catalogs eligible for inplace update, use LOCKTAG_TUPLE
to govern admission to the steps of copying an old tuple, modifying it,
and issuing heap_update(). This includes MERGE commands. To avoid
changing most of the pg_class DDL, don't require LOCKTAG_TUPLE when
holding a relation lock sufficient to exclude inplace updaters.
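A hedged sketch of the admission protocol (LockTuple() and
InplaceUpdateTupleLock are real names; the placement is illustrative,
see README.tuplock for the authoritative description):

    /* Serialize against inplace updaters of this catalog tuple */
    LockTuple(relation, &otid, InplaceUpdateTupleLock);
    /* ... copy the old tuple, modify the copy, call heap_update() ... */
    UnlockTuple(relation, &otid, InplaceUpdateTupleLock);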
Back-patch to v12 (all supported versions). In v13 and v12, "UPDATE
pg_class" or "UPDATE pg_database" can still lose an inplace update. The
v14+ UPDATE fix needs commit 86dc90056dfdbd9d1b891718d2e5614e3e432f35,
and it wasn't worth reimplementing that fix without such infrastructure.
Reviewed by Nitin Motiani and (in earlier versions) Heikki Linnakangas.
Fix data loss at inplace update after heap_update().
As previously-added tests demonstrated, heap_inplace_update() could
instead update an unrelated tuple of the same catalog. It could lose
the update. Losing relhasindex=t was a source of index corruption.
Inplace-updating commands like VACUUM will now wait for heap_update()
commands like GRANT TABLE and GRANT DATABASE. That isn't ideal, but a
long-running GRANT already hurts VACUUM progress more just by keeping an
XID running. The VACUUM will behave like a DELETE or UPDATE waiting for
the uncommitted change.
For implementation details, start at the systable_inplace_update_begin()
header comment and README.tuplock. Back-patch to v12 (all supported
versions). In back branches, retain a deprecated heap_inplace_update(),
for extensions.
Reported by Smolkin Grigory. Reviewed by Nitin Motiani, (in earlier
versions) Heikki Linnakangas, and (in earlier versions) Alexander
Lakhin.
Warn if LOCKTAG_TUPLE is held at commit, under debug_assertions.
The current use always releases this locktag. A planned use will
continue that intent. It will involve more areas of code, making unlock
omissions easier. Warn under debug_assertions, like we do for various
resource leaks. Back-patch to v12 (all supported versions), the plan
for the commit of the new use.
Tom Lane [Mon, 29 Apr 2024 23:46:33 +0000 (19:46 -0400)]
Allow meson builds to run test_pg_dump test in installcheck mode.
This had been disabled because the test "doesn't delete its user".
It doesn't seem like a great idea for the meson tests to act
differently from the makefile tests, though, and the makefiles
had no such exception (which is how come only copperhead noticed
the problem just fixed in 534287403). In any case, the premise
is false since 936e3fa37, so let's remove the restriction.
Project policy is to not leave global objects behind after a regress
test run. This was found as a result of the development of a patch
to make pg_regress detect such leftovers automatically, which in the
end was withdrawn due to issues with parallel runs.
This was originally committed as 936e3fa3787a, but the issue also exists
in the 12~16 range.
Michael Paquier [Thu, 19 Sep 2024 07:25:11 +0000 (16:25 +0900)]
psql: Fix memory leak with repeated calls of \bind
Calling \bind repeatedly would cause the memory allocated for the list
of bind parameters to be leaked after each call, as the list was simply
reset, not freed, when beginning a new call.
This issue is fixed by making the cleanup of the bind parameter list
more aggressive, refactoring it into a single routine called after
processing a query and before running an individual \bind.
HEAD required more surgery and has been fixed by 87eeadaea143. Issue
introduced by 5b66de3433e2.
Don't enter parallel mode when holding interrupts.
Doing so caused the leader to hang in wait_event=ParallelFinish, which
required an immediate shutdown to resolve. Back-patch to v12 (all
supported versions).
Michael Paquier [Wed, 18 Sep 2024 00:59:19 +0000 (09:59 +0900)]
Add missing query ID reporting in extended query protocol
This commit adds query ID reports for two code paths when processing
extended query protocol messages:
- When receiving a bind message, setting it to the first Query retrieved
from a cached plan.
- When receiving an execute message, setting it to the first PlannedStmt
stored in a portal.
An advantage of this method is that it covers all the types of portals
handled in the extended query protocol, particularly these two cases
where the report done in ExecutorStart() is not enough (neither is an
addition in ExecutorRun(), actually, for the second point):
- Multiple execute messages, with multiple ExecutorRun().
- Portal with execute/fetch messages, like a query with a RETURNING
clause and a fetch size that stores the tuples in a first execute
message going through ExecutorStart() and ExecutorRun(), followed by one
or more execute messages doing only fetches from the tuplestore created
in the first message. This corresponds to the case where
execute_is_fetch is set, for example.
Note that the query ID reporting done in ExecutorStart() is still
necessary, as an EXECUTE requires it. Query ID reporting is optimistic
and more calls to pgstat_report_query_id() don't matter as the first
report takes priority except if the report is forced. The comment in
ExecutorStart() is adjusted to reflect better the reality with the
extended query protocol.
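A hedged sketch of the bind-message path (paraphrased; since reporting
is optimistic, a non-forced call after the first report is a no-op):

    if (query_list != NIL)
    {
        Query   *query = linitial_node(Query, query_list);

        pgstat_report_query_id(query->queryId, false);
    }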
The test added in pg_stat_statements is a courtesy of Robert Haas. This
uses psql's \bind metacommand, hence this part is backpatched down to
v16.
Reported-by: Kaido Vaikla, Erik Wienhold
Author: Sami Imseih
Reviewed-by: Jian He, Andrei Lepikhov, Michael Paquier
Discussion: https://postgr.es/m/CA+427g8DiW3aZ6pOpVgkPbqK97ouBdf18VLiHFesea2jUk3XoQ@mail.gmail.com
Discussion: https://postgr.es/m/CA+TgmoZxtnf_jZ=VqBSyaU8hfUkkwoJCJ6ufy4LGpXaunKrjrg@mail.gmail.com
Discussion: https://postgr.es/m/1391613709.939460.1684777418070@office.mailbox.org
Backpatch-through: 14
Tom Lane [Tue, 17 Sep 2024 19:53:26 +0000 (15:53 -0400)]
Repair pg_upgrade for identity sequences with non-default persistence.
Since we introduced unlogged sequences in v15, identity sequences
have defaulted to having the same persistence as their owning table.
However, it is possible to change that with ALTER SEQUENCE, and
pg_dump tries to preserve the logged-ness of sequences when it doesn't
match (as indeed it wouldn't for an unlogged table from before v15).
The fly in the ointment is that ALTER SEQUENCE SET [UN]LOGGED fails
in binary-upgrade mode, because it needs to assign a new relfilenode
which we cannot permit in that mode. Thus, trying to pg_upgrade a
database containing a mismatching identity sequence failed.
To fix, add syntax to ADD/ALTER COLUMN GENERATED AS IDENTITY to allow
the sequence's persistence to be set correctly at creation, and use
that instead of ALTER SEQUENCE SET [UN]LOGGED in pg_dump. (I tried to
make SET [UN]LOGGED work without any pg_dump modifications, but that
seems too fragile to be a desirable answer. This way should be
markedly faster anyhow.)
In passing, document the previously-undocumented SEQUENCE NAME option
that pg_dump also relies on for identity sequences; I see no value
in trying to pretend it doesn't exist.
Per bug #18618 from Anthony Hsu.
Back-patch to v15 where we invented this stuff.
Tom Lane [Sun, 15 Sep 2024 17:33:09 +0000 (13:33 -0400)]
Replace usages of xmlXPathCompile() with xmlXPathCtxtCompile().
In existing releases of libxml2, xmlXPathCompile can be driven
to stack overflow because it fails to protect itself against
too-deeply-nested input. While there is an upstream fix as of
yesterday, it will take years for that to propagate into all
shipping versions. In the meantime, we can protect our own
usages basically for free by calling xmlXPathCtxtCompile instead.
(The actual bug is that libxml2 keeps its nesting counter in the
xmlXPathContext, and its parsing code was willing to just skip
counting nesting levels if it didn't have a context. So if we supply
a context, all is well. It seems odd actually that it works at all
to not supply a context, because this means that XPath parsing does
not have access to XML namespace info. Apparently libxml2 never
checks namespaces until runtime? Anyway, this seems like good
future-proofing even if its only immediate effect is to dodge a bug.)
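The substitution itself is a one-liner; a hedged sketch using libxml2's
real signatures (variable names illustrative):

    /* Before: no context, so libxml2 skipped nesting-depth accounting */
    xpathcomp = xmlXPathCompile(xpath_expr);

    /* After: compiling within the context enables the depth checks */
    xpathcomp = xmlXPathCtxtCompile(xpathctx, xpath_expr);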
Sadly, this hack only offers protection with libxml2 2.9.11 and newer.
Before that there are multiple similar problems, so if you are
processing untrusted XML it behooves you to get a newer version.
But we have some pretty old libxml2 in the buildfarm, so it seems
impractical to add a regression test to verify this fix.
Per bug #18617 from Jingzhou Fu. Back-patch to all supported
versions.
Tom Lane [Sat, 14 Sep 2024 21:55:03 +0000 (17:55 -0400)]
Run regression tests with timezone America/Los_Angeles.
Historically we've used timezone "PST8PDT", but the recent release
2024b of tzdb changes the definition of that zone in a way that
breaks many test cases concerned with dates before 1970. Although
we've not yet adopted 2024b into our own tree, this is already
problematic for people using --with-system-tzdata if their platform
has already adopted 2024b. To work with both older and newer
versions of tzdb, switch to using "America/Los_Angeles", accepting
the ensuing changes in regression test results.
Andrew Dunstan [Sat, 14 Sep 2024 14:26:25 +0000 (10:26 -0400)]
Improve meson's detection of perl build flags
The current method of detecting perl build flags breaks if the path to
perl contains a space. This change makes two improvements. First,
instead of getting a list of ldflags and ccdlflags and then trying to
filter those out of the reported ldopts, we tell perl to suppress
reporting those in the first instance. Second, it tells perl to parse
those and output them, one per line. Thus any space in an option, such
as in a file name, is preserved.
Andrew Dunstan [Sat, 14 Sep 2024 12:37:08 +0000 (08:37 -0400)]
Only define NO_THREAD_SAFE_LOCALE for MSVC plperl when required
Latest versions of Strawberry Perl define USE_THREAD_SAFE_LOCALE, and we
therefore get a handshake error when building against such instances.
The solution is to perform a test to see if USE_THREAD_SAFE_LOCALE is
defined and only define NO_THREAD_SAFE_LOCALE if it isn't.
Backpatch the meson.build fix back to release 16 and apply the same
logic to Mkvcbuild.pm in releases 12 through 16.
Original report of the issue from Muralikrishna Bandaru.
Tom Lane [Fri, 13 Sep 2024 20:16:47 +0000 (16:16 -0400)]
Allow _h_indexbuild() to be interrupted.
When we are building a hash index that is large enough to need
pre-sorting (larger than either maintenance_work_mem or NBuffers),
the initial sorting phase is interruptible, but the insertion
phase wasn't. Add the missing CHECK_FOR_INTERRUPTS().
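A hedged sketch of the insertion loop (paraphrased around the real
tuplesort API):

    IndexTuple  itup;

    while ((itup = tuplesort_getindextuple(sortstate, true)) != NULL)
    {
        /* allow query cancel / fast shutdown between insertions */
        CHECK_FOR_INTERRUPTS();
        /* ... insert itup into the hash index ... */
    }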
Per bug #18616 from Alexander Lakhin. Back-patch to all
supported branches.
I managed to break this test in two different ways in commit 05036a3155.
First, the output of the new call to tuple_data_split() on the test
sequence is dependent on endianness. This is fixed by setting a
special start value for the test sequence that produces the same
output regardless of the endianness of the machine.
Second, on versions older than v15, the new test case fails under
"force_parallel_mode = regress" with the following error:
ERROR: cannot access temporary tables during a parallel operation
This is because pageinspect's disk-accessing functions are
incorrectly marked PARALLEL SAFE on versions older than v15 (see
commit aeaaf520f4 for details). This one is fixed by changing the
test sequence to be permanent. The only reason it was previously
marked temporary was to avoid needing a DROP SEQUENCE command at
the end of the test. Unlike some other tests in this file, the use
of a permanent sequence here shouldn't result in any test
instability like what was fixed by commit e2933a6e11.
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/ZuOKOut5hhDlf_bP%40nathan
Backpatch-through: 12
Reintroduce support for sequences in pgstattuple and pageinspect.
Commit 4b82664156 restricted a number of functions provided by
contrib modules to only relations that use the "heap" table access
method. Sequences always use this table access method, but they do
not advertise as such in the pg_class system catalog, so the
aforementioned commit also (presumably unintentionally) removed
support for sequences from some of these functions. This commit
reintroduces said support for sequences to these functions and adds
a couple of relevant tests.
Co-authored-by: Ayush Vatsa
Reviewed-by: Robert Haas, Michael Paquier, Matthias van de Meent
Discussion: https://postgr.es/m/CACX%2BKaP3i%2Bi9tdPLjF5JCHVv93xobEdcd_eB%2B638VDvZ3i%3DcQA%40mail.gmail.com
Backpatch-through: 12
David Rowley [Thu, 12 Sep 2024 10:37:54 +0000 (22:37 +1200)]
Doc: alphabetize aggregate function table
A few recent JSON aggregates have been added without much consideration
for the existing order. Put these back in alphabetical order (with the
exception of the JSONB variant of each JSON aggregate).
Author: Wolfgang Walther <walther@technowledgy.de>
Reviewed-by: Marlene Reiterer <marlene.reiterer.03@gmail.com>
Discussion: https://postgr.es/m/6a7b910c-3feb-4006-b817-9b4759cb6bb6%40technowledgy.de
Backpatch-through: 16, where these aggregates were added
Tom Lane [Wed, 11 Sep 2024 15:41:47 +0000 (11:41 -0400)]
Remove incorrect Assert.
check_agglevels_and_constraints() asserted that if we find an
aggregate function in an EXPR_KIND_FROM_SUBSELECT expression, the
expression must be in a LATERAL subquery. Alexander Lakhin found a
case where that's not so: because of the odd scoping rules for NEW/OLD
within a rule, a reference to NEW/OLD could cause an aggregate to be
considered top-level even though it's in an unmarked sub-select.
The error message that would be thrown seems sufficiently on-point,
so just remove the Assert. (Hence, this is not a bug for production
builds.)
This Assert was added by me in commit eaccfded9 (9.3 era). It looks
like I put it in to cross-check that the new logic for detecting
misplaced aggregates (using agglevelsup) caught the same cases that a
previous check on p_lateral_active did. So there might have been some
related misbehavior before eaccfded9 ... but that's very ancient
history by now, so I didn't dig any deeper.
Per bug #18608 from Alexander Lakhin. Back-patch to all supported
branches.
Tomas Vondra [Wed, 11 Sep 2024 11:21:05 +0000 (13:21 +0200)]
Fix unique key checks in JSON object constructors
When building a JSON object, the code builds a hash table of keys, to
allow checking if the keys are unique. The uniqueness check and adding
the new key happens in json_unique_check_key(), but this assumes the
pointer to the key remains valid.
Unfortunately, two places passed pointers to keys in a buffer, while
also appending more data (additional key/value pairs) to the buffer.
With enough data the buffer is resized by enlargeStringInfo(), which
calls repalloc(), invalidating the earlier key pointers.
Due to this the uniqueness check may fail with both false negatives and
false positives, producing JSON objects with duplicate keys or failing
to produce a perfectly valid JSON object.
This affects multiple functions that enforce uniqueness of keys, all
introduced in PG16 with the new SQL/JSON support.
Existing regression tests did not detect the issue, simply because the
initial buffer size is 1024 and the objects were small enough not to
require the repalloc.
With a sufficiently large object, AddressSanitizer reported the access
to invalid memory immediately. So would valgrind, of course.
Fixed by copying the key into the hash table memory context, and adding
regression tests with enough data to repalloc the buffer. Backpatch to
16, where the functions were introduced.
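A hedged sketch of the fix (json_unique_check_key is named in the text;
its argument list is paraphrased here):

    /* Copy the key out of the resizable output buffer into the hash
     * table's memory context, so a later repalloc() of the buffer
     * cannot invalidate the stored pointer. */
    key = MemoryContextStrdup(hash_mcxt, key);
    json_unique_check_key(&state->unique_check, key, object_index);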
Reported by Alexander Lakhin. Investigation and initial fix by Junwang
Zhao, with various improvements and tests by me.
Reported-by: Alexander Lakhin
Author: Junwang Zhao, Tomas Vondra
Backpatch-through: 16
Discussion: https://postgr.es/m/18598-3279ed972a2347c7@postgresql.org
Discussion: https://postgr.es/m/CAEG8a3JjH0ReJF2_O7-8LuEbO69BxPhYeXs95_x7+H9AMWF1gw@mail.gmail.com
Tom Lane [Tue, 10 Sep 2024 20:20:31 +0000 (16:20 -0400)]
Fix some whitespace issues in XMLSERIALIZE(... INDENT).
We must drop whitespace while parsing the input, else libxml2
will include "blank" nodes that interfere with the desired
indentation behavior. The end result is that we didn't indent
nodes separated by whitespace.
Also, it seems that libxml2 may add a trailing newline when working
in DOCUMENT mode. This is semantically insignificant, so strip it.
This is in the gray area between being a bug fix and a definition
change. However, the INDENT option is still pretty new (since v16),
so I think we can get away with changing this in stable branches.
Hence, back-patch to v16.
Michael Paquier [Mon, 9 Sep 2024 04:50:02 +0000 (13:50 +0900)]
Fix waits of REINDEX CONCURRENTLY for indexes with predicates or expressions
As introduced by f9900df5f94, a REINDEX CONCURRENTLY job done for an
index with predicates or expressions would set PROC_IN_SAFE_IC in its
MyProc->statusFlags, causing it to be ignored by other concurrent
operations.
Such concurrent index rebuilds should never be ignored, as a predicate
or an expression could call a user-defined function that accesses a
different table than the table where the index is rebuilt.
A test that uses injection points is added, backpatched down to 17.
Michail has proposed a different test, but I have added something
simpler with more coverage.
Tom Lane [Fri, 6 Sep 2024 15:57:57 +0000 (11:57 -0400)]
Fix incorrect pg_stat_io output on 32-bit machines.
pg_stat_get_io() applied TimestampTzGetDatum twice to the
stat_reset_timestamp value. On 64-bit builds that's harmless because
TimestampTzGetDatum is a no-op, but on 32-bit builds it results in
displaying garbage in the stats_reset column of the pg_stat_io view.
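A hedged sketch of the bug pattern (the column index is hypothetical):

    /* Before: the Datum conversion applied twice; on 32-bit builds the
     * second call wraps a pointer, so the column displayed garbage */
    values[RESET_COL] = TimestampTzGetDatum(TimestampTzGetDatum(reset_time));

    /* After: convert exactly once */
    values[RESET_COL] = TimestampTzGetDatum(reset_time);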
Bug dates to commit a9c70b46d which introduced pg_stat_io, so
back-patch to v16 where that came in.
Tom Lane [Thu, 5 Sep 2024 16:42:33 +0000 (12:42 -0400)]
Prevent mis-encoding of "trailing junk after numeric literal" errors.
Since commit 2549f0661, we reject an identifier immediately following
a numeric literal (without separating whitespace), because that risks
ambiguity with hex/octal/binary integers. However, that patch used
token patterns like "{integer}{ident_start}", which is problematic
because {ident_start} matches only a single byte. If the first
character after the integer is a multibyte character, this ends up
with flex reporting an error message that includes a partial multibyte
character. That can cause assorted bad-encoding problems downstream,
both in the report to the client and in the postmaster log file.
To fix, use {identifier} not {ident_start} in the "junk" token
patterns, so that they will match complete multibyte characters.
This seems generally better user experience quite aside from the
encoding problem: for "123abc" the error message will now say that
the error appeared at or near "123abc" instead of "123a".
While at it, add some commentary about why these patterns exist
and how they work.
Report and patch by Karina Litskevich; review by Pavel Borisov.
Back-patch to v15 where the problem came in.
Thomas Munro [Sat, 31 Aug 2024 02:32:08 +0000 (14:32 +1200)]
Stabilize 039_end_of_wal test.
The first test was sensitive to the insert LSN after setting up the
catalogs, which depended on environmental things like the locales on the
OS and usernames. Switch to a new WAL file before the first test, as a
simple way to put every computer into the same state.
Back-patch to all supported releases.
Reported-by: Anton Voloshin <a.voloshin@postgrespro.ru>
Reported-by: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://postgr.es/m/b26aeac2-cb6d-4633-a7ea-945baae83dcf%40postgrespro.ru
Tom Lane [Fri, 30 Aug 2024 16:42:13 +0000 (12:42 -0400)]
Avoid inserting PlaceHolderVars in cases where pre-v16 PG did not.
Commit 2489d76c4 removed some logic from pullup_replace_vars()
that avoided wrapping a PlaceHolderVar around a pulled-up
subquery output expression if the expression could be proven
to go to NULL anyway (because it contained Vars or PHVs of the
pulled-up relation and did not contain non-strict constructs).
But removing that logic turns out to cause performance regressions
in some cases, because the extra PHV blocks subexpression folding,
and will do so even if outer-join reduction later turns it into a
no-op with no phnullingrels bits. This can for example prevent
an expression from being matched to an index.
The reason for always adding a PHV was to ensure we had someplace
to put the varnullingrels marker bits of the Var being replaced.
However, it turns out we can optimize in exactly the same cases that
the previous code did, because we can instead attach the needed
varnullingrels bits to the contained Var(s)/PHV(s).
This is not a complete solution --- it would be even better if we
could remove PHVs after reducing them to no-ops. It doesn't look
practical to back-patch such an improvement, but this change seems
safe and at least gets rid of the performance-regression cases.
Per complaint from Nikhil Raj. Back-patch to v16 where the
problem appeared.
Tom Lane [Thu, 29 Aug 2024 17:24:17 +0000 (13:24 -0400)]
Fix mis-deparsing of ORDER BY lists when there is a name conflict.
If an ORDER BY item in SELECT is a bare identifier, the parser
first seeks it as an output column name of the SELECT (for SQL92
compatibility). However, ruleutils.c is expecting the SQL99
interpretation where such a name is an input column name. So it's
possible to produce an incorrect display of a view in the (admittedly
pretty ill-advised) case where some other column is renamed in the
SELECT output list to match an ORDER BY column.
This can be fixed by table-qualifying such names in the dumped
view text. To avoid cluttering less-ill-advised queries, we'd
like to do so only when there's an actual name conflict.
That requires passing the current get_query_def call's resultDesc
parameter down to get_variable, so that it can determine what
the output column names are. In hopes of reducing rather than
increasing notational clutter in ruleutils.c, I moved that value
into the deparse_context struct and removed it from the parameter
lists of get_query_def's other subroutines.
I made a few other cosmetic changes while at it:
* Likewise move the colNamesVisible parameter into deparse_context.
* Rename deparse_context's windowTList field to targetList,
since it's no longer used only in connection with WINDOW clauses.
* Replace the special_exprkind field with a bool inGroupBy,
since that was all it was being used for, and the apparent
flexibility of storing a ParseExprKind proved to be illusory.
(We need a separate varInOrderBy field to make this patch work.)
* Remove useless save/restore logic in get_select_query_def.
In principle, this bug is quite old. However, it seems unreachable
before 1b4d280ea, because before that the presence of "new" and "old"
entries in a view's rangetable caused us to always table-qualify every
Var reference in dumped views. Hence, back-patch to v16 where that
came in.
Peter Eisentraut [Thu, 29 Aug 2024 06:38:29 +0000 (08:38 +0200)]
Disallow USING clause when altering type of generated column
This does not make sense. It would write the output of the USING
clause into the converted column, which would violate the generation
expression. This adds a check to error out if this is specified.
There was a test for this, but that test errored out for a different
reason, so it was not effective.
Reported-by: Jian He <jian.universality@gmail.com>
Reviewed-by: Yugo NAGATA <nagata@sraoss.co.jp>
Discussion: https://www.postgresql.org/message-id/flat/c7083982-69f4-4b14-8315-f9ddb20b9834%40eisentraut.org
Amit Kapila [Wed, 21 Aug 2024 03:31:11 +0000 (09:01 +0530)]
Don't advance origin during apply failure.
We advance origin progress during abort on successful streaming and
application of ROLLBACK in parallel streaming mode. But the origin
shouldn't be advanced during an error or unsuccessful apply due to
shutdown. Otherwise, it will result in a transaction loss as such a
transaction won't be sent again by the server.
Reported-by: Hou Zhijie
Author: Hayato Kuroda and Shveta Malik
Reviewed-by: Amit Kapila
Backpatch-through: 16
Discussion: https://postgr.es/m/TYAPR01MB5692FAC23BE40C69DA8ED4AFF5B92@TYAPR01MB5692.jpnprd01.prod.outlook.com
Nathan Bossart [Tue, 20 Aug 2024 18:43:20 +0000 (13:43 -0500)]
Fix a couple of wait event descriptions.
The descriptions for ProcArrayGroupUpdate and XactGroupUpdate claim
that these events mean we are waiting for the group leader "at end
of a parallel operation," but neither pertains to parallel
operations. This commit reverts these descriptions to their
wording before commit 3048898e73, i.e., "end of a parallel
operation" is changed to "transaction end."
Author: Sameer Kumar
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/CAGPeHmh6UMrKQHKCmX%2B5vV5TH9P%3DKw9en3k68qEem6J%3DyrZPUA%40mail.gmail.com
Backpatch-through: 13
Alvaro Herrera [Mon, 19 Aug 2024 20:09:10 +0000 (16:09 -0400)]
Avoid failure to open dropped detached partition
When a partition is detached and immediately dropped, a prepared
statement could try to compute a new partition descriptor that includes
it. This leads to this kind of error:
ERROR: could not open relation with OID 457639
Avoid this by skipping the partition in expand_partitioned_rtentry if it
doesn't exist.
Noted by me while investigating bug #18559. Kuntal Ghosh helped to
identify the exact failure.
Backpatch to 14, where DETACH CONCURRENTLY was introduced.
Tomas Vondra [Mon, 19 Aug 2024 11:31:51 +0000 (13:31 +0200)]
Explain dropdb can't use syscache because of TOAST
Add a comment explaining that dropdb() can't rely on syscache. The
issue with flattened rows was fixed by commit 0f92b230f88b, but it's
better to have a clear explanation of why the systable scan is
necessary. The other places
doing in-place updates on pg_database have the same comment.
Suggestion and patch by Yugo Nagata. Backpatch to 12, same as the fix.
Commit 274bbced disabled session tickets for TLSv1.3 on top of the
already disabled TLSv1.2 session tickets, but accidentally caused
a regression where TLSv1.2 session tickets were incorrectly sent.
Fix by unconditionally disabling TLSv1.2 session tickets and only
disable TLSv1.3 tickets when the right version of OpenSSL is used.
Backpatch to all supported branches.
Reported-by: Cameron Vogt <cvogt@automaticcontrols.net>
Reported-by: Fire Emerald <fire.github@gmail.com>
Reviewed-by: Jacob Champion <jacob.champion@enterprisedb.com>
Discussion: https://postgr.es/m/DM6PR16MB3145CF62857226F350C710D1AB852@DM6PR16MB3145.namprd16.prod.outlook.com
Backpatch-through: v12
Thomas Munro [Mon, 19 Aug 2024 09:21:03 +0000 (21:21 +1200)]
Fix harmless LC_COLLATE[_MASK] confusion.
Commit ca051d8b101 called newlocale(LC_COLLATE, ...) instead of
newlocale(LC_COLLATE_MASK, ...), in code reached only on FreeBSD. They
have the same value on that OS, explaining why it worked. Fix.
Thomas Munro [Sun, 18 Aug 2024 23:47:37 +0000 (11:47 +1200)]
ci: Upgrade MacPorts version to 2.10.1.
MacPorts version 2.9.3 started failing in our ci_macports_packages.sh
script, for reasons not fully determined, but plausibly linked to the
release of 2.10.1. 2.10.1 seems to work, so let's switch to it.
Back-patch to 15, where CI began.
Reported-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/81f104e8-f0a9-43c0-85bd-2bbbf590a5b8%40eisentraut.org
Tomas Vondra [Sun, 18 Aug 2024 22:04:41 +0000 (00:04 +0200)]
Fix DROP DATABASE for databases with many ACLs
Commit c66a7d75e652 modified DROP DATABASE so that if interrupted, the
database is known to be in an invalid state and can only be dropped.
This is done by setting a flag using an in-place update, so that it's
not lost in case of rollback.
For databases with many ACLs, this may however fail like this:
ERROR: wrong tuple length
This happens because with many ACLs, the pg_database.datacl attribute
gets TOASTed. The dropdb() code reads the tuple from the syscache, which
means it's detoasted. But the in-place update expects the tuple length
to match the on-disk tuple.
Fixed by reading the tuple from the catalog directly, not from syscache.
Report and fix by Ayush Tiwari. Backpatch to 12. The DROP DATABASE fix
was backpatched to 11, but 11 is EOL at this point.
Alvaro Herrera [Mon, 12 Aug 2024 22:17:56 +0000 (18:17 -0400)]
Fix creation of partition descriptor during concurrent detach+drop
If a partition undergoes DETACH CONCURRENTLY immediately followed by
DROP, this could cause a problem for a concurrent transaction
recomputing the partition descriptor when running a prepared statement,
because it tries to dereference a pointer to a tuple that's not found in
a catalog scan.
The existing retry logic added in commit dbca3469ebf8 is sufficient to
cope with the overall problem, provided we don't try to dereference a
non-existent heap tuple.
Arguably, the code in RelationBuildPartitionDesc() has been wrong all
along, since no check was added in commit 898e5e3290a7 against receiving
a NULL tuple from the catalog scan; that bug has only become
user-visible with DETACH CONCURRENTLY which was added in branch 14.
Therefore, even though there's no known mechanism to cause a crash
because of this, backpatch the addition of such a check to all supported
branches. In branches prior to 14, this would cause the code to fail
with a "missing relpartbound for relation XYZ" error instead of
crashing; that's okay, because there are no reports of such behavior
anyway.
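A hedged sketch of the added defensive check in
RelationBuildPartitionDesc() (paraphrased; the real code's lookup and
retry details differ):

    tuple = SearchSysCache1(RELOID, ObjectIdGetDatum(partrelid));
    if (!HeapTupleIsValid(tuple))
        continue;       /* concurrently detached and dropped: skip it */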
Tom Lane [Sun, 11 Aug 2024 16:24:56 +0000 (12:24 -0400)]
Suppress Coverity warnings about Asserts in get_name_for_var_field.
Coverity thinks dpns->plan could be null at these points. That
shouldn't really be possible, but it's easy enough to modify the
Asserts so they'd not core-dump if it were true.
These are new in b919a97a6. Back-patch to v13; the v12 version
of the patch didn't have these Asserts.
Tom Lane [Sat, 10 Aug 2024 19:51:28 +0000 (15:51 -0400)]
Allow adjusting session_authorization and role in parallel workers.
The code intends to allow GUCs to be set within parallel workers
via function SET clauses, but not otherwise. However, doing so fails
for "session_authorization" and "role", because the assign hooks for
those attempt to set the subsidiary "is_superuser" GUC, and that call
falls foul of the "not otherwise" prohibition. We can't switch to
using GUC_ACTION_SAVE for this, so instead add a new GUC variable
flag GUC_ALLOW_IN_PARALLEL to mark is_superuser as being safe to set
anyway. (This is okay because is_superuser has context PGC_INTERNAL
and thus only hard-wired calls can change it. We'd need more thought
before applying the flag to other GUCs; but maybe there are other
use-cases.) This isn't the prettiest fix perhaps, but other
alternatives we thought of would be much more invasive.
While here, correct a thinko in commit 059de3ca4: when rejecting
a GUC setting within a parallel worker, we should return 0 not -1
if the ereport doesn't longjmp. (This seems to have no consequences
right now because no caller cares, but it's inconsistent.) Improve
the comments to try to forestall future confusion of the same kind.
Despite the lack of field complaints, this seems worth back-patching.
Thanks to Nathan Bossart for the idea to invent a new flag,
and for review.
Tom Lane [Fri, 9 Aug 2024 15:21:39 +0000 (11:21 -0400)]
Fix "failed to find plan for subquery/CTE" errors in EXPLAIN.
To deparse a reference to a field of a RECORD-type output of a
subquery, EXPLAIN normally digs down into the subquery's plan to try
to discover exactly which anonymous RECORD type is meant. However,
this can fail if the subquery has been optimized out of the plan
altogether on the grounds that no rows could pass the WHERE quals,
which has been possible at least since 3fc6e2d7f. There isn't
anything remaining in the plan tree that would help us, so fall back
to printing the field name as "fN" for the N'th column of the record.
(This will actually be the right thing some of the time, since it
matches the column names we assign to RowExprs.)
In passing, fix a comment typo in create_projection_plan, which
I noticed while experimenting with an alternative fix for this.
Per bug #18576 from Vasya B. Back-patch to all supported branches.
Alvaro Herrera [Thu, 8 Aug 2024 23:35:13 +0000 (19:35 -0400)]
Refuse ATTACH of a table referenced by a foreign key
Trying to attach a table as a partition which is already on the
referenced side of a foreign key on the partitioned table that it is
being attached to, leads to strange behavior: we try to clone the
foreign key from the parent to the partition, but this new FK points to
the partition itself, and the mix of pg_constraint rows and triggers
doesn't behave well.
Rather than trying to untangle the mess (which might be possible given
sufficient time), I opted to forbid the ATTACH. This doesn't seem a
problematic restriction, given that we already fail to create the
foreign key if you do it the other way around, that is, having the
partition first and the FK second.
Backpatch to all supported branches.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Reviewed-by: Tender Wang <tndrwang@gmail.com>
Discussion: https://postgr.es/m/18541-628a61bc267cd2d3@postgresql.org
Fix pg_rewind debug output to print the source timeline history
getTimelineHistory() is called twice, to read the source and the
target timeline history files. However, the loop to print the file
with the --debug option used the wrong variable when dealing with the
source. As a result, the source's history was always printed as empty.
Spotted while debugging bug #18575, but this does not fix that bug,
just the debugging output. Backpatch to all supported versions.
Commit 0b9466fce added a dependency on fe_memutils' pnstrdup() inside
informix.c. This adds an exit() path in a library, which we don't
want. (Unlike libpq, the ecpg libraries don't have an automated check
for that, but it makes sense to keep them to a similar standard.) The
ecpg code can already handle failure results from the *strdup() call
by itself.
Author: Jacob Champion <jacob.champion@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/CAOYmi+=pg=W5L1h=3MEP_EB24jaBu2FyATrLXqQHGe7cpuvwyg@mail.gmail.com
Tom Lane [Wed, 7 Aug 2024 16:54:39 +0000 (12:54 -0400)]
Fix edge case in plpgsql's make_callstmt_target().
If the plancache entry for the CALL statement is already stale,
it's possible for us to fetch an old procedure OID out of it,
and then fail with "cache lookup failed for function NNN".
In ordinary usage this never happens because make_callstmt_target
is called just once immediately after building the plancache
entry. It can be forced however by setting up an erroneous CALL
(that causes make_callstmt_target itself to report an error),
then dropping/recreating the target procedure, then repeating
the erroneous CALL.
To fix, use SPI_plan_get_cached_plan() to fetch the plancache's
plan, rather than assuming we can use SPI_plan_get_plan_sources().
This shouldn't add any noticeable overhead in the normal case,
and in the stale-plan case we'd have had to replan anyway a little
further down.
The other callers of SPI_plan_get_plan_sources() seem OK, because
either they don't need up-to-date plans or they know that the
query was just (re) planned. But add some commentary in hopes
of not falling into this trap again.
Per bug #18574 from Song Hongyu. Back-patch to v14 where this coding
was introduced. (Older branches have comparable code, but it's run
after any required replanning, so there's no issue.)
Make fallback MD5 implementation thread-safe on big-endian systems
Replace a static scratch buffer with a local variable, because a
static buffer makes the function not thread-safe. This function is
used in client-code in libpq, so it needs to be thread-safe. It was
until commit b67b57a966, which replaced the implementation with the
one from pgcrypto.
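A hedged illustration of the class of change (names and length are
illustrative, not the actual md5.c identifiers):

    /* Before: one shared scratch buffer for every caller and thread */
    static uint8 scratch[16];

    /* After: automatic storage, private to each call */
    uint8   scratch[16];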
Backpatch to v14, where we switched to the new implementation.
Reviewed-by: Robert Haas, Michael Paquier
Discussion: https://www.postgresql.org/message-id/dfa2015d-ad21-4802-a4cc-3850fc5fff3f@iki.fi
Masahiko Sawada [Mon, 5 Aug 2024 13:05:28 +0000 (06:05 -0700)]
Restrict accesses to non-system views and foreign tables during pg_dump.
When pg_dump retrieves the list of database objects and performs the
data dump, there was a possibility that objects could be replaced with
others of the same name, such as views, and then accessed. This vulnerability
could result in code execution with superuser privileges during the
pg_dump process.
This issue can arise when dumping data of sequences, foreign
tables (only 13 or later), or tables registered with a WHERE clause in
the extension configuration table.
To address this, pg_dump now utilizes the newly introduced
restrict_nonsystem_relation_kind GUC parameter to restrict the
accesses to non-system views and foreign tables during the dump
process. This new GUC parameter is added to back branches too, but
these changes do not require cluster recreation.
Thomas Munro [Sun, 12 May 2024 19:55:20 +0000 (07:55 +1200)]
Skip citext_utf8 test on Windows.
Back-patch of commit cff4e5a3 to 15 and 16, per request from Oleg
Tselebrovskiy. Original commit message:
On other Windows build farm animals it is already skipped because they
don't use UTF-8 encoding. On "hamerkop", UTF-8 is used, and then the
test fails.
It is not clear to me (a non-Windows person looking only at buildfarm
evidence) whether Windows is less sophisticated than other OSes and
doesn't know how to downcase Turkish İ with the standard Unicode
database, or if it is more sophisticated than other systems and uses
locale-specific behavior like ICU does.
Whichever the reason, the result is the same: we need to skip the test
on Windows, just as we already do for ICU, at least until a
Windows-savvy developer comes up with a better idea. The technique for
detecting the OS is borrowed from collate.windows.win1252.sql.
This was anticipated by commit c2e8bd27, but the problem only surfaced
when Windows build farm animals started using Meson.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CA%2BhUKGJ1LeC3aE2qQYTK95rFVON3ZVoTQpTKJqxkHdtEyawH4A%40mail.gmail.com
Some buildfarm animals are failing with "cannot change
"client_encoding" during a parallel operation". It looks like
assign_client_encoding is unhappy at being asked to roll back a
client_encoding setting after a parallel worker encounters a
failure. There must be more to it though: why didn't I see this
during local testing? In any case, it's clear that moving the
RestoreGUCState() call is not as side-effect-free as I thought.
Given that the bug f5f30c22e intended to fix has gone unreported
for years, it's not something that's urgent to fix; I'm not
willing to risk messing with it further with only days to our
next release wrap.
Tom Lane [Wed, 31 Jul 2024 22:54:10 +0000 (18:54 -0400)]
Allow parallel workers to cope with a newly-created session user ID.
Parallel workers failed after a sequence like
BEGIN;
CREATE USER foo;
SET SESSION AUTHORIZATION foo;
because check_session_authorization could not see the uncommitted
pg_authid row for "foo". This is because we ran RestoreGUCState()
in a separate transaction using an ordinary just-created snapshot.
The same disease afflicts any other GUC that requires catalog lookups
and isn't forgiving about the lookups failing.
To fix, postpone RestoreGUCState() into the worker's main transaction
after we've set up a snapshot duplicating the leader's. This affects
check_transaction_isolation and check_transaction_deferrable, which
think they should only run during transaction start. Make them
act like check_transaction_read_only, which already knows it should
silently accept the value when InitializingParallelWorker.
Per bug #18545 from Andrey Rachitskiy. Back-patch to all
supported branches, because this has been wrong for awhile.
David Rowley [Wed, 31 Jul 2024 13:26:55 +0000 (01:26 +1200)]
Doc: mention executor memory usage for enable_partitionwise* GUCs
Prior to this commit, the docs for enable_partitionwise_aggregate and
enable_partitionwise_join mentioned the additional overheads enabling
these causes for the query planner, but they mentioned nothing about the
possible surge in work_mem-consuming executor nodes that could end up in
the final plan. Dimitrios reported the OOM killer intervened on his
query as a result of using enable_partitionwise_aggregate=on.
Here we adjust the docs to mention the possible increase in the number of
work_mem-consuming executor nodes that can appear in the final plan as a
result of enabling these GUCs.
01e2b7f0fd02a44e introduced a test verifying that vacuum correctly
removes tuples older than OldestXmin. The same commit was backpatched to
14-16, but 16 is the only version with meson, and the test was
mistakenly left out of the recovery test meson build file. Add it now.
Use DELETE instead of UPDATE to speed up vacuum test
01e2b7f0fd02a44e introduced a test which generated dead tuples for
vacuum with an UPDATE. The test only required enough dead TIDs for two
rounds of index vacuuming. This can be accomplished with a DELETE
instead of an UPDATE -- which generates about 50% less WAL and makes the
test 20% faster in many cases. The test takes several seconds (more on
slow buildfarm animals) because we need quite a few tuples to trigger
two rounds of index vacuuming; so it is worth a follow-on commit to
speed it up.
Suggested-by: Masahiko Sawada
Discussion: https://postgr.es/m/CAAKRu_bWmMjmqL%2BOZ2duEQ80u7cRvpsExLNZNjzk-pXX5skwMQ%40mail.gmail.com
Backpatch-through: 14, the first version containing this test.
David Rowley [Sun, 28 Jul 2024 10:23:54 +0000 (22:23 +1200)]
Fix incorrect return value for pg_size_pretty(bigint)
pg_size_pretty(bigint) would return the value in bytes rather than PB
for the smallest possible bigint value. This happened due to an incorrect
assumption that the absolute value of -9223372036854775808 could be
stored inside a signed 64-bit type.
Here we fix that by instead storing that value in an unsigned 64-bit type.
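A hedged sketch of the technique: negate through unsigned arithmetic,
where wraparound is well defined, rather than negating INT64_MIN as a
signed value:

    int64   size = PG_INT64_MIN;        /* -9223372036854775808 */
    uint64  usize;

    usize = (size < 0) ? (uint64) 0 - (uint64) size : (uint64) size;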
This bug does exist in versions prior to 15 but the code there is
sufficiently different and the bug seems sufficiently non-critical that
it does not seem worth risking backpatching further.
Author: Joseph Koshakow <koshy44@gmail.com>
Discussion: https://postgr.es/m/CAAvxfHdTsMZPWEHUrZ=h3cky9Ccc3Mtx2whUHygY+ABP-mCmUw@mail.gmail.com
Backpatch-through: 15
Peter Eisentraut [Sun, 28 Jul 2024 07:12:00 +0000 (09:12 +0200)]
libpq: Use strerror_r instead of strerror
Commit 453c4687377 introduced a use of strerror() into libpq, but that
is not thread-safe. Fix by using strerror_r() instead.
In passing, update some of the code comments added by 453c4687377, as
we have learned more about the reason for the change in OpenSSL that
started this.
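A hedged sketch of the substitution (the POSIX strerror_r is shown; the
GNU variant returns char * instead of int):

    char    errbuf[256];

    /* strerror() may return a pointer to shared static storage;
     * strerror_r() fills a caller-supplied buffer instead */
    strerror_r(errno, errbuf, sizeof(errbuf));
    /* ... report errbuf ... */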
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/b6fb018b-f05c-4afd-abd3-318c649faf18@highgo.ca
Support falling back to non-preferred readline implementation with meson
To build with -Dreadline=enabled one can use either readline or
libedit. The -Dlibedit_preferred flag is supposed to control the order
of names to lookup. This works fine when either both libraries are
present or -Dreadline is set to auto. However, explicitly enabling
readline with only libedit present, but not setting libedit_preferred,
or alternatively enabling readline with only readline present, but
setting libedit_preferred, too, are both broken. This is because
cc.find_library will throw an error for a dependency that is not found
as soon as the first required dependency is checked, making it
impossible to fall back to the alternative.
Here we only check the second of the two dependencies for
requiredness, thus we only fail when none of the two can be found.
Author: Wolfgang Walther
Reviewed-by: Nazir Bilal Yavuz, Alvaro Herrera, Peter Eisentraut
Reviewed-by: Tristan Partin
Discussion: https://www.postgresql.org/message-id/ca8f37e1-a2c3-40e2-91f6-59c3d3652ad4@technowledgy.de
Backpatch: 16-, where meson support was added
Support absolute bindir/libdir in regression tests with meson
Passing an absolute bindir/libdir will install the binaries and
libraries to <build>/tmp_install/<bindir> and
<build>/tmp_install/<libdir> respectively.
This path is correctly passed to the regression test suite via
configure/make, but not via meson, yet. This is because the "/"
operator in the following expression throws away the whole left side
when the right side is an absolute path:
test_install_location / get_option('libdir')
This was already correctly handled for dir_prefix, which is likely
absolute as well. This patch handles both bindir and libdir in the
same way - prefixing absolute paths with the tmp_install path
correctly.
Author: Wolfgang Walther
Reviewed-by: Nazir Bilal Yavuz, Alvaro Herrera, Peter Eisentraut
Reviewed-by: Tristan Partin
Discussion: https://www.postgresql.org/message-id/ca8f37e1-a2c3-40e2-91f6-59c3d3652ad4@technowledgy.de
Backpatch: 16-, where meson support was added
Some distributions put clang into a different path than the llvm
binary path.
For example, this is the case on NixOS / nixpkgs, which failed to find
clang with meson before this patch.
Author: Wolfgang Walther
Reviewed-by: Nazir Bilal Yavuz, Alvaro Herrera, Peter Eisentraut
Reviewed-by: Tristan Partin
Discussion: https://www.postgresql.org/message-id/ca8f37e1-a2c3-40e2-91f6-59c3d3652ad4@technowledgy.de
Backpatch: 16-, where meson support was added
The upstream name for the ossp-uuid package / pkg-config file is
"uuid". Many distributions change this to be "ossp-uuid" to not
conflict with e2fsprogs.
This lookup fails on distributions which don't change this name, for
example NixOS / nixpkgs. Both "ossp-uuid" and "uuid" are also checked
in configure.ac.
Author: Wolfgang Walther
Reviewed-by: Nazir Bilal Yavuz, Alvaro Herrera, Peter Eisentraut
Reviewed-by: Tristan Partin
Discussion: https://www.postgresql.org/message-id/ca8f37e1-a2c3-40e2-91f6-59c3d3652ad4@technowledgy.de
Backpatch: 16-, where meson support was added
Commit 274bbced85383e831dde accidentally placed the pg_config.h.in
entry for SSL_CTX_set_num_tickets on the wrong line relative to where
autoheader places it. Fix by re-arranging, and backpatch to the same
level as the original commit.
OpenSSL supports two types of session tickets for TLSv1.3, stateless
and stateful. The option we've used only turns off stateless tickets
leaving stateful tickets active. Use the new API introduced in 1.1.1
to disable all types of tickets.
Backpatch to all supported versions.
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reported-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20240617173803.6alnafnxpiqvlh3g@awork3.anarazel.de
Backpatch-through: v12
Tom Lane [Thu, 25 Jul 2024 23:52:08 +0000 (19:52 -0400)]
Doc: fix misleading syntax synopses for targetlists.
In the syntax synopses for SELECT, INSERT, UPDATE, etc.,
SELECT ... and RETURNING ... targetlists were missing { ... }
braces around an OR (|) operator. That allows misinterpretation
which could lead to confusion.
David G. Johnston, per gripe from masondeanm@aol.com.
Thomas Munro [Thu, 25 Jul 2024 02:46:01 +0000 (14:46 +1200)]
ci: Pin MacPorts version to 2.9.3.
Commit d01ce180 invented a new way to find the latest MacPorts version.
By bad luck, a new beta release has just been published, and it seems
to lack some packages we need. Go back to searching for this specific
version for now. We still search with a pattern so that we can find the
package for the running version of macOS, but for now we always look for
2.9.3. The code to do that had already been anticipated in a
commented-out line; I just didn't expect to have to use it so soon...
Also include the whole MacPorts installation script in the cache key, so
that changes to the script cause a fresh installation. This should make
it a bit easier to reason about the effect of changes on cached state in
github accounts using CI, when we make adjustments.