Amit Kapila [Mon, 4 Aug 2025 04:02:47 +0000 (04:02 +0000)]
Detect and report update_deleted conflicts.
This enhancement builds upon the infrastructure introduced in commit 228c370868, which enables the preservation of deleted tuples and their
origin information on the subscriber. This capability is crucial for
handling concurrent transactions replicated from remote nodes.
This update adds support for detecting update_deleted conflicts
during the application of update operations on the subscriber. When an
update operation fails to locate the target row (typically because it
has been concurrently deleted), we perform an additional table scan.
This scan uses SnapshotAny, and it is performed only when the
retain_dead_tuples option is enabled for the relevant subscription.
The goal of this scan is to locate the most recently deleted tuple,
matching the old column values from the remote update, that has not yet
been removed by VACUUM and is still visible according to our slot
(i.e., its deletion is not older than the conflict-detection slot's
xmin). If such a tuple is found, the system reports an update_deleted
conflict, including the origin and transaction details responsible for
the deletion.
This lays the groundwork for a more robust and accurate
conflict-resolution process, preventing unexpected behavior by
correctly identifying cases where a remote update clashes with a
deletion from another origin.
Tom Lane [Sun, 3 Aug 2025 17:01:17 +0000 (13:01 -0400)]
Take a little more care in set_backtrace().
Coverity complained that the "errtrace" string is leaked if we return
early because backtrace_symbols fails. Another criticism that could
be leveled at this is that not providing any hint of what happened is
user-unfriendly. Fix that.
The odds of a leak here are small, and typically it wouldn't matter
anyway since the leak will be in ErrorContext which will soon get
reset. So I'm not feeling a need to back-patch.
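A minimal standalone sketch of the hardened pattern (illustrative
only, not the actual set_backtrace() code; on glibc, compile with
-rdynamic to get symbol names):

    #include <execinfo.h>
    #include <stdio.h>
    #include <stdlib.h>

    static void
    print_backtrace(void)
    {
        void   *buf[100];
        int     nframes = backtrace(buf, 100);
        char  **symbols = backtrace_symbols(buf, nframes);

        if (symbols == NULL)
        {
            /* report the failure instead of silently returning */
            fprintf(stderr, "backtrace generation failed\n");
            return;
        }
        for (int i = 0; i < nframes; i++)
            fprintf(stderr, "%s\n", symbols[i]);
        free(symbols);      /* one malloc'd block; no per-entry frees */
    }

    int
    main(void)
    {
        print_backtrace();
        return 0;
    }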
Tom Lane [Sun, 3 Aug 2025 01:26:21 +0000 (21:26 -0400)]
Avoid leakage of zero-length arrays in partition_bounds_copy().
If ndatums is zero, the code would allocate zero-length boundKinds
and boundDatums chunks, which would have nothing pointing to them,
leading to Valgrind complaints. Rearrange the code to avoid the
useless pallocs, and also to not bother computing byval/typlen when
they aren't used.
I'm unsure why I didn't see this in my Valgrind testing back in May.
This code hasn't changed since then, but maybe we added a regression
test that reaches this edge case. Or possibly I just failed to
notice the reports, which do say "0 bytes lost".
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/285483.1746756246@sss.pgh.pa.us
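The shape of the fix, as a hedged sketch (variable names follow the
commit message; the real partition_bounds_copy() code differs in
detail):

    /* skip the pallocs entirely when there is nothing to copy */
    if (ndatums > 0)
    {
        boundKinds = palloc(ndatums * sizeof(*boundKinds));
        boundDatums = palloc(ndatums * sizeof(*boundDatums));
        /* ... fill the arrays, computing byval/typlen only here ... */
    }
    /* with ndatums == 0, no orphaned zero-length chunks are created */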
Tom Lane [Sat, 2 Aug 2025 23:44:10 +0000 (19:44 -0400)]
Silence complaints about leaks in PlanCacheComputeResultDesc.
CompleteCachedPlan intentionally doesn't worry about small
leaks from PlanCacheComputeResultDesc. However, Valgrind
knows nothing of engineering tradeoffs and complains anyway.
Silence it by doing things the hard way if USE_VALGRIND.
I don't really love this patch, because it makes the handling
of plansource->resultDesc different from the handling of query
dependencies and search_path just above, which likewise are willing
to accept small leaks into the cached plan's context. However,
those cases aren't provoking Valgrind complaints. (Perhaps in a
CLOBBER_CACHE_ALWAYS build, they would?) For the moment, this
makes the src/pl/plpgsql tests leak-free according to Valgrind.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/285483.1746756246@sss.pgh.pa.us
Tom Lane [Sat, 2 Aug 2025 23:43:53 +0000 (19:43 -0400)]
Suppress complaints about leaks in TS dictionary loading.
Like the situation with function cache loading, text search
dictionary loading functions tend to leak some cruft into the
dictionary's long-lived cache context. To judge by the examples in
the core regression tests, not very many bytes are at stake.
Moreover, I don't see a way to prevent such leaks without changing the
API for TS template initialization functions: right now they do not
have to worry about making sure that their results are long-lived.
Hence, I think we should install a suppression rule rather than trying
to fix this completely. However, I did grab some low-hanging fruit:
several places were leaking the result of get_tsearch_config_filename.
This seems worth doing mostly because they are inconsistent with other
dictionaries that were freeing it already.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/285483.1746756246@sss.pgh.pa.us
Tom Lane [Sat, 2 Aug 2025 23:43:04 +0000 (19:43 -0400)]
Suppress complaints about leaks in function cache loading.
PL/pgSQL and SQL-function parsing leak some stuff into the long-lived
function cache context. This isn't really a huge practical problem,
since it's not a large amount of data and the cruft will be recovered
if we have to re-parse the function. It's not clear that it's worth
working any harder than the previous patch did to eliminate these
leak complaints, so instead silence them with a suppression rule.
This suppression rule also hides the fact that CachedFunction structs
are intentionally leaked in some cases because we're unsure if any
fn_extra pointers remain. That might be nice to do something about
eventually, but it's not clear how.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/285483.1746756246@sss.pgh.pa.us
Tom Lane [Sat, 2 Aug 2025 23:39:03 +0000 (19:39 -0400)]
Reduce leakage during PL/pgSQL function compilation.
format_procedure leaks memory, so run it in a short-lived context
not the session-lifespan cache context for the PL/pgSQL function.
parse_datatype called the core parser in the function's cache context,
thus leaking potentially a lot of storage into that context. We were
also being a bit careless with the TypeName structures made in that
code path and others. Most of the time we don't need to retain the
TypeName, so make sure it is made in the short-lived temp context,
and copy it only if we do need to retain it.
These are far from the only leaks in PL/pgSQL compilation, but
they're the biggest as far as I've seen, and further improvement
looks like it'd require delicate and bug-prone surgery.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/285483.1746756246@sss.pgh.pa.us
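The general pattern, sketched (context and variable names here are
illustrative, not the exact ones used in pl_comp.c):

    /* do parsing work in a short-lived context ... */
    oldcxt = MemoryContextSwitchTo(tmpcxt);
    typname = makeTypeNameFromNameList(names);  /* cruft lands in tmpcxt */
    MemoryContextSwitchTo(oldcxt);

    /* ... and copy into the long-lived cache context only if needed */
    if (must_retain)
    {
        oldcxt = MemoryContextSwitchTo(func_cxt);
        retained = copyObject(typname);
        MemoryContextSwitchTo(oldcxt);
    }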
Tom Lane [Sat, 2 Aug 2025 23:32:12 +0000 (19:32 -0400)]
Silence Valgrind leakage complaints in more-or-less-hackish ways.
These changes don't actually fix any leaks. They just make sure that
Valgrind will find pointers to data structures that remain allocated
at process exit, and thus not falsely complain about leaks. In
particular, we are trying to avoid situations where there is no
pointer to the beginning of an allocated block (except possibly
within the block itself, which Valgrind won't count).
* Because dynahash.c never frees hashtable storage except by deleting
the whole hashtable context, it doesn't bother to track the individual
blocks of elements allocated by element_alloc(). This results in
"possibly lost" complaints from Valgrind except when the first element
of each block is actively in use. (Otherwise it'll be on a freelist,
but very likely only reachable via "interior pointers" within element
blocks, which doesn't satisfy Valgrind.)
To fix, if we're building with USE_VALGRIND, expend an extra pointer's
worth of space in each element block so that we can chain them all
together from the HTAB header. Skip this in shared hashtables though:
Valgrind doesn't track those, and we'd need additional locking to make
it safe to manipulate a shared chain. (A sketch of this chaining
appears at the end of this entry.)
While here, update a comment obsoleted by 9c911ec06.
* Put the dlist_node fields of catctup and catclist structs first.
This ensures that the dlist pointers point to the starts of these
palloc blocks, and thus that Valgrind won't consider them
"possibly lost".
* The postmaster's PMChild structs and the autovac launcher's
avl_dbase structs also have the dlist_node-is-not-first problem,
but putting it first still wouldn't silence the warning because we
bulk-allocate those structs in an array, so that Valgrind sees a
single allocation. Commonly the first array element will be pointed
to only from some later element, so that the reference would be an
interior pointer even if it pointed to the array start. (This is the
same issue as for dynahash elements.) Since these are pretty simple
data structures, I don't feel too bad about faking out Valgrind by
just keeping a static pointer to the array start.
(This is all quite hacky, and it's not hard to imagine usages where
we'd need some other idea in order to have reasonable leak tracking of
structures that are only accessible via dlist_node lists. But these
changes seem to be enough to silence this class of leakage complaints
for the moment.)
* Free a couple of data structures manually near the end of an
autovacuum worker's run when USE_VALGRIND, and ensure that the final
vac_update_datfrozenxid() call is done in a non-permanent context.
This doesn't have any real effect on the process's total memory
consumption, since we're going to exit as soon as that last
transaction is done. But it does pacify Valgrind.
* Valgrind complains about the postmaster's socket-files and
lock-files lists being leaked, which we can silence by just
not nulling out the static pointers to them.
* Valgrind seems not to consider the global "environ" variable as
a valid root pointer; so when we allocate a new environment array,
it claims that data is leaked. To fix that, keep our own
statically-allocated copy of the pointer, similarly to the previous
item.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/285483.1746756246@sss.pgh.pa.us
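Returning to the first item (dynahash element blocks), a hedged sketch
of the chaining idea; struct and field names are illustrative, not the
actual dynahash.c code:

    #ifdef USE_VALGRIND
    typedef struct ElementBlock
    {
        struct ElementBlock *next;  /* all blocks chained from the HTAB */
        /* the HASHELEMENTs follow this header */
    } ElementBlock;

    /* in element_alloc(): spend one extra pointer per block and thread
     * the block onto a chain in the header, so Valgrind always finds a
     * pointer to the block's start even when no element is in use */
    block = MemoryContextAlloc(hashp->hcxt,
                               sizeof(ElementBlock) + nelem * elementSize);
    block->next = hashp->first_element_block;  /* hypothetical field */
    hashp->first_element_block = block;
    firstElement = (HASHELEMENT *) (block + 1);
    #endif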
Tom Lane [Sat, 2 Aug 2025 23:07:53 +0000 (19:07 -0400)]
Fix assorted pretty-trivial memory leaks in the backend.
In the current system architecture, none of these are worth obsessing
over; most are once-per-process leaks. However, Valgrind complains
about all of them, and if we get to using threads rather than
processes for backend sessions, it will become more interesting to
avoid per-session leaks.
* Fix leaks in StartupXLOG() and ShutdownWalRecovery().
* Fix leakage of pq_mq_handle in a parallel worker.
While at it, move mq_putmessage's "Assert(pq_mq_handle != NULL)"
to someplace where it's not trivially useless.
* Fix leak in logicalrep_worker_detach().
* Don't leak the startup-packet buffer in ProcessStartupPacket().
* Fix leak in evtcache.c's DecodeTextArrayToBitmapset().
If the presented array is toasted, this neglected to free the
detoasted copy, which was then leaked into EventTriggerCacheContext.
* I'm distressed by the amount of code that BuildEventTriggerCache
is willing to run while switched into a long-lived cache context.
Although the detoasted array is the only leak that Valgrind reports,
let's tighten things up while we're here. (DecodeTextArrayToBitmapset
is still run in the cache context, so doing this doesn't remove the
need for the detoast fix. But it reduces the surface area for other
leaks.)
* load_domaintype_info() intentionally leaked some intermediate cruft
into the long-lived DomainConstraintCache's memory context, reasoning
that the amount of leakage will typically not be much so it's not
worth doing a copyObject() of the final tree to avoid that. But
Valgrind knows nothing of engineering tradeoffs and complains anyway.
On the whole, the copyObject doesn't cost that much and this is surely
not a performance-critical code path, so let's do it the clean way.
* MarkGUCPrefixReserved didn't bother to clean up removed placeholder
GUCs at all, which shows up as a leak in one regression test.
It seems appropriate for it to do as much cleanup as
define_custom_variable does when replacing placeholders, so factor
that code out into a helper function. define_custom_variable's logic
was one brick shy of a load too: it forgot to free the separate
allocation for the placeholder's name.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Richard Guo <guofenglinux@gmail.com>
Discussion: https://postgr.es/m/285483.1746756246@sss.pgh.pa.us
Tom Lane [Sat, 2 Aug 2025 22:31:39 +0000 (18:31 -0400)]
Fix MemoryContextAllocAligned's interaction with Valgrind.
Arrange that only the "aligned chunk" part of the allocated space is
included in a Valgrind vchunk. This suppresses complaints about that
vchunk being possibly lost because PG is retaining only pointers to
the aligned chunk. Also make sure that trailing wasted space is
marked NOACCESS.
As a tiny performance improvement, arrange that MCXT_ALLOC_ZERO zeroes
only the returned "aligned chunk", not the wasted padding space.
In passing, fix GetLocalBufferStorage to use MemoryContextAllocAligned
instead of rolling its own implementation, which was equally broken
according to Valgrind.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/285483.1746756246@sss.pgh.pa.us
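In terms of Valgrind client requests, the intended state looks roughly
like this (a hedged sketch; variable names are illustrative, not the
actual mcxt.c code):

    /* only the aligned region becomes a Valgrind chunk */
    VALGRIND_MEMPOOL_ALLOC(context, aligned_ptr, size);

    /* trailing padding is dead space: make it unreadable/unwritable */
    VALGRIND_MAKE_MEM_NOACCESS((char *) aligned_ptr + size,
                               trailing_waste);

    /* MCXT_ALLOC_ZERO zeroes just the part handed back to the caller */
    if (flags & MCXT_ALLOC_ZERO)
        memset(aligned_ptr, 0, size);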
Tom Lane [Sat, 2 Aug 2025 22:26:19 +0000 (18:26 -0400)]
Improve our support for Valgrind's leak tracking.
When determining whether an allocated chunk is still reachable,
Valgrind will consider only pointers within what it believes to be
allocated chunks. Normally, all of a block obtained from malloc()
would be considered "allocated" --- but it turns out that if we use
VALGRIND_MEMPOOL_ALLOC to designate sub-section(s) of a malloc'ed
block as allocated, all the rest of that malloc'ed block is ignored.
This leads to lots of false positives of course. In particular,
in any multi-malloc-block context, all but the primary block were
reported as leaked. We also had a problem with context "ident"
strings, which were reported as leaked unless there was some other
pointer to them besides the one in the context header.
To fix, we need to use VALGRIND_MEMPOOL_ALLOC to designate
a context's management structs (the context struct itself and
any per-block headers) as allocated chunks. That forces moving
the VALGRIND_CREATE_MEMPOOL/VALGRIND_DESTROY_MEMPOOL calls into
the per-context-type code, so that the pool identifier can be
made as soon as we've allocated the initial block, but otherwise
it's fairly straightforward. Note that in Valgrind's eyes there
is no distinction between these allocations and the allocations
that the mmgr modules hand out to user code. That's fine for
now, but perhaps someday we'll want to do better yet.
When reading this patch, it's helpful to start with the comments
added at the head of mcxt.c.
Author: Andres Freund <andres@anarazel.de>
Co-authored-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/285483.1746756246@sss.pgh.pa.us
Discussion: https://postgr.es/m/20210317181531.7oggpqevzz6bka3g@alap3.anarazel.de
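For reference, the Valgrind mempool client requests used here work like
this (a self-contained toy allocator, not PostgreSQL code):

    #include <stdlib.h>
    #include <valgrind/valgrind.h>

    int
    main(void)
    {
        char   *block = malloc(8192);   /* backing storage */

        /* the pool handle is just an address Valgrind uses as an ID */
        VALGRIND_CREATE_MEMPOOL(block, 0, 0);

        /* mark a sub-range of the block as an allocated chunk */
        VALGRIND_MEMPOOL_ALLOC(block, block + 64, 128);

        /* ... use block+64 .. block+191 ... */

        VALGRIND_MEMPOOL_FREE(block, block + 64);
        VALGRIND_DESTROY_MEMPOOL(block);
        free(block);
        return 0;
    }

Everything in the malloc'd block outside the VALGRIND_MEMPOOL_ALLOC'd
range is invisible to Valgrind's reachability analysis, which is
exactly the false-positive mechanism described above.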
Fujii Masao [Sun, 3 Aug 2025 01:49:03 +0000 (10:49 +0900)]
Fix assertion failure in pgbench when handling multiple pipeline sync messages.
Previously, when running pgbench in pipeline mode with a custom script
that triggered retriable errors (e.g., serialization errors),
an assertion failure could occur:
Assertion failed: (res == ((void*)0)), function discardUntilSync, file pgbench.c, line 3515.
The root cause was that pgbench incorrectly assumed only a single
pipeline sync message would be received at the end. In reality,
multiple pipeline sync messages can be sent and must be handled properly.
This commit fixes the issue by updating pgbench to correctly process
multiple pipeline sync messages, preventing the assertion failure.
Etsuro Fujita [Sat, 2 Aug 2025 09:30:00 +0000 (18:30 +0900)]
Doc: clarify the restrictions of AFTER triggers with transition tables.
It was not very clear that the triggers are only allowed on plain tables
(not foreign tables). Also, rephrase the documentation for better
readability.
Michael Paquier [Sat, 2 Aug 2025 08:08:45 +0000 (17:08 +0900)]
Fix use-after-free with INSERT ON CONFLICT changes in reorderbuffer.c
In ReorderBufferProcessTXN(), used to send the data of a transaction to
an output plugin, INSERT ON CONFLICT changes (INTERNAL_SPEC_INSERT) are
delayed until a confirmation record arrives (INTERNAL_SPEC_CONFIRM),
updating the change being processed.
8c58624df462 added an extra step after processing a change to update
the progress of the transaction, by calling the callback
update_progress_txn() based on the LSN stored in a change after a
threshold of CHANGES_THRESHOLD (100) is reached. This logic missed
the fact that for an INSERT ON CONFLICT change the data is freed once
processed, hence update_progress_txn() could be called pointing to an
LSN value that has already been freed. This could result in random
crashes, depending on the workload.
Per discussion, this issue is fixed by having update_progress_txn()
reuse the LSN of the change fetched at the beginning of the loop,
meaning that for an INTERNAL_SPEC_CONFIRM change the progress is
updated using the LSN of the INTERNAL_SPEC_CONFIRM change, and not the
LSN from its INTERNAL_SPEC_INSERT change. This is actually more
correct, as we want to update the progress to point to the
INTERNAL_SPEC_CONFIRM change.
Masahiko Sawada has found a nice trick to reproduce the issue: hardcode
CHANGES_THRESHOLD at 1 and run test_decoding (test "ddl" being enough)
on an instance running valgrind. The bug has been analyzed by Ethan
Mertz, who also originally suggested the solution used in this patch.
Issue introduced by 8c58624df462, so backpatch down to v16.
Author: Ethan Mertz <ethan.mertz@gmail.com>
Co-authored-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/aIsQqDZ7x4LAQ6u1@paquier.xyz
Backpatch-through: 16
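Conceptually, the fix amounts to capturing the LSN before the change
data can be freed (a hedged sketch of the loop in
ReorderBufferProcessTXN(); the callback invocation follows the commit
message, and the exact code differs):

    /* capture up front: INTERNAL_SPEC_INSERT data may be freed below */
    XLogRecPtr  change_lsn = change->lsn;

    /* ... process the change; spec-insert data may be released ... */

    if (++changes_count >= CHANGES_THRESHOLD)
    {
        rb->update_progress_txn(rb, txn, change_lsn);  /* not change->lsn */
        changes_count = 0;
    }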
Nathan Bossart [Fri, 1 Aug 2025 21:52:11 +0000 (16:52 -0500)]
Allow resetting unknown custom GUCs with reserved prefixes.
Currently, ALTER DATABASE/ROLE/SYSTEM RESET [ALL] with an unknown
custom GUC whose prefix is reserved by MarkGUCPrefixReserved() fails
with an error (unless a superuser runs a RESET ALL variant). This is
problematic for cases such as an extension library upgrade that
removes a GUC.
To fix, simply make sure the relevant code paths explicitly allow
it. Note that we require superuser or privileges on the parameter
to reset it. This is perhaps a bit more restrictive than is
necessary, but it's not clear whether further relaxing the
requirements is safe.
Oversight in commit 88103567cb. The ALTER SYSTEM fix is dependent
on commit 2d870b4aef, which first appeared in v17. Unfortunately,
back-patching that commit would introduce ABI breakage, and while
that breakage seems unlikely to bother anyone, it doesn't seem
worth the risk. Hence, the ALTER SYSTEM part of this commit is
omitted on v15 and v16.
Reported-by: Mert Alev <mert@futo.org>
Reviewed-by: Laurenz Albe <laurenz.albe@cybertec.at>
Discussion: https://postgr.es/m/18964-ba09dea8c98fccd6%40postgresql.org
Backpatch-through: 15
libpq: Complain about missing BackendKeyData later with PQgetCancel()
PostgreSQL always sends the BackendKeyData message at connection
startup, but there are some third party backend implementations out
there that don't support cancellation, and don't send the message
[1]. While the protocol docs left it open to interpretation whether
that is valid behavior, libpq in PostgreSQL 17 and below accepted it.
It does
not seem like the libpq behavior was intentional though, since it did
so by sending CancelRequest messages with all zeros to such servers
(instead of returning an error or making the cancel a no-op).
In version 18 the behavior was changed to return an error when trying
to create the cancel object with PQgetCancel() or PQcancelCreate().
This was done without any discussion, as part of supporting different
lengths of cancel packets for the new 3.2 version of the protocol.
This commit changes the behavior of PQgetCancel() / PQcancel() once
more to only return an error when the cancel object is actually used
to send a cancellation, instead of when merely creating the object.
The reason to do so is that some clients [2] create a cancel object as
part of their connection creation logic (thus having the cancel object
ready for later when they need it), so if creating the cancel object
returns an error, the whole connection attempt fails. By delaying the
error, such clients will still be able to connect to the third party
backend implementations in question, but when actually trying to
cancel a query, the user will be notified that this is not possible
for the server they are connected to.
This commit only changes the behavior of the older PQgetCancel() /
PQcancel() functions, not the more modern PQcancelCreate() family of
functions. I.e. PQcancelCreate() returns a failed connection object
(CONNECTION_BAD) if the server didn't send a cancellation key. Unlike
the old PQgetCancel() function, we're not aware of any clients in the
field that use PQcancelCreate() during connection startup in a way
that would prevent connecting to such servers.
[1] AWS RDS Proxy is definitely one of them, and CockroachDB might be
another.
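From a client's perspective, the relevant calls now behave like this
(a standalone sketch; error handling trimmed, connection string is a
placeholder):

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        PGconn     *conn = PQconnectdb("dbname=postgres");
        char        errbuf[256] = "";

        /* with this change, object creation succeeds even if the
         * server never sent BackendKeyData ... */
        PGcancel   *cancel = PQgetCancel(conn);

        /* ... and the missing-key error surfaces only here, on use */
        if (cancel == NULL || !PQcancel(cancel, errbuf, sizeof(errbuf)))
            fprintf(stderr, "cancel failed: %s\n", errbuf);

        PQfreeCancel(cancel);
        PQfinish(conn);
        return 0;
    }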
Amit Kapila [Fri, 1 Aug 2025 07:58:48 +0000 (07:58 +0000)]
Fix a deadlock during ALTER SUBSCRIPTION ... DROP PUBLICATION.
A deadlock can occur when the DDL command and the apply worker acquire
catalog locks in different orders while dropping replication origins.
The issue is rare in PG16 and higher branches because, in most cases, the
tablesync worker performs the origin drop in those branches, and its
locking sequence does not conflict with DDL operations.
This patch ensures consistent lock acquisition to prevent such deadlocks.
As per buildfarm.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Author: Ajin Cherian <itsajin@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: vignesh C <vignesh21@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Backpatch-through: 14, where it was introduced
Discussion: https://postgr.es/m/bab95e12-6cc5-4ebb-80a8-3e41956aa297@gmail.com
Tomas Vondra [Thu, 31 Jul 2025 13:17:26 +0000 (15:17 +0200)]
Fix tab completion for ALTER ROLE|USER ... RESET
Commit c407d5426b87 added tab completion for ALTER ROLE|USER ... RESET,
with the intent to offer only the variables actually set on the role.
But as soon as the user started typing something, it would start to
offer all possible matching variables.
Fix this the same way ALTER DATABASE ... RESET does it, i.e. by
properly considering the prefix.
A second issue causing similar symptoms (offering variables that are not
actually set for a role) was caused by a match to another pattern. The
ALTER DATABASE ... RESET was already excluded, so do the same thing for
ROLE/USER.
Report and fix by Dagfinn Ilmari Mannsåker. Backpatch to 18, same as c407d5426b87.
Author: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Reviewed-by: jian he <jian.universality@gmail.com>
Discussion: https://postgr.es/m/87qzyghw2x.fsf%40wibble.ilmari.org
Discussion: https://postgr.es/m/87tt4lumqz.fsf%40wibble.ilmari.org
Backpatch-through: 18
Tomas Vondra [Thu, 31 Jul 2025 13:15:44 +0000 (15:15 +0200)]
Schema-qualify unnest() in ALTER DATABASE ... RESET
Commit 9df8727c5067 failed to schema-qualify the unnest() call in the
query used to list the variables in ALTER DATABASE ... RESET. If
there's another unnest() function in the search_path, this could cause
either failures or even security issues (when the tab completion gets
used by privileged accounts).
Report and fix by Dagfinn Ilmari Mannsåker. Backpatch to 18, same as 9df8727c5067.
Author: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Reviewed-by: jian he <jian.universality@gmail.com>
Discussion: https://postgr.es/m/87qzyghw2x.fsf%40wibble.ilmari.org
Discussion: https://postgr.es/m/87tt4lumqz.fsf%40wibble.ilmari.org
Backpatch-through: 18
Sort dump objects independent of OIDs, for the 7 holdout object types.
pg_dump sorts objects by their logical names, e.g. (nspname, relname,
tgname), before dependency-driven reordering. That removes one source
of logically-identical databases differing in their schema-only dumps.
In other words, it helps with schema diffing. The logical name sort
ignored essential sort keys for constraints, operators, PUBLICATION
... FOR TABLE, PUBLICATION ... FOR TABLES IN SCHEMA, operator classes,
and operator families. pg_dump's sort then depended on object OID,
yielding spurious schema diffs. After this change, OIDs affect dump
order only in the event of catalog corruption. While pg_dump also
wrongly ignored pg_collation.collencoding, CREATE COLLATION restrictions
have been keeping that imperceptible in practical use.
Use techniques like we use for object types already having full sort key
coverage. Where the pertinent queries weren't fetching the ignored sort
keys, this adds columns to those queries and stores those keys in memory
for the long term.
The ignorance of sort keys became more problematic when commit 172259afb563d35001410dc6daad78b250924038 added a schema diff test
sensitive to it. Buildfarm member hippopotamus witnessed that.
However, dump order stability isn't a new goal, and this might avoid
other dump comparison failures. Hence, back-patch to v13 (all supported
versions).
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/20250707192654.9e.nmisch@google.com
Backpatch-through: 13
Michael Paquier [Thu, 31 Jul 2025 02:20:29 +0000 (11:20 +0900)]
pg_stat_statements: Add counters for generic and custom plans
This patch adds two new counters to pg_stat_statements:
- generic_plan_calls
- custom_plan_calls
These counters track how many times a prepared statement was executed
using a generic or custom plan, respectively, providing a global
equivalent at query level, for top and non-top levels, of
pg_prepared_statements whose data is restricted to a single session.
This commit builds upon e125e360020a. The module is bumped to version
1.13. PGSS_FILE_HEADER is bumped as well, something that the latest
patches touching the on-disk format of the PGSS file had not actually
bothered with since 2022.
Author: Sami Imseih <samimseih@gmail.com>
Reviewed-by: Ilia Evdokimov <ilya.evdokimov@tantorlabs.com>
Reviewed-by: Andrei Lepikhov <lepihov@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Nikolay Samokhvalov <nik@postgres.ai>
Discussion: https://postgr.es/m/CAA5RZ0uFw8Y9GCFvafhC=OA8NnMqVZyzXPfv_EePOt+iv1T-qQ@mail.gmail.com
Michael Paquier [Thu, 31 Jul 2025 01:06:34 +0000 (10:06 +0900)]
Rename CachedPlanType to PlannedStmtOrigin for PlannedStmt
Commit 719dcf3c42 introduced a field called CachedPlanType in
PlannedStmt to allow extensions to determine whether a cached plan is
generic or custom.
After discussion, the concepts that we want to track are a bit wider
than initially anticipated, as it is closer to knowing from which
"source" or "origin" a PlannedStmt has been generated or retrieved.
Custom and generic cached plans are a subset of that.
Based on the state of HEAD, we have been able to define two more
origins:
- "standard", for the case where PlannedStmt is generated in
standard_planner(), the most common case.
- "internal", for the fake PlannedStmt generated internally by some
query patterns.
This could be tuned in the future depending on what is needed, but this
looks like a good starting point, at least. The default value,
"UNKNOWN", is provided as a fallback. This value is not used in the
core code; the idea is to let extensions that build their own
PlannedStmts know about this new field.
Author: Michael Paquier <michael@paquier.xyz>
Co-authored-by: Sami Imseih <samimseih@gmail.com>
Discussion: https://postgr.es/m/aILaHupXbIGgF2wJ@paquier.xyz
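The resulting enum presumably looks something like the following
(value names here are illustrative, matching the origins described
above rather than quoted from the tree):

    typedef enum PlannedStmtOrigin
    {
        PLAN_STMT_UNKNOWN,       /* fallback for extension-built stmts */
        PLAN_STMT_INTERNAL,      /* fake PlannedStmt generated internally */
        PLAN_STMT_STANDARD,      /* generated by standard_planner() */
        PLAN_STMT_CACHE_GENERIC, /* retrieved as a generic cached plan */
        PLAN_STMT_CACHE_CUSTOM,  /* retrieved as a custom cached plan */
    } PlannedStmtOrigin;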
doc: Adjust documentation for vacuumdb --missing-stats-only.
The sentence in question gave readers the impression that vacuumdb
removes statistics for a period of time while analyzing, but it's
actually meant to convey that --analyze-in-stages temporarily
replaces existing statistics with ones generated with lower
statistics targets.
Reported-by: Frédéric Yhuel <frederic.yhuel@dalibo.com>
Reviewed-by: Frédéric Yhuel <frederic.yhuel@dalibo.com>
Reviewed-by: "David G. Johnston" <david.g.johnston@gmail.com>
Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
Reviewed-by: Jeff Davis <pgsql@j-davis.com>
Discussion: https://postgr.es/m/4b94ca16-7a6d-4581-b2aa-4ea79dbc082a%40dalibo.com
Backpatch-through: 18
Presently, pg_upgrade assumes that all non-default tablespaces
don't move to different directories during upgrade. Unfortunately,
this isn't true for in-place tablespaces, which move to the new
cluster's pg_tblspc directory. This commit teaches pg_upgrade to
handle in-place tablespaces by retrieving the tablespace
directories for both the old and new clusters. In turn, we can
relax the prohibition on non-default tablespaces for same-version
upgrades, i.e., if all non-default tablespaces are in-place,
pg_upgrade may proceed.
This change is primarily intended to enable additional pg_upgrade
testing with non-default tablespaces, as is done in
006_transfer_modes.pl.
Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/aA_uBLYMUs5D66Nb%40nathan
Andrew Dunstan [Wed, 30 Jul 2025 15:04:05 +0000 (11:04 -0400)]
Revert Non text modes for pg_dumpall, and pg_restore support
Recent discussions of the mechanisms used to manage global data have
raised concerns about their robustness and security. Rather than try
to deal with those concerns at a very late stage of the release cycle,
the conclusion is to revert these features and work on them for the
next release.
This reverts parts or all of the following commits:
1495eff7bdb Non text modes for pg_dumpall, correspondingly change pg_restore
5db3bf7391d Clean up from commit 1495eff7bdb
289f74d0cb2 Add more TAP tests for pg_dumpall
2ef57908067 Fix a couple of error messages and tests for them
b52a4a5f285 Clean up error messages from 1495eff7bdb
4170298b6ec Further cleanup for directory creation on pg_dump/pg_dumpall
22cb6d28950 Fix memory leak in pg_restore.c
928394b664b Improve various new-to-v18 appendStringInfo calls
39729ec01d2 Fix fat fingering in 22cb6d28950
5822bf21d50 Add missing space in pg_restore documentation.
f09088a01d3 Free memory properly in pg_restore.c
40b9c27014d pg_restore cleanups
4aad2cb7707 Portability fix: isdigit() must be passed an unsigned char.
88e947136b4 Fix typos and grammar in the code
f60420cff66 doc: Alphabetize long options for pg_dump[all].
bc35adee8d7 doc: Put new options in consistent order on man pages
a876464abc7 Message style improvements
dec6643487b Improve pg_dump/pg_dumpall help synopses and terminology
0ebd2425558 Run pgperltidy
Michael Paquier [Wed, 30 Jul 2025 02:55:42 +0000 (11:55 +0900)]
Fix ./configure checks with __cpuidex() and __cpuid()
The configure checks used two incorrect functions when checking the
presence of some routines in an environment:
- __get_cpuidex() for the check of __cpuidex().
- __get_cpuid() for the check of __cpuid().
This means that Postgres has never been able to detect the presence of
these functions, impacting environments where these exist, like Windows.
Simply fixing the function name does not work. For example, using
configure with MinGW on Windows causes the checks to report all four of
__get_cpuid(), __get_cpuid_count(), __cpuidex() and __cpuid() as
available, causing a compilation failure: this messes things up with
the MinGW headers, since we would include both <intrin.h> and <cpuid.h>.
The Postgres code expects only one of { __get_cpuid(), __cpuid() } and
one of { __get_cpuid_count(), __cpuidex() } to exist. This commit
reshapes the configure checks to do exactly what meson is doing, which
has been working well for us: check one, then the other, but never allow
both to be detected in a given build.
The logic is wrong since 3dc2d62d0486 and 792752af4eb5 where these
checks have been introduced (the second case is most likely a copy-pasto
coming from the first case), with meson documenting that the configure
checks were broken. As far as I can see, they are fine once applied
consistently with what the code expects, but let's see if the buildfarm
has something different to say. The comment in meson.build is adjusted
as well, to reflect the new reality.
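For context, the two families of probes that the configure check has
to distinguish look like this (a minimal sketch; the HAVE_* symbols
stand in for the configure results):

    #if defined(HAVE__GET_CPUID)        /* GCC/clang path: <cpuid.h> */
    #include <cpuid.h>
    static void
    probe(unsigned int *r)
    {
        __get_cpuid(1, &r[0], &r[1], &r[2], &r[3]);
    }
    #elif defined(HAVE__CPUID)          /* MSVC path: <intrin.h> */
    #include <intrin.h>
    static void
    probe(unsigned int *r)
    {
        __cpuid((int *) r, 1);
    }
    #endif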
If the client sent a query cancel request with backend PID 0, it
tripped an assertion. With assertions disabled, you got this in the
log instead:
LOG: invalid cancel request with PID 0
LOG: wrong key in cancel request for process 0
Query cancellations don't even require authentication, so we had better
tolerate bogus requests. Fix by turning the assertion into a regular
runtime check.
Spotted while testing libpq behavior with a modified server that
didn't send BackendKeyData to the client.
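The shape of the fix, sketched (the message text follows the log lines
above; the surrounding code is illustrative):

    /* was: Assert(backendPID != 0); */
    if (backendPID == 0)
    {
        ereport(LOG,
                (errmsg("invalid cancel request with PID 0")));
        return;
    }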
Tom Lane [Tue, 29 Jul 2025 19:17:40 +0000 (15:17 -0400)]
Don't put library-supplied -L/-I switches before user-supplied ones.
For many optional libraries, we extract the -L and -l switches needed
to link the library from a helper program such as llvm-config. In
some cases we put the resulting -L switches into LDFLAGS ahead of
-L switches specified via --with-libraries. That risks breaking
the user's intention for --with-libraries.
It's not such a problem if the library's -L switch points to a
directory containing only that library, but on some platforms a
library helper may "helpfully" offer a switch such as -L/usr/lib
that points to a directory holding all standard libraries. If the
user specified --with-libraries in hopes of overriding the standard
build of some library, the -L/usr/lib switch prevents that from
happening since it will come before the user-specified directory.
To fix, avoid inserting these switches directly into LDFLAGS during
configure, instead adding them to LIBDIRS or SHLIB_LINK. They will
still eventually get added to LDFLAGS, but only after the switches
coming from --with-libraries.
The same problem exists for -I switches: those coming from
--with-includes should appear before any coming from helper programs
such as llvm-config. We have not heard field complaints about this
case, but it seems certain that a user attempting to override a
standard library could have issues.
The changes for this go well beyond configure itself, however,
because many Makefiles have occasion to manipulate CPPFLAGS to
insert locally-desirable -I switches, and some of them got it wrong.
The correct ordering is any -I switches pointing at directories within
the source tree or build tree, then those from the tree-wide
CPPFLAGS, then those from helper programs. There were several places
that risked pulling in a system-supplied copy of libpq headers, for
example, instead of the in-tree files. (Commit cb36f8ec2 fixed one
instance of that a few months ago, but this exercise found more.)
The Meson build scripts may or may not have any comparable problems,
but I'll leave it to someone else to investigate that.
Reported-by: Charles Samborski <demurgos@demurgos.net>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/70f2155f-27ca-4534-b33d-7750e20633d7@demurgos.net
Backpatch-through: 13
Tom Lane [Tue, 29 Jul 2025 16:47:19 +0000 (12:47 -0400)]
Remove unnecessary complication around xmlParseBalancedChunkMemory.
When I prepared 71c0921b6 et al yesterday, I was thinking that the
logic involving explicitly freeing the node_list output was still
needed to dodge leakage bugs in libxml2. But I was misremembering:
we introduced that only because with early 2.13.x releases we could
not trust xmlParseBalancedChunkMemory's result code, so we had to
look to see if a node list was returned or not. There's no reason
to believe that xmlParseBalancedChunkMemory will fail to clean up
the node list when required, so simplify. (This essentially
completes reverting all the non-cosmetic changes in 6082b3d5d.)
Reported-by: Jim Jones <jim.jones@uni-muenster.de>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/997668.1753802857@sss.pgh.pa.us
Backpatch-through: 13
Tom Lane [Tue, 29 Jul 2025 14:35:01 +0000 (10:35 -0400)]
Split up pgfdw_report_error so that we can mark it pg_noreturn.
pgfdw_report_error has the same design fault as elog/ereport
do, namely that it might or might not return depending on elevel.
While those functions are too widely used to redesign, there are
only about 30 call sites for pgfdw_report_error, and it's not
exposed for extension use. So let's rethink it. Split it into
pgfdw_report_error() which hard-wires ERROR elevel and is marked
pg_noreturn, and pgfdw_report() which allows only elevels less
than ERROR. (Thanks to Álvaro Herrera for suggesting this naming.)
The motivation for doing this now is that in the wake of commit 80aa9848b, which removed a bunch of PG_TRYs from postgres_fdw,
we're seeing more thorough flow analysis there from C compilers
and Coverity. Marking pgfdw_report_error as noreturn where
appropriate should help prevent false-positive complaints.
We could alternatively have invented a macro wrapper similar
to what we use for elog/ereport, but that code is sufficiently
fragile that I didn't find it appetizing to make another copy.
Since 80aa9848b already changed pgfdw_report_error's signature,
this won't make back-patching any harder than it was already.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/420221.1753714491@sss.pgh.pa.us
Tom Lane [Tue, 29 Jul 2025 13:42:22 +0000 (09:42 -0400)]
Suppress uninitialized-variable warning.
In the wake of commit 80aa9848b, a few compilers think that
postgresAcquireSampleRowsFunc's "reltuples" might be used
uninitialized. The logic is visibly correct, both before
and after that change; presumably what happened here is that
the previous presence of a setjmp() in the function stopped
them from attempting any flow analysis at all. Add a dummy
initialization to silence the warning.
Reported-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAExHW5tkerCufA_F6oct5dMJ61N+yVrVgYXL7M8dD-5_zXjrDw@mail.gmail.com
Add regression test for background worker restart after crash.
Previously, if a background worker crashed and the server restarted
with restart_after_crash enabled, the worker was not restarted
as expected. This issue was fixed by commit b5d084c5353,
which ensures that background workers without the never-restart flag
are correctly restarted after a crash-and-restart cycle.
To guard against regressions, this commit adds a test that verifies
a background worker successfully restarts in such a scenario.
Michael Paquier [Tue, 29 Jul 2025 08:03:07 +0000 (17:03 +0900)]
Handle timeout in PostgreSQL::Test::Cluster::is_alive()
This commit adds an extra --timeout=PG_TEST_TIMEOUT_DEFAULT to the call
of pg_isready done in is_alive(), so that the call has more leeway on
machines constrained on resources.
By default the timeout is 180s, and it can be changed depending on the
environment where the tests are run.
Per buildfarm member mamba, where the default timeout of 3s used by
pg_isready has proved that it may not be enough, as the postmaster may
not have the time it needs to reply to a ping request.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Reviewed-by: Nazir Bilal Yavuz <byavuz81@gmail.com>
Discussion: https://postgr.es/m/29b637df-f818-4b52-986a-f11ba28300e9@gmail.com
This commit documents differences in the definition of word separators for
the initcap function between libc and ICU locale providers.
Backpatch to all supported branches.
David Rowley [Tue, 29 Jul 2025 03:18:01 +0000 (15:18 +1200)]
Display Memoize planner estimates in EXPLAIN
There've been a few complaints that it can be overly difficult to figure
out why the planner picked a Memoize plan. To help address that, here we
adjust the EXPLAIN output to display the following additional details:
1) The estimated number of cache entries that can be stored at once
2) The estimated number of unique lookup keys that we expect to see
3) The number of lookups we expect
4) The estimated hit ratio
Technically #4 can be calculated using #1, #2 and #3, but it's not a
particularly obvious calculation, so we opt to display it explicitly.
The original patch by Lukas Fittl only displayed the hit ratio, but
there was a fear that might lead to more questions about how that was
calculated. The idea with displaying all 4 is to be transparent which
may allow queries to be tuned more easily. For example, if #2 isn't
correct then maybe extended statistics or a manual n_distinct estimate can
be used to help fix poor plan choices.
Author: Ilia Evdokimov <ilya.evdokimov@tantorlabs.com>
Author: Lukas Fittl <lukas@fittl.com>
Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Reviewed-by: Andrei Lepikhov <lepihov@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/CAP53Pky29GWAVVk3oBgKBDqhND0BRBN6yTPeguV_qSivFL5N_g%40mail.gmail.com
Tom Lane [Mon, 28 Jul 2025 20:50:41 +0000 (16:50 -0400)]
Avoid regression in the size of XML input that we will accept.
This mostly reverts commit 6082b3d5d, "Use xmlParseInNodeContext
not xmlParseBalancedChunkMemory". It turns out that
xmlParseInNodeContext will reject text chunks exceeding 10MB, while
(in most libxml2 versions) xmlParseBalancedChunkMemory will not.
The bleeding-edge libxml2 bug that we needed to work around a year
ago is presumably no longer a factor, and the argument that
xmlParseBalancedChunkMemory is semi-deprecated is not enough to
justify a functionality regression. Hence, go back to doing it
the old way.
Reported-by: Michael Paquier <michael@paquier.xyz>
Author: Michael Paquier <michael@paquier.xyz>
Co-authored-by: Erik Wienhold <ewie@ewie.name>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/aIGknLuc8b8ega2X@paquier.xyz
Backpatch-through: 13
Robert Haas [Mon, 28 Jul 2025 15:15:47 +0000 (11:15 -0400)]
Remove misleading hint for "unexpected data beyond EOF" error.
Commit ffae5cc5a6024b4e25ec920ed5c4dfac649605f8 added this hint in 2006,
but it's now obsolete and doesn't reflect what users should really check
in this situation. We were not able to agree on a new hint, so just delete
the existing one and update the comments to mention one possibility that
is known to cause problems of this kind: something other than PostgreSQL
is modifying files in the PostgreSQL data directory.
Author: Jakub Wartak <jakub.wartak@enterprisedb.com>
Reviewed-by: Robert Haas <rhaas@postgresql.org>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Christoph Berg <myon@debian.org>
Discussion: https://postgr.es/m/CAKZiRmxNbcaL76x=09Sxf7aUmrRQJBf8drzDdUHo+j9_eM+VMg@mail.gmail.com
As a result, an attempt to set synchronous_standby_names to any value
that does not parse resulted in a generic "parser failed." message
rather than anything more specific. This fixes that.
Michael Paquier [Sun, 27 Jul 2025 23:15:11 +0000 (08:15 +0900)]
Fix performance regression with flush of pending fixed-numbered stats
The callback added in fc415edf8ca8, used to check if there is any
pending data to flush for fixed-numbered statistics by looping across
all the builtin and custom stats kinds with a call to
have_fixed_pending_cb, is proving able to show up in workloads that do
not report any stats (read-only, no function calls, no WAL, no IO,
etc). The code used in v17 was cheaper than what HEAD has introduced,
relying on three boolean checks for WAL, SLRU and IO stats.
This commit switches the code to use a more efficient approach than fc415edf8ca8, with a single boolean flag that can be switched to "true"
by any fixed-numbered stats kinds to force pgstat_report_stat() to go
through one round of reports. The flag is reset by pgstat_report_stat()
once a full round of reports is done. The flag being false means that
fixed-numbered stats kinds saw no activity, and that there is no pending
data to flush.
ac000fca743e took one step in improving the performance by reducing the
number of stats kinds that the backend can hold. This commit takes a
more drastic step by bringing back the code efficiency to what it was
before v18 with a cheap check at the beginning of pgstat_report_stat()
for its fast-exit path.
The callback have_fixed_pending_cb is removed as an effect of all that.
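A hedged sketch of the resulting fast-exit shape (a simplified
signature; the flag and variable names here are illustrative, not
quoted from pgstat.c):

    /* set to true by any fixed-numbered stats kind with pending data */
    static bool pgstat_fixed_pending = false;

    void
    pgstat_report_stat(bool force)
    {
        /* cheap fast-exit: nothing variable-numbered, nothing fixed */
        if (!force &&
            dlist_is_empty(&pgStatPending) &&   /* hypothetical check */
            !pgstat_fixed_pending)
            return;

        /* ... perform one full round of reports ... */

        /* a full round flushed everything that was pending */
        pgstat_fixed_pending = false;
    }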
Process sync requests incrementally in AbsorbSyncRequests
If the number of sync requests is big enough, the palloc() call in
AbsorbSyncRequests() will attempt to allocate more than 1 GB of memory,
resulting in failure. This can lead to an infinite loop in the checkpointer
process, as it repeatedly fails to absorb the pending requests.
This commit introduces the following changes to cope with this problem:
1. Turn pending checkpointer requests array in shared memory into a bounded
ring buffer.
2. Limit maximum ring buffer size to 10M items.
3. Make AbsorbSyncRequests() process requests incrementally in 10K batches.
Even #2 alone makes the whole queue size fit within the maximum
palloc() size of 1 GB, while #3 avoids long periods of continuous lock
holding (see the sketch at the end of this entry).
This commit is for master only. A simpler fix, which just limits the
request queue size to 10M, will be backpatched.
Reported-by: Ekaterina Sokolova <e.sokolova@postgrespro.ru>
Discussion: https://postgr.es/m/db4534f83a22a29ab5ee2566ad86ca92%40postgrespro.ru
Author: Maxim Orlov <orlovmg@gmail.com>
Co-authored-by: Xuneng Zhou <xunengzhou@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com>
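A hedged sketch of the incremental absorption loop (constants from the
list above; the ring-buffer helpers are hypothetical names, not the
actual checkpointer.c code):

    #define ABSORB_BATCH_SIZE 10000     /* absorb requests in 10K batches */

    for (;;)
    {
        int     n;

        LWLockAcquire(CheckpointerCommLock, LW_EXCLUSIVE);
        n = Min(ring_count(), ABSORB_BATCH_SIZE);
        ring_pop_into(batch, n);        /* copy out up to one batch */
        LWLockRelease(CheckpointerCommLock);  /* lock dropped per batch */

        if (n == 0)
            break;
        for (int i = 0; i < n; i++)
            RememberSyncRequest(&batch[i].ftag, batch[i].type);
    }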
Michael Paquier [Sun, 27 Jul 2025 08:48:47 +0000 (17:48 +0900)]
Add assertions for all the required index AM callbacks
Similar checks are done for the mandatory table AM callbacks. A portion
of the index AM callbacks are optional and can be NULL; the rest are
mandatory and are documented as such in the documentation and in
amapi.h.
These checks are useful to detect quickly if all the mandatory callbacks
are defined when implementing a new index access method, as the
assertions are run when loading the AM.
Author: Japin Li <japinli@hotmail.com>
Discussion: https://postgr.es/m/ME0P300MB0445795D31CEAB92C58B41FDB651A@ME0P300MB0445.AUSP300.PROD.OUTLOOK.COM
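The checks amount to assertions like these when the AM's handler
returns its IndexAmRoutine (a partial, hedged sketch; see amapi.h for
the full list of mandatory callbacks):

    IndexAmRoutine *routine = GetIndexAmRoutine(amhandler);

    /* a few of the mandatory callbacks; optional ones (e.g. amgettuple
     * for bitmap-only AMs) are allowed to be NULL */
    Assert(routine->ambuild != NULL);
    Assert(routine->aminsert != NULL);
    Assert(routine->ambulkdelete != NULL);
    Assert(routine->amvacuumcleanup != NULL);
    Assert(routine->amcostestimate != NULL);
    Assert(routine->amoptions != NULL);
    Assert(routine->ambeginscan != NULL);
    Assert(routine->amrescan != NULL);
    Assert(routine->amendscan != NULL);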
Tom Lane [Fri, 25 Jul 2025 20:37:29 +0000 (16:37 -0400)]
Silence leakage complaint about postgres_fdw's InitPgFdwOptions.
Valgrind complains that the PQconninfoOption array returned by libpq
is leaked. We apparently believed that we could suppress that warning
by storing that array's address in a static variable. However, modern
C compilers are bright enough to optimize the static variable away.
We could escalate that arms race by making the variable global.
But on the whole it seems better to revise the code so that it
can free libpq's result properly. The only thing that costs
us is copying the parameter-name keywords; which seems like a
pretty negligible cost in a function that runs at most once per
process.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Matheus Alcantara <matheusssilv97@gmail.com>
Discussion: https://postgr.es/m/2976982.1748049023@sss.pgh.pa.us
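The revised approach boils down to this pattern (a hedged sketch, not
the exact postgres_fdw code; popt stands in for the option table being
built):

    PQconninfoOption *libpq_options = PQconndefaults();

    if (libpq_options == NULL)
        ereport(ERROR, (errmsg("out of memory")));

    for (PQconninfoOption *lopt = libpq_options; lopt->keyword; lopt++)
    {
        /* copy the keyword so libpq's array need not outlive this loop */
        popt->keyword = pstrdup(lopt->keyword);
        /* ... */
    }

    PQconninfoFree(libpq_options);  /* now libpq's result is freed properly */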
Tom Lane [Fri, 25 Jul 2025 20:36:44 +0000 (16:36 -0400)]
Run pgindent on the changes of the previous patch.
This step can be checked mechanically.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Matheus Alcantara <matheusssilv97@gmail.com>
Discussion: https://postgr.es/m/2976982.1748049023@sss.pgh.pa.us
Tom Lane [Fri, 25 Jul 2025 20:31:43 +0000 (16:31 -0400)]
Reap the benefits of not having to avoid leaking PGresults.
Remove a bunch of PG_TRY constructs, de-volatilize related
variables, remove some PQclear calls in error paths.
Aside from making the code simpler and shorter, this should
provide some marginal performance gains.
For ease of review, I did not re-indent code within the removed
PG_TRY constructs. That'll be done in a separate patch.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Matheus Alcantara <matheusssilv97@gmail.com>
Discussion: https://postgr.es/m/2976982.1748049023@sss.pgh.pa.us
Tom Lane [Fri, 25 Jul 2025 20:30:00 +0000 (16:30 -0400)]
Create infrastructure to reliably prevent leakage of PGresults.
Commit 232d8caea fixed a case where postgres_fdw could lose track
of a PGresult object, resulting in a process-lifespan memory leak.
But I have little faith that there aren't other potential PGresult
leakages, now or in future, in the backend modules that use libpq.
Therefore, this patch proposes infrastructure that makes all
PGresults returned from libpq act as though they are palloc'd
in the CurrentMemoryContext (with the option to relocate them to
another context later). This should greatly reduce the risk of
careless leaks, and it also permits removal of a bunch of code
that attempted to prevent such leaks via PG_TRY blocks.
This patch adds infrastructure that wraps each PGresult in a
"libpqsrv_PGresult" that provides a memory context reset callback
to PQclear the PGresult. Code using this abstraction is inherently
memory-safe to the same extent as we are accustomed to in most backend
code. Furthermore, we add some macros that automatically redirect
calls of the libpq functions concerned with PGresults to use this
infrastructure, so that almost no source-code changes are needed to
wheel this infrastructure into place in all the backend code that
uses libpq.
Perhaps in future we could create similar infrastructure for
PGconn objects, but there seems less need for that.
This patch just creates the infrastructure and makes relevant code
use it, including reverting 232d8caea in favor of this mechanism.
A good deal of follow-on simplification is possible now that we don't
have to be so cautious about freeing PGresults, but I'll put that in
a separate patch.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Matheus Alcantara <matheusssilv97@gmail.com>
Discussion: https://postgr.es/m/2976982.1748049023@sss.pgh.pa.us
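The core trick is a memory context reset callback that owns the
PGresult, roughly like this (a hedged sketch of the wrapper; the real
libpq-be-fe-helpers.h code differs in detail):

    typedef struct libpqsrv_PGresult
    {
        PGresult   *res;            /* the wrapped libpq result */
        MemoryContextCallback cb;   /* fires on context reset/delete */
    } libpqsrv_PGresult;

    static void
    libpqsrv_PGresult_reset_cb(void *arg)
    {
        PQclear(((libpqsrv_PGresult *) arg)->res);
    }

    /* wrap: the PGresult now acts as if palloc'd in CurrentMemoryContext */
    libpqsrv_PGresult *w = palloc(sizeof(libpqsrv_PGresult));
    w->res = res;
    w->cb.func = libpqsrv_PGresult_reset_cb;
    w->cb.arg = w;
    MemoryContextRegisterResetCallback(CurrentMemoryContext, &w->cb);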
This flag was effectively a no-op in EXEC_BACKEND (ie, Windows)
builds, because it was kept in the process-local HTAB struct,
and it could only ever become set in the postmaster's copy.
The simplest fix is to move it to the shared HASHHDR struct.
We could keep a copy in HTAB as well, as we do with keysize
and some other fields, but the "too much contention" argument
doesn't seem to apply here: we only examine isfixed during
element_alloc(), which had better not get hit very often for
a shared hashtable.
This oversight dates to 7c797e719 which invented the option.
But back-patching doesn't seem appropriate given the lack of
field complaints. If there is anyone running an affected
workload on Windows, they might be unhappy about the behavior
changing in a minor release.
Author: Aidar Imamov <a.imamov@postgrespro.ru>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/4d0cb35ff01c5c74d2b9a582ecb73823@postgrespro.ru
Refactor grammar to create opt_utility_option_list
This changes the grammar for REINDEX, CHECKPOINT, CLUSTER, ANALYZE/ANALYSE;
they still accept the same options as before, but the grammar is written
differently for convenience of future development.
Fix background worker not restarting after crash-and-restart cycle.
Previously, if a background worker crashed (e.g., due to a SIGKILL) and
the server restarted due to restart_after_crash being enabled,
the worker was not restarted as expected. Background workers without
the never-restart flag should automatically restart in this case.
This issue was introduced in commit 28a520c0b77, which failed to reset
the rw_pid field in the RegisteredBgWorker struct for the crashed worker.
This commit fixes the problem by resetting rw_pid for all eligible
background workers during the crash-and-restart cycle.
Back-patched to v18, where the bug was introduced.
Bug fix patches were proposed by Andrey Rudometov and ChangAo Chen,
but this commit uses a different approach.
Michael Paquier [Fri, 25 Jul 2025 07:17:13 +0000 (16:17 +0900)]
Fix assertion failure with latch wait in single-user mode
LatchWaitSetPostmasterDeathPos, the latch event position for the
postmaster death event, is initialized under IsUnderPostmaster.
WaitLatch() considered it as a valid wait target in single-user mode
(!IsUnderPostmaster), which was incorrect.
One code path found to fail with an assertion failure is a database drop
in single-user mode while waiting in WaitForProcSignalBarrier() after
the drop.
Michael Paquier [Fri, 25 Jul 2025 02:17:48 +0000 (11:17 +0900)]
Lower bounds related to pgstats kinds
This commit changes stats kinds to have the following bounds, making
their handling in core cheaper by default:
- PGSTAT_KIND_CUSTOM_MIN 128 -> 24
- PGSTAT_KIND_MAX 256 -> 32
The original numbers were rather high, and showed an impact on
performance in pgstat_report_stat() for the case of simple queries with
its early-exit path if there are no pending statistics to flush. This
logic will be improved more in a follow-up commit to bring the
performance of pgstat_report_stat() on par with v17 and older versions.
Lowering the bounds is a change worth doing on its own, independently of
the other improvement.
These new numbers should be enough to leave some room for the following
years for built-in and custom stats kinds, with stable ID numbers. At
least that should be enough to start with this facility for extension
developers. It can always be increased in the tree depending on future
requirements.
Per discussion with Andres Freund and Bertrand Drouvot.
This function is declared as returning a uint8, but it returns a
bool in one code path. To fix, return (uint8) 0 instead of false
there. This should behave exactly the same as before, but it might
prevent future compiler complaints.
Amit Kapila [Thu, 24 Jul 2025 09:05:32 +0000 (09:05 +0000)]
Fix duplicate transaction replay during pg_createsubscriber.
Previously, the tool could replay the same transaction twice, once during
recovery, then again during replication after the subscriber was set up.
This occurred because the same recovery_target_lsn was used both to
finalize recovery and to start replication. If
recovery_target_inclusive = true, the transaction at that LSN would be
applied during recovery and then sent again by the publisher, leading
to duplication.
To prevent this, we now set recovery_target_inclusive = false. This
ensures the transaction at recovery_target_lsn is not reapplied during
recovery, avoiding duplication when replication begins.
Bug: #18897
Reported-by: Zane Duffield <duffieldzane@gmail.com>
Author: Shlok Kyal <shlok.kyal.oss@gmail.com>
Reviewed-by: vignesh C <vignesh21@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Backpatch-through: 17, where it was introduced
Discussion: https://postgr.es/m/18897-d3db67535860dddb@postgresql.org
Michael Paquier [Thu, 24 Jul 2025 06:41:18 +0000 (15:41 +0900)]
Introduce field tracking cached plan type in PlannedStmt
PlannedStmt gains a new field, called CachedPlanType, able to track
whether a given plan tree originates from the cache and whether we are
dealing with a generic or custom cached plan.
This field can be used for monitoring or statistical purposes, in the
executor hooks, for example, based on the planned statement attached to
a QueryDesc. A patch is under discussion for pg_stat_statements to
provide an equivalent of the counters in pg_prepared_statements for
custom and generic plans, to provide a more global view of such data, as
this data is now restricted to the current session.
The concept introduced in this commit is useful on its own, and has been
extracted from a larger patch by the same author.
Author: Sami Imseih <samimseih@gmail.com>
Reviewed-by: Andrei Lepikhov <lepihov@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/CAA5RZ0uFw8Y9GCFvafhC=OA8NnMqVZyzXPfv_EePOt+iv1T-qQ@mail.gmail.com
Ensure the test waits for the apply worker to exit after disabling the
subscription. This is necessary to safely enable the retain_dead_tuples
option. Also added a similar wait in another part of the test to prevent
unintended apply worker activity that could lead to test failures
post-subscription disable.
Tom Lane [Wed, 23 Jul 2025 19:44:29 +0000 (15:44 -0400)]
Fix build breakage on Solaris-alikes with late-model GCC.
Solaris has never bothered to add "const" to the second argument
of PAM conversation procs, as all other Unixen did decades ago.
This resulted in an "incompatible pointer" compiler warning when
building --with-pam, but had no more serious effect than that,
so we never did anything about it. However, as of GCC 14 the
case is an error, not a warning, by default.
To complicate matters, recent OpenIndiana (and maybe illumos
in general?) *does* supply the "const" by default, so we can't
just assume that platforms using our solaris template need help.
What we can do, short of building a configure-time probe,
is to make solaris.h #define _PAM_LEGACY_NONCONST, which
causes OpenIndiana's pam_appl.h to revert to the traditional
definition, and hopefully will have no effect anywhere else.
Then we can use that same symbol to control whether we include
"const" in the declaration of pam_passwd_conv_proc().
Bug: #18995
Reported-by: Andrew Watkins <awatkins1966@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/18995-82058da9ab4337a7@postgresql.org
Backpatch-through: 13
lwlock.c, lwlock.h, and wait_event_names.txt each contain a list of
built-in LWLock tranches. It is easy to miss one or the other when
adding or removing tranches, and discrepancies have adverse effects
(e.g., breaking JOINs between pg_stat_activity and pg_wait_events).
This commit moves the lists of built-in tranches in lwlock.{c,h} to
lwlocklist.h and adds a cross-check to the script that generates
lwlocknames.h. If the lists do not match exactly, building will
fail.
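As an example of the kind of query such a discrepancy breaks (a sketch;
pg_wait_events exposes type, name, and description columns):
SELECT a.pid, a.wait_event_type, a.wait_event, w.description
FROM pg_stat_activity a
JOIN pg_wait_events w
  ON a.wait_event_type = w.type AND a.wait_event = w.name;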
Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/aHpOgwuFQfcFMZ/B%40ip-10-97-1-34.eu-west-3.compute.internal
Commit 70e81861fadd split out xlogrecovery.c/h and moved some enums
related to recovery targets to xlogrecovery.h. However, it seems that
the enum RecoveryTargetAction was inadvertently left out by that commit.
This commit moves it to xlogrecovery.h for consistency.
Amit Kapila [Wed, 23 Jul 2025 02:56:00 +0000 (02:56 +0000)]
Preserve conflict-relevant data during logical replication.
Logical replication requires reliable conflict detection to maintain data
consistency across nodes. To achieve this, we must prevent premature
removal of tuples deleted by other origins and their associated commit_ts
data by VACUUM, which could otherwise lead to incorrect conflict reporting
and resolution.
This patch introduces a mechanism to retain deleted tuples on the
subscriber during the application of concurrent transactions from remote
nodes. Retaining these tuples allows us to correctly ignore concurrent
updates to the same tuple. Without this, an UPDATE might be misinterpreted
as an INSERT during resolution due to the absence of the original tuple.
Additionally, we ensure that origin metadata is not prematurely removed by
vacuum freeze, which is essential for detecting update_origin_differs and
delete_origin_differs conflicts.
To support this, a new replication slot named pg_conflict_detection is
created and maintained by the launcher on the subscriber. Each apply
worker tracks its own non-removable transaction ID, which the launcher
aggregates to determine the appropriate xmin for the slot, thereby
retaining necessary tuples.
Conflict information retention (deleted tuples and commit_ts) can be
enabled per subscription via the retain_conflict_info option. This is
disabled by default to avoid unnecessary overhead for configurations that
do not require conflict resolution or logging.
During upgrades, if any subscription on the old cluster has
retain_conflict_info enabled, a conflict detection slot will be created to
protect relevant tuples from deletion when the new cluster starts.
This is a foundational work to correctly detect update_deleted conflict
which will be done in a follow-up patch.
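A minimal sketch of enabling the option described above (connection
string and names are placeholders):
CREATE SUBSCRIPTION mysub
  CONNECTION 'host=publisher dbname=postgres'
  PUBLICATION mypub
  WITH (retain_conflict_info = true);
-- The launcher then maintains the pg_conflict_detection slot so that
-- the tuples and commit_ts data needed for conflict detection are
-- retained.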
David Rowley [Wed, 23 Jul 2025 00:02:55 +0000 (12:02 +1200)]
Use strchr instead of strstr for single-char lookups
Compilers such as gcc and clang seem to perform this rewrite
automatically when the lookup string is known at compile-time to contain
a single character. The MSVC compiler does not seem to apply the same
optimization, and the code being adjusted here is within an #ifdef WIN32
block, so it seems worth adjusting it on the assumption that strchr()
will be slightly more performant.
There are a couple more instances in contrib/fuzzystrmatch that this
commit could also have adjusted. After some discussion, we deemed those
not important enough to bother with.
Michael Paquier [Tue, 22 Jul 2025 23:18:36 +0000 (08:18 +0900)]
ecpg: Improve error detection around ecpg_strdup()
Various code paths in the ECPG code did not check for memory allocation
failures, including the specific case where ecpg_strdup() treats a NULL
input value as valid behavior. Since strdup() itself returns NULL on
failure, there was no way to tell the difference between what could be
valid and what should fail.
With the different cases in mind, ecpg_strdup() is redesigned and gains
a new optional argument, giving its callers the possibility to
distinguish allocation failures from valid cases where the caller passes
a NULL value in input. Most of the ECPG code does not expect a NULL
value, with the exception of ECPGget_desc() (setlocale) and
ECPGconnect() (e.g., dbname being unspecified), which make repeated
strdup calls.
The code is adapted to work with this new routine. Note the case of
ecpg_auto_prepare(), where the code order is switched so that we handle
failures from ecpg_strdup() before manipulating any cached data,
avoiding inconsistencies.
This class of failure is unlikely to be a problem in practice, so no
backpatch is done. Still, random OOM failures in ECPGconnect() could
cause the driver to connect to a different server than the one wanted by
the caller, because it could fall back to default values instead of the
defined parameters, depending on the combination of allocation failures
and successes.
Author: Evgeniy Gorbanev <gorbanyoves@basealt.ru>
Co-authored-by: Aleksander Alekseev <aleksander@tigerdata.com>
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/a6b193c1-6994-4d9c-9059-aca4aaf41ddd@basealt.ru
Remove translation marker from libpq-be-fe-helpers.h.
Commit 112faf1378e introduced a translation marker in libpq-be-fe-helpers.h,
but this caused build failures on some platforms, such as the one reported
by buildfarm member indri, due to linker issues with dblink. This is the same
problem previously addressed in commit 213c959a294.
To fix the issue, this commit removes the translation marker from
libpq-be-fe-helpers.h, following the approach used in 213c959a294.
It also removes the associated gettext_noop() calls added in commit 112faf1378e, as they are no longer needed.
While reviewing this, a gettext_noop() call was also found in
contrib/basic_archive. Since contrib modules don't support translation,
this call has been removed as well.
Andres Freund [Tue, 22 Jul 2025 12:30:52 +0000 (08:30 -0400)]
aio: Fix assertion, clarify README
The assertion wouldn't have triggered for a long while yet, but this won't
accidentally fail to detect the issue if/when it occurs.
Author: Matthias van de Meent <boekewurm+postgres@gmail.com>
Discussion: https://postgr.es/m/CAEze2Wj-43JV4YufW23gm=Uwr7Lkj+p0yKctKHxNm1rwFC+_DQ@mail.gmail.com
Backpatch-through: 18
Amit Kapila [Tue, 22 Jul 2025 06:00:21 +0000 (06:00 +0000)]
Doc: Fix logical replication examples.
The definition of \dRp+ was modified in commit 7054186c4e. This patch
updates the column list and row filter examples to align with the revised
definition.
Author: Shlok Kyal <shlok.kyal.oss@gmail.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Backpatch-through: 18, where it was introduced
Discussion: https://postgr.es/m/CANhcyEUvqkSO6b9zi_fs_BBPEge5acj4mf8QKmq2TX-7axa7EQ@mail.gmail.com
Michael Paquier [Tue, 22 Jul 2025 05:34:15 +0000 (14:34 +0900)]
doc: Inform about aminsertcleanup optional NULLness
This index AM callback was introduced in c1ec02be1d79 and is optional,
currently only being used by BRIN. Optional callbacks are documented
with NULL as a possible value in amapi.h and indexam.sgml, but this
callback was missing that part of the description.
Reported-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Japin Li <japinli@hotmail.com>
Discussion: https://postgr.es/m/CAHut+PvgYcPmPDi1YdHMJY5upnyGRpc0N8pk1xNB11xDSBwNog@mail.gmail.com
Backpatch-through: 17
Log remote NOTICE, WARNING, and similar messages using ereport().
Previously, NOTICE, WARNING, and similar messages received from remote
servers over replication, postgres_fdw, or dblink connections were printed
directly to stderr on the local server (e.g., the subscriber). As a result,
these messages lacked log prefixes (e.g., timestamp), making them harder
to trace and correlate with other log entries.
This commit addresses the issue by introducing a custom notice receiver
for replication, postgres_fdw, and dblink connections. These messages
are now logged via ereport(), ensuring they appear in the logs with proper
formatting and context, which improves clarity and aids in debugging.
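For example, a NOTICE raised on the remote side of a dblink connection
now reaches the local server log through ereport() (a sketch; the
connection string is a placeholder):
SELECT dblink_exec('host=remote dbname=postgres',
                   'DO $$ BEGIN RAISE NOTICE ''hello''; END $$;');
-- Previously the remote NOTICE was written bare to the local stderr;
-- now it carries the usual log_line_prefix fields.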
Michael Paquier [Tue, 22 Jul 2025 05:00:00 +0000 (14:00 +0900)]
ecpg: Fix NULL pointer dereference during connection lookup
ECPGconnect() caches established connections to the server, supporting
the case of a NULL connection name when a database name is not specified
by its caller.
A follow-up call to ECPGget_PGconn() to get an established connection
from the cached set with a non-NULL name could cause a NULL pointer
dereference if a connection with a NULL name was listed in the cache and
checked for a match. At least two connections are necessary to reproduce
the issue:
one with a NULL name and one with a non-NULL name.
Author: Aleksander Alekseev <aleksander@tigerdata.com>
Discussion: https://postgr.es/m/CAJ7c6TNvFTPUTZQuNAoqgzaSGz-iM4XR61D7vEj5PsQXwg2RyA@mail.gmail.com
Backpatch-through: 13
Richard Guo [Tue, 22 Jul 2025 02:21:36 +0000 (11:21 +0900)]
Reduce "Var IS [NOT] NULL" quals during constant folding
In commit b262ad440, we introduced an optimization that reduces an IS
[NOT] NULL qual on a NOT NULL column to constant true or constant
false, provided we can prove that the input expression of the NullTest
is not nullable by any outer joins or grouping sets. This deduction
happens quite late in the planner, during the distribution of quals to
rels in query_planner. However, this approach has some drawbacks: we
can't perform any further folding with the constant, and it turns out
to be prone to bugs.
Ideally, this deduction should happen during constant folding.
However, the per-relation information about which columns are defined
as NOT NULL is not available at that point. This information is
currently collected from catalogs when building RelOptInfos for base
or "other" relations.
This patch moves the collection of NOT NULL attribute information for
relations before pull_up_sublinks, storing it in a hash table keyed by
relation OID. It then uses this information to perform the NullTest
deduction for Vars during constant folding. This also makes it
possible to leverage this information to pull up NOT IN subqueries.
Note that this patch does not get rid of restriction_is_always_true
and restriction_is_always_false. Removing them would prevent us from
reducing some IS [NOT] NULL quals that we were previously able to
reduce, because (a) the self-join elimination may introduce new IS NOT
NULL quals after constant folding, and (b) if some outer joins are
converted to inner joins, previously irreducible NullTest quals may
become reducible.
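For example (a sketch), the deduction now lets the qual vanish during
constant folding:
CREATE TABLE t (a int NOT NULL, b int);
-- "a IS NOT NULL" on a NOT NULL column reduces to constant true, so
-- the qual disappears from the plan entirely.
EXPLAIN SELECT * FROM t WHERE a IS NOT NULL;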
Author: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAMbWs4-bFJ1At4btk5wqbezdu8PLtQ3zv-aiaY3ry9Ymm=jgFQ@mail.gmail.com
Richard Guo [Tue, 22 Jul 2025 02:20:40 +0000 (11:20 +0900)]
Centralize collection of catalog info needed early in the planner
There are several pieces of catalog information that need to be
retrieved for a relation during the early stage of planning. These
include relhassubclass, which is used to clear the inh flag if the
relation has no children, as well as a column's attgenerated and
default value, which are needed to expand virtual generated columns.
More such information may be required in the future.
Currently, these pieces of catalog data are collected in multiple
places, resulting in repeated table_open/table_close calls for each
relation in the rangetable. This patch centralizes the collection of
all required early-stage catalog information into a single loop over
the rangetable, allowing each relation to be opened and closed only
once.
Author: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAMbWs4-bFJ1At4btk5wqbezdu8PLtQ3zv-aiaY3ry9Ymm=jgFQ@mail.gmail.com
Richard Guo [Tue, 22 Jul 2025 02:19:17 +0000 (11:19 +0900)]
Expand virtual generated columns before sublink pull-up
Currently, we expand virtual generated columns after we have pulled up
any SubLinks within the query's quals. This ensures that the virtual
generated column references within SubLinks that should be transformed
into joins are correctly expanded. This approach works well and has
posed no issues.
In an upcoming patch, we plan to centralize the collection of catalog
information needed early in the planner. This will help avoid
repeated table_open/table_close calls for relations in the rangetable.
Since this information is required during sublink pull-up, we are
moving the expansion of virtual generated columns to occur beforehand.
To achieve this, if any EXISTS SubLinks can be pulled up, their
rangetables are processed just before pulling them up.
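For context, a sketch of a virtual generated column referenced inside an
EXISTS sublink (names are placeholders):
CREATE TABLE other (x int);
CREATE TABLE src (a int, doubled int GENERATED ALWAYS AS (a * 2) VIRTUAL);
-- The reference to "doubled" must be expanded to (a * 2) before the
-- sublink can be pulled up into a join.
SELECT 1 FROM other o
WHERE EXISTS (SELECT 1 FROM src s WHERE s.doubled = o.x);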
Author: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAMbWs4-bFJ1At4btk5wqbezdu8PLtQ3zv-aiaY3ry9Ymm=jgFQ@mail.gmail.com
pg_dump: include comments on not-null constraints on domains, too
Commit e5da0fe3c22b introduced catalog entries for not-null constraints
on domains; but because commit b0e96f311985 (the original work for
catalogued not-null constraints on tables) forgot to teach pg_dump to
process the comments for them, this one also forgot. Add that now.
We also need to teach repairDependencyLoop() about the new type of
constraints being possible for domains.
Backpatch-through: 17
Co-authored-by: jian he <jian.universality@gmail.com>
Co-authored-by: Álvaro Herrera <alvherre@kurilemu.de>
Reported-by: jian he <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxF-0bqVR=j4jonS6N2Ka6hHUpFyu3_3TWKNhOW_4yFSSg@mail.gmail.com
doc: Document reopen of output file via SIGHUP in pg_recvlogical.
When pg_recvlogical receives a SIGHUP signal, it closes the current
output file and reopens a new one. This is useful since it allows us to
rotate the output file by renaming the current file and sending a SIGHUP.
This behavior was previously undocumented. This commit adds
the missing documentation.
Tom Lane [Sat, 19 Jul 2025 18:23:02 +0000 (14:23 -0400)]
Mostly-cosmetic adjustments to estimate_multivariate_bucketsize().
The only practical effect of these changes is to avoid a useless
list_copy() operation when there is a single hashclause. That's
never going to make any noticeable performance difference, but
the code is arguably clearer this way, especially if we take the
opportunity to add some comments so that readers don't have to
reverse-engineer the usage of these local variables. Also add
some braces for better/more consistent style.
Author: Tender Wang <tndrwang@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAHewXNnHBOO9NEa=NBDYOrwZL4oHu2NOcTYvqyNyWEswo8f5OQ@mail.gmail.com
This commit is only for HEAD and v18, where the test has been removed.
It also incorporates the improvements below to the stability and coverage
of the original test, which were already backpatched to v17.
- Add one pg_logical_emit_message() call to force the creation of a record
that spans two pages.
- Make the logic wait for checkpoint completion.
Author: Alexander Korotkov <akorotkov@postgresql.org>
Co-authored-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Backpatch-through: 18
Improve the stability of the recovery test 047_checkpoint_physical_slot
Currently, the comments in 047_checkpoint_physical_slot show an
incomplete intention to wait for checkpoint completion before performing
an immediate database stop. However, an immediate node stop can occur
both before and after checkpoint completion, and both cases should work
correctly. Still, we would like the test to be more stable and
deterministic, which is why this commit makes the test explicitly wait
for the checkpoint completion log message.
Discussion: https://postgr.es/m/CAPpHfdurV-j_e0pb%3DUFENAy3tyzxfF%2ByHveNDNQk2gM82WBU5A%40mail.gmail.com
Discussion: https://postgr.es/m/aHXLep3OaX_vRTNQ%40paquier.xyz
Author: Alexander Korotkov <akorotkov@postgresql.org>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Backpatch-through: 17
Fix infinite wait when reading a partially written WAL record
If a crash occurs while writing a WAL record that spans multiple pages, the
recovery process marks the page with the XLP_FIRST_IS_OVERWRITE_CONTRECORD
flag. However, logical decoding currently attempts to read the full WAL
record based on its expected size before checking this flag, which can lead
to an infinite wait if the remaining data is never written (e.g., no activity
after crash).
This patch updates the logic to first read the page header and check for
the XLP_FIRST_IS_OVERWRITE_CONTRECORD flag before attempting to reconstruct
the full WAL record. If the flag is set, decoding correctly identifies
the record as incomplete and avoids waiting for WAL data that will never
arrive.
Discussion: https://postgr.es/m/CAAKRu_ZCOzQpEumLFgG_%2Biw3FTa%2BhJ4SRpxzaQBYxxM_ZAzWcA%40mail.gmail.com
Discussion: https://postgr.es/m/CALDaNm34m36PDHzsU_GdcNXU0gLTfFY5rzh9GSQv%3Dw6B%2BQVNRQ%40mail.gmail.com
Author: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com>
Backpatch-through: 13
Michael Paquier [Sat, 19 Jul 2025 06:03:14 +0000 (15:03 +0900)]
Check status of nodes after regression test run in 027_stream_regress
This commit improves the recovery TAP test 027_stream_regress so that
regression diffs are printed only if both the primary and the standby
are still alive after the main regression test suite finishes, relying
on d4c9195eff41 to do the job.
In particular, a crash of the primary could fill the reported contents
with mostly useless data, as the diffs would refer to queries that
failed to run, not necessarily to the cause of the crash.
Suggested-by: Andres Freund <andres@anarazel.de>
Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
Discussion: https://postgr.es/m/CAN55FZ1D6KXvjSs7YGsDeadqCxNF3UUhjRAfforzzP0k-cE=bA@mail.gmail.com
Michael Paquier [Sat, 19 Jul 2025 05:38:52 +0000 (14:38 +0900)]
Add PostgreSQL::Test::Cluster::is_alive()
This new routine acts as a wrapper of pg_isready, that can be run on a
node to check its connection status. This will be used in a recovery
test in a follow-up commit.
Suggested-by: Andres Freund <andres@anarazel.de>
Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
Discussion: https://postgr.es/m/CAN55FZ1D6KXvjSs7YGsDeadqCxNF3UUhjRAfforzzP0k-cE=bA@mail.gmail.com
Tom Lane [Fri, 18 Jul 2025 20:42:02 +0000 (16:42 -0400)]
Speed up byteain by not parsing traditional-style input twice.
Instead of laboriously computing the exact output length, use strlen
to get an upper bound cheaply. (This is still O(N) of course, but
the constant factor is a lot less.) This will typically result in
overallocating the output datum, but that's of little concern since
it's a short-lived allocation in just about all use-cases.
A simple microbenchmark showed about 40% speedup for long input
strings.
While here, make some cosmetic cleanups and add a test case that
covers the double-backslash code path in byteain and byteaout.
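For example, traditional escape-format input exercises that path (a
sketch; assumes standard_conforming_strings is on):
-- "\\" in escape format denotes a single backslash byte; byteain now
-- sizes the output using strlen() instead of pre-parsing the string.
SELECT 'a\\b'::bytea;  -- yields \x615c62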
Author: Steven Niu <niushiji@gmail.com>
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>
Reviewed-by: Stepan Neretin <slpmcf@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/ca315729-140b-426e-81a6-6cd5cfe7ecc5@gmail.com
Currently, pg_dump generates commands like
SELECT pg_catalog.lo_create('5432');
ALTER LARGE OBJECT 5432 OWNER TO alice;
GRANT SELECT ON LARGE OBJECT 5432 TO bob;
for each large object. This is particularly slow at restore time,
especially when there are tens or hundreds of millions of large
objects. From reports and personal experience, such slow restores
seem to be most painful when encountered during pg_upgrade. This
commit teaches pg_dump to instead dump pg_largeobject_metadata and
the corresponding pg_shdepend rows when in binary upgrade mode,
i.e., pg_dump now generates commands like
COPY pg_catalog.pg_shdepend (dbid, classid, objid, objsubid, refclassid, refobjid, deptype) FROM stdin;
5 2613 5432 0 1260 16384 o
5 2613 5432 0 1260 16385 a
\.
Testing indicates the COPY approach can be significantly faster.
To do any better, we'd probably need to find a way to copy/link
pg_largeobject_metadata's files during pg_upgrade, which would be
limited to upgrades from >= v16 (since commit 7b378237aa changed
the storage format for aclitem, which is used for
pg_largeobject_metadata.lomacl).
Note that this change only applies to binary upgrade mode (i.e.,
dumps initiated by pg_upgrade) since it inserts rows directly into
catalogs. Also, this optimization can only be used for upgrades
from >= v12 because pg_largeobject_metadata was created WITH OIDS
in older versions, which prevents pg_dump from handling
pg_largeobject_metadata.oid properly. With some extra effort, it
might be possible to support upgrades from older versions, but the
added complexity didn't seem worth it to support versions that will
have been out-of-support for nearly 3 years by the time this change
is released.
Experienced hackers may remember that prior to v12, pg_upgrade
copied/linked pg_largeobject_metadata's files (see commit 12a53c732c). Besides the aforementioned storage format issues,
this approach failed to transfer the relevant pg_shdepend rows, and
pg_dump still had to generate an lo_create() command per large
object so that creating the dependent comments and security labels
worked. We could perhaps adopt a hybrid approach for upgrades from
v16 and newer (i.e., generate lo_create() commands for each large
object, copy/link pg_largeobject_metadata's files, and COPY the
relevant pg_shdepend rows), but further testing is needed.
Reported-by: Hannu Krosing <hannuk@google.com>
Suggested-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Hannu Krosing <hannuk@google.com>
Reviewed-by: Nitin Motiani <nitinmotiani@google.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAMT0RQSS-6qLH%2BzYsOeUbAYhop3wmQTkNmQpo5--QRDUR%2BqYmQ%40mail.gmail.com
Dean Rasheed [Fri, 18 Jul 2025 08:55:43 +0000 (09:55 +0100)]
Fix concurrent update trigger issues with MERGE in a CTE.
If a MERGE inside a CTE attempts an UPDATE or DELETE on a table with
BEFORE ROW triggers, and a concurrent UPDATE or DELETE happens, the
merge code would fail (crashing in the case of an UPDATE action, and
potentially executing the wrong action for a DELETE action).
This is the same issue that 9321c79c86 attempted to fix, except now
for a MERGE inside a CTE. As noted in 9321c79c86, what needs to happen
is for the trigger code to exit early, returning the TM_Result and
TM_FailureData information to the merge code, if a concurrent
modification is detected, rather than attempting to do an EPQ
recheck. The merge code will then do its own rechecking, and rescan
the action list, potentially executing a different action in light of
the concurrent update. In particular, the trigger code must never call
ExecGetUpdateNewTuple() for MERGE, since that is bound to fail because
MERGE has its own per-action projection information.
Commit 9321c79c86 did this using estate->es_plannedstmt->commandType
in the trigger code to detect that a MERGE was being executed, which
is fine for a plain MERGE command, but does not work for a MERGE
inside a CTE. Fix by passing that information to the trigger code as
an additional parameter passed to ExecBRUpdateTriggers() and
ExecBRDeleteTriggers().
Back-patch as far as v17 only, since MERGE cannot appear inside a CTE
prior to that. Additionally, take care to preserve the trigger ABI in
v17 (though not in v18, which is still in beta).
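A sketch of the affected shape (table and column names are placeholders;
MERGE ... RETURNING inside a CTE requires v17 or later):
WITH m AS (
  MERGE INTO target t
  USING source s ON t.id = s.id
  WHEN MATCHED THEN UPDATE SET val = s.val
  WHEN NOT MATCHED THEN INSERT (id, val) VALUES (s.id, s.val)
  RETURNING merge_action(), t.id
)
SELECT * FROM m;
-- With BEFORE ROW triggers on "target" and a concurrent UPDATE or
-- DELETE, the UPDATE action here could previously crash.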
Bug: #18986
Reported-by: Yaroslav Syrytsia <me@ys.lc>
Author: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/18986-e7a8aac3d339fa47@postgresql.org
Backpatch-through: 17
Support for deparsing of ArrayCoerceExpr node in contrib/postgres_fdw
When using a prepared statement to select data from a PostgreSQL foreign
table (postgres_fdw) with a "field = ANY($1)" expression, the operation
is not pushed down when an implicit type cast is applied and a generic
plan is used. This commit resolves the issue by supporting push-down of
ArrayCoerceExpr, which is used in this case. The support is quite
straightforward and similar to other nodes, such as RelabelType.
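A sketch of the affected case (foreign table and column are
placeholders):
-- "field" is text on the foreign table ft; a varchar[] parameter needs
-- an implicit array coercion (ArrayCoerceExpr), which can now be
-- deparsed and pushed down even with a generic plan.
PREPARE p(varchar[]) AS SELECT * FROM ft WHERE field = ANY($1);
EXECUTE p(ARRAY['x', 'y']::varchar[]);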
Discussion: https://postgr.es/m/4f0cea802476d23c6e799512ffd17aff%40postgrespro.ru
Author: Alexander Pyhalov <a.pyhalov@postgrespro.ru> Reviewed-by: Maxim Orlov <orlovmg@gmail.com> Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com>
Tom Lane [Thu, 17 Jul 2025 16:46:38 +0000 (12:46 -0400)]
Fix PQport to never return NULL unless the connection is NULL.
This is the documented behavior, and it worked that way before
v10. However, addition of the connhost[] array created cases
where conn->connhost[conn->whichhost].port is NULL. The rest
of libpq is careful to substitute DEF_PGPORT[_STR] for a null
or empty port string, but we failed to do so here, leading to
possibly returning NULL. As of v18 that causes psql's \conninfo
command to segfault. Older psql versions avoid that, but it's
pretty likely that other clients have trouble with this,
so we'd better back-patch the fix.
In stable branches, just revert to our historical behavior of
returning an empty string when there was no user-given port
specification. However, it seems substantially more useful and
indeed more correct to hand back DEF_PGPORT_STR in such cases,
so let's make v18 and master do that.
Author: Daniele Varrazzo <daniele.varrazzo@gmail.com>
Reviewed-by: Laurenz Albe <laurenz.albe@cybertec.at>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CA+mi_8YTS8WPZPO0PAb2aaGLwHuQ0DEQRF0ZMnvWss4y9FwDYQ@mail.gmail.com
Backpatch-through: 13
We have an assertion to ensure that a command tag has been assigned by
the time we're done executing, but if we happen to execute a command
with no queries, the assertion would fail. Per discussion, rather than
contort things to get a tag assigned, just remove the assertion.
Oversight in 2f9661311b83. That commit also retained a comment that
explained logic which had been adjacent to it but was diffused into
various places, leaving no single place apt to keep that part of the
comment. Remove that part, and rewrite what remains for extra clarity.
Bug: #18984
Backpatch-through: 13
Reported-by: Aleksander Alekseev <aleksander@tigerdata.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Michaël Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/18984-0f4778a6599ac3ae@postgresql.org
This commit adds a note to the pg_overexplain page that describes
how to use it (LOAD, session_preload_libraries, or
shared_preload_libraries). The new text is mostly lifted from the
auto_explain page. We should probably consider centralizing this
information in the future.
While at it, add a missing "module" to the opening sentence.
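For example (a sketch; pg_overexplain adds extra EXPLAIN options once
loaded):
LOAD 'pg_overexplain';
EXPLAIN (DEBUG) SELECT 1;
-- Alternatively, preload it for all sessions via
-- session_preload_libraries or shared_preload_libraries.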
Reviewed-by: "David G. Johnston" <david.g.johnston@gmail.com> Reviewed-by: Robert Treat <rob@xzilla.net> Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Discussion: https://postgr.es/m/aHVWKM8l8kLlZzgv%40nathan
Backpatch-through: 18