Thomas Munro [Thu, 17 Oct 2024 02:52:24 +0000 (15:52 +1300)]
Fix extreme skew detection in Parallel Hash Join.
After repartitioning the inner side of a hash join that would have
exceeded the allowed size, we check if all the tuples from a parent
partition moved to one child partition. That is evidence that it
contains duplicate keys and later attempts to repartition will also
fail, so we should give up trying to limit memory (for lack of a better
fallback strategy).
A thinko prevented the check from working correctly in partition 0 (the
one that is partially loaded into memory already). After
repartitioning, we should check for extreme skew if the *parent*
partition's space_exhausted flag was set, not the child partition's.
The consequence was repeated futile repartitioning until per-partition
data exceeded various limits including "ERROR: invalid DSA memory alloc
request size 1811939328", OS allocation failure, or temporary disk space
errors. (We could also do something about some of those symptoms, but
that's material for separate patches.)
This problem only became likely when PostgreSQL 16 introduced support
for Parallel Hash Right/Full Join, allowing NULL keys into the hash
table. Repartitioning always leaves NULL in partition 0, no matter how
many times you do it, because the hash value is all zero bits. That's
unlikely for other hashed values, but they might still have caused
wasted extra effort before giving up.
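For illustration, a hash full join whose inner side is dominated by NULL
keys is the shape that could previously repartition forever. A hypothetical
reproducer might look like this (table names and sizes are made up):

    -- the inner side is almost entirely NULL keys, which always hash to partition 0
    CREATE TABLE t_outer (a int);
    CREATE TABLE t_inner (b int);
    INSERT INTO t_outer SELECT i FROM generate_series(1, 100000) i;
    INSERT INTO t_inner SELECT NULL FROM generate_series(1, 1000000);
    SET work_mem = '64kB';  -- keep the hash table's memory budget small
    SELECT count(*) FROM t_outer FULL JOIN t_inner ON t_outer.a = t_inner.b;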
Back-patch to all supported releases.
Reported-by: Craig Milhiser <craig@milhiser.com>
Reviewed-by: Andrei Lepikhov <lepihov@gmail.com>
Discussion: https://postgr.es/m/CA%2BwnhO1OfgXbmXgC4fv_uu%3DOxcDQuHvfoQ4k0DFeB0Qqd-X-rQ%40mail.gmail.com
Michael Paquier [Wed, 16 Oct 2024 23:45:56 +0000 (08:45 +0900)]
Fix validation of COPY FORCE_NOT_NULL/FORCE_NULL for the all-column cases
This commit adds missing checks for COPY FORCE_NOT_NULL and FORCE_NULL
when applied to all columns via "*". These options now correctly
require CSV mode and are disallowed in COPY TO, making their behavior
consistent with FORCE_QUOTE.
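A sketch of the tightened rules (table and file names hypothetical):

    COPY tab FROM '/tmp/tab.txt' (FORCE_NOT_NULL *);             -- rejected: requires CSV mode
    COPY tab TO '/tmp/tab.csv' (FORMAT csv, FORCE_NULL *);       -- rejected: not allowed in COPY TO
    COPY tab FROM '/tmp/tab.csv' (FORMAT csv, FORCE_NOT_NULL *); -- accepted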
Some regression tests are added to verify the correct behavior for the
all-columns case, including FORCE_QUOTE, which was not tested.
Backpatch down to 17, where support for the all-column grammar with
FORCE_NOT_NULL and FORCE_NULL has been added.
Author: Joel Jacobson
Reviewed-by: Zhang Mingli
Discussion: https://postgr.es/m/65030d1d-5f90-4fa4-92eb-f5f50389858e@app.fastmail.com
Backpatch-through: 17
Michael Paquier [Wed, 16 Oct 2024 22:21:40 +0000 (07:21 +0900)]
Rewrite some regression queries for option checks with COPY
Some queries in copy2 exist to check various option combinations, and
used "stdin" or "stdout" in ways incompatible with the COPY TO or FROM
clauses they were combined with, which was confusing. This commit
rewrites these queries to use a compatible grammar.
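For instance, a check that an option is rejected in COPY TO can now be
spelled with a target compatible with TO (names hypothetical):

    COPY x TO stdout (FORMAT csv, FORCE_NOT_NULL (a));
    -- fails as intended: FORCE_NOT_NULL applies only to COPY FROM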
The coverage of the tests is unchanged. Like the original commit 451d1164b9d0, backpatch down to 16, where these tests were introduced. A
follow-up commit will rely on this area of the tests for a bug fix.
Author: Joel Jacobson
Reviewed-by: Zhang Mingli
Discussion: https://postgr.es/m/65030d1d-5f90-4fa4-92eb-f5f50389858e@app.fastmail.com
Backpatch-through: 16
Tom Lane [Wed, 16 Oct 2024 21:36:29 +0000 (17:36 -0400)]
Further refine _SPI_execute_plan's rule for atomic execution.
Commit 2dc1deaea turns out to have been still a brick shy of a load,
because CALL statements executing within a plpgsql exception block
could still pass the wrong snapshot to stable functions within the
CALL's argument list. That happened because standard_ProcessUtility
forces isAtomicContext to true if IsTransactionBlock is true, which
it always will be inside a subtransaction. Then ExecuteCallStmt
would think it does not need to push a new snapshot --- but
_SPI_execute_plan didn't do so either, since it thought it was in
nonatomic mode.
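A hedged sketch of the problematic shape (procedure name hypothetical;
the EXCEPTION clause puts the block's body in a subtransaction):

    CREATE PROCEDURE report_ts(ts timestamptz) LANGUAGE plpgsql
    AS $$ BEGIN RAISE NOTICE 'ts = %', ts; END $$;

    DO $$
    BEGIN
      -- the stable now() in the CALL's argument list could see a stale snapshot
      CALL report_ts(now());
    EXCEPTION WHEN OTHERS THEN
      RAISE;
    END $$;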
The best fix for this seems to be for _SPI_execute_plan to operate
in atomic execution mode if IsSubTransaction() is true, even when the
SPI context as a whole is non-atomic. This makes _SPI_execute_plan
have the same rules about when non-atomic execution is allowed as
_SPI_commit/_SPI_rollback have about when COMMIT/ROLLBACK are allowed,
which seems appropriately symmetric. (If anyone ever tries to allow
COMMIT/ROLLBACK inside a subtransaction, this would all need to be
rethought ... but I'm unconvinced that such a thing could be logically
consistent at all.)
For further consistency, also check IsSubTransaction() in
SPI_inside_nonatomic_context. That does not matter for its
one present-day caller StartTransaction, which can't be reached
inside a subtransaction. But if any other callers ever arise,
they'd presumably want this definition.
Per bug #18656 from Alexander Alehin. Back-patch to all
supported branches, like previous fixes in this area.
Masahiko Sawada [Wed, 16 Oct 2024 19:08:02 +0000 (12:08 -0700)]
Reduce memory block size for decoded tuple storage to 8kB.
Commit a4ccc1cef introduced the Generation Context and modified the
logical decoding process to use a Generation Context with a fixed
block size of 8MB for storing tuple data decoded during logical
decoding (i.e., rb->tup_context). Several reports have indicated that
the logical decoding process can be terminated due to
out-of-memory (OOM) situations caused by excessive memory usage in
rb->tup_context.
This issue can occur when decoding a workload involving several
concurrent transactions, including a long-running transaction that
modifies tuples. By design, the Generation Context does not free a
memory block until all chunks within that block are
released. Consequently, if tuples modified by the long-running
transaction are stored across multiple memory blocks, these blocks
remain allocated until the long-running transaction completes, leading
to substantial memory fragmentation. The memory usage during logical
decoding, tracked by rb->size, does not account for memory
fragmentation, resulting in potentially much higher memory consumption
than the value of the logical_decoding_work_mem parameter.
Various improvement strategies were discussed in the relevant
thread. This change reduces the block size of the Generation Context
used in rb->tup_context from 8MB to 8kB. This modification
significantly decreases the likelihood of substantial memory
fragmentation occurring and is relatively straightforward to
backport. Performance testing across multiple platforms has confirmed
that this change will not introduce any performance degradation that
would impact actual operation.
Backport to all supported branches.
Reported-by: Alex Richman, Michael Guissine, Avi Weinberg
Reviewed-by: Amit Kapila, Fujii Masao, David Rowley
Tested-by: Hayato Kuroda, Shlok Kyal
Discussion: https://postgr.es/m/CAD21AoBTY1LATZUmvSXEssvq07qDZufV4AF-OHh9VD2pC0VY2A%40mail.gmail.com
Backpatch-through: 12
Nathan Bossart [Tue, 15 Oct 2024 21:20:49 +0000 (16:20 -0500)]
Add type cast to foreach_internal's loop variable.
C++ requires explicitly casting void pointers to the appropriate
pointer type, which means the foreach_ptr macro cannot be used in
C++ code without this change.
Author: Jelte Fennema-Nio
Reviewed-by: Bruce Momjian
Discussion: https://postgr.es/m/CAGECzQSYG3QfHrc-rOk2KbnB9iJOd7Qu-Xii1s-GTA%3D3JFt49Q%40mail.gmail.com
Backpatch-through: 17
Michael Paquier [Mon, 14 Oct 2024 03:27:57 +0000 (12:27 +0900)]
psql: Fix \watch when using interval values less than 1ms
Attempting to use an interval of time less than 1ms would cause \watch
to hang. This was confusing, so let's change the logic so that an
interval lower than 1ms behaves the same as 0.
Comments are added to mention that the internals of do_watch() had
better rely on "sleep_ms", the interval value in milliseconds. While on
it, this commit adds a test to check the behavior of interval values
less than 1ms.
\watch hanging for interval values less than 1ms existed before 6f9ee74d45aa, which changed the code to support an interval value of
0.
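In psql, for example:

    -- previously hung; now loops without sleeping, like an interval of 0
    SELECT now() \watch 0.0005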
Reported-by: Heikki Linnakangas
Author: Andrey M. Borodin, Michael Paquier
Discussion: https://postgr.es/m/88445e0e-3156-4b9d-afae-9a1a7b1631f6@iki.fi
Backpatch-through: 16
Tom Lane [Sat, 12 Oct 2024 18:56:08 +0000 (14:56 -0400)]
Correctly identify which EC members are computable at a plan node.
find_computable_ec_member() had the wrong mental model of what
its primary caller prepare_sort_from_pathkeys() would do with
the selected EquivalenceClass member expression. We will not
compute the EC expression in a plan node atop the one returning
the passed-in targetlist; rather, the EC expression will be
computed as an additional column of that targetlist. So any
Var or quasi-Var used in the given tlist is also available to the
EC expression. In simple cases this makes no difference because
the given tlist is just a list of Vars or quasi-Vars --- but if
we are considering an appendrel member produced by flattening
a UNION ALL, the tlist may contain expressions, resulting in
failure to match and a "could not find pathkey item to sort"
error.
To fix, we can flatten both the tlist and the EC members with
pull_var_clause(), and then just check for subset-ness, so
that the code is actually shorter than before.
While this bug is quite old, the present patch only works back to
v13. We could possibly make it work in v12 by back-patching parts
of 375398244. On the whole though I don't like the risk/reward
ratio of that idea. v12's final release is next month, meaning
there would be no chance to correct matters if the patch causes a
regression. Since this failure has escaped notice for 14 years,
it's likely nobody will hit it in the field with v12.
Jeff Davis [Fri, 11 Oct 2024 23:58:22 +0000 (16:58 -0700)]
Fix missed case for builtin collation provider.
A missed check for the builtin collation provider could result in
falling through to call isalpha().
This does not appear to have practical consequences because it only
happens for characters in the ASCII range. Regardless, the builtin
provider should not be calling libc functions, so backpatch.
Tom Lane [Wed, 9 Oct 2024 21:07:53 +0000 (17:07 -0400)]
Avoid crash in estimate_array_length with null root pointer.
Commit 9391f7152 added a "PlannerInfo *root" parameter to
estimate_array_length, but failed to consider the possibility that
NULL would be passed for that, leading to a null pointer dereference.
We could rectify the particular case shown in the bug report by fixing
simplify_function/inline_function to pass through the root pointer.
However, as long as eval_const_expressions is documented to accept
NULL for root, similar hazards would remain. For now, let's just do
the narrow fix of hardening estimate_array_length to not crash.
Its behavior with NULL root will be the same as it was before 9391f7152, so this is not too awful.
Per report from Fredrik Widlert (via Paul Ramsey). Back-patch to v17
where 9391f7152 came in.
Commit 149ac7d4559 which re-implemented pgindent in Perl explicitly
imported the devnull function from File::Spec, but the module does
not export anything. In recent versions of Perl, calling a missing
import function causes a warning, which combined with warnings being
fatal causes pgindent to error out.
Backpatch to all supported versions.
Author: Erik Wienhold <ewie@ewie.name>
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/2372cd74-11b0-46f9-b28e-8f9627215d19@ewie.name
Backpatch-through: v12
The test was unstable in branches 14 and 15 as we were relying on the
number of changes in a table with a toast column to start streaming.
On branches >= 16, we have a GUC debug_logical_replication_streaming which
can stream each change, so the test was stable in those branches.
Change the test to use PREPARE TRANSACTION as that should make the result
consistent and test the code changed in 022564f60c.
Reported-by: Daniel Gustafsson as per buildfarm
Author: Hou Zhijie, Amit Kapila
Backpatch-through: 14
Discussion: https://postgr.es/m/8C2F86AA-981E-4803-B14D-E264C0255330@yesql.se
Nathan Bossart [Mon, 7 Oct 2024 21:49:20 +0000 (16:49 -0500)]
vacuumdb: Schema-qualify operator in catalog query's WHERE clause.
Commit 1ab67c9dfa, which modified this catalog query so that it
doesn't return temporary relations, forgot to schema-qualify the
operator. A comment earlier in the function implores us to fully
qualify everything in the query:
* Since we execute the constructed query with the default search_path
* (which could be unsafe), everything in this query MUST be fully
* qualified.
This commit fixes that. While at it, add a newline for consistency
with surrounding code.
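For reference, a fully qualified comparison of this sort is spelled
roughly as follows (the exact query text in vacuumdb differs):

    SELECT c.relname FROM pg_catalog.pg_class c
    WHERE c.relpersistence OPERATOR(pg_catalog.<>) 't';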
Nathan Bossart [Mon, 7 Oct 2024 18:51:03 +0000 (13:51 -0500)]
Fix Y2038 issues with MyStartTime.
Several places treat MyStartTime as a "long", which is only 32 bits
wide on some platforms. In reality, MyStartTime is a pg_time_t,
i.e., a signed 64-bit integer. This will lead to interesting bugs
on the aforementioned systems in 2038 when signed 32-bit integers
are no longer sufficient to store Unix time (e.g., "pg_ctl start"
hanging). To fix, ensure that MyStartTime is handled as a 64-bit
value everywhere. (Of course, users will need to ensure that
time_t is 64 bits wide on their system, too.)
Co-authored-by: Max Johnson
Discussion: https://postgr.es/m/CO1PR07MB905262E8AC270FAAACED66008D682%40CO1PR07MB9052.namprd07.prod.outlook.com
Backpatch-through: 12
Amit Kapila [Mon, 7 Oct 2024 09:45:05 +0000 (15:15 +0530)]
Fix fetching default toast value during decoding of in-progress transactions.
During logical decoding of in-progress transactions, we perform the toast
table scan while fetching the default toast value for an attribute. We
forgot to initialize the flag during this scan to indicate that the system
table scan is in progress. We need this flag to ensure that during logical
decoding we never directly access the tableam or heap APIs because we check
for concurrent aborts only in systable_* APIs.
Reported-by: Alexander Lakhin
Author: Takeshi Ideriha, Hou Zhijie
Reviewed-by: Amit Kapila, Hou Zhijie
Backpatch-through: 14
Discussion: https://postgr.es/m/18641-6687273b7f15269d@postgresql.org
Tom Lane [Sun, 6 Oct 2024 20:03:48 +0000 (16:03 -0400)]
Ignore not-yet-defined Portals in pg_cursors view.
pg_cursor() supposed that any Portal it finds in the hash table must
have sourceText set up, but there's an edge case where that is not so.
A newly-created Portal has sourceText = NULL, and that doesn't change
until PortalDefineQuery is called. In SPI_cursor_open_internal,
we perform GetCachedPlan between CreatePortal and PortalDefineQuery,
and it's possible for user-defined code to execute during that
planning and cause a fetch from the pg_cursors view, resulting in a
null-pointer-dereference crash. (It looks like the same could happen
in exec_bind_message, but I've not tried to provoke a failure there.)
I considered trying to fix this by setting sourceText sooner, but
there may be instances of this same calling pattern in extensions,
and we couldn't be sure they'd get the memo promptly. It seems
better to redefine pg_cursor as not showing Portals that have
not yet had PortalDefineQuery called on them, which we can do by
just skipping them if sourceText is still NULL.
(Before a1c692358, pg_cursor would instead return a row with NULL
in the statement column. We could revert to that behavior but it
doesn't really seem like a better definition, especially since our
documentation doesn't suggest that the column could be NULL.)
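For example (a sketch; only fully-defined portals are listed):

    BEGIN;
    DECLARE c CURSOR FOR SELECT 1;
    SELECT name, statement FROM pg_cursors;  -- shows c; half-created portals are now skipped
    COMMIT;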
Per report from PetSerAl. Back-patch to all supported branches.
Tom Lane [Sat, 5 Oct 2024 18:46:44 +0000 (14:46 -0400)]
Use generateClonedIndexStmt to propagate CREATE INDEX to partitions.
When instantiating an existing partitioned index for a new child
partition, we use generateClonedIndexStmt to build a suitable
IndexStmt to pass to DefineIndex. However, when DefineIndex needs
to recurse to instantiate a newly created partitioned index on an
existing child partition, it was doing copyObject on the given
IndexStmt and then applying a bunch of ad-hoc fixups. This has
a number of problems, primarily that it implies fresh lookups of
referenced objects such as opclasses and collations. Since commit 2af07e2f7 caused DefineIndex to restrict search_path internally, those
lookups could fail or deliver different results than the original one.
We can avoid those problems and save a few dozen lines of code by
using generateClonedIndexStmt in this code path too.
Another thing this fixes is incorrect propagation of parent-index
comments to child indexes (because the copyObject approach copies
the idxcomment field while generateClonedIndexStmt doesn't). I had
noticed this in connection with commit c01eb619a, but not run the
problem to ground.
I'm tempted to back-patch this further than v17, but the only thing
it's known to fix in older branches is the comment issue, which is
pretty minor and doesn't seem worth the risk of introducing new
issues in stable branches. (If anyone does care about that,
clearing idxcomment in the copied IndexStmt would be a safer fix.)
Per bug #18637 from usamoi. Back-patch to v17 where the search_path
change came in.
Thomas Munro [Sat, 5 Oct 2024 00:48:33 +0000 (13:48 +1300)]
Reject non-ASCII locale names.
Commit bf03cfd1 started scanning all available BCP 47 locale names on
Windows. This caused an abort/crash in the Windows runtime library if
the default locale name contained non-ASCII characters, because of our
use of the setlocale() save/restore pattern with "char" strings. After
switching to another locale with a different encoding, the saved name
could no longer be understood, and setlocale() would abort.
"Turkish_Türkiye.1254" is the example from recent reports, but there are
other examples of countries and languages with non-ASCII characters in
their names, and they appear in Windows' (old style) locale names.
To defend against this:
1. In initdb, reject non-ASCII locale names given explicitly on the
command line, or returned by the operating system environment with
setlocale(..., ""), or "canonicalized" by the operating system when we
set it.
2. In initdb only, perform the save-and-restore with Windows'
non-standard wchar_t variant of setlocale(), so that it is not subject
to round trip failures stemming from char string encoding confusion.
3. In the backend, we don't have to worry about the save-and-restore
problem because we have already vetted the defaults, so we just have to
make sure that CREATE DATABASE also rejects non-ASCII names in any new
databases. SET lc_XXX doesn't suffer from the problem, but the ban
applies to it too because it uses check_locale(). CREATE COLLATION
doesn't suffer from the problem either, but it doesn't use
check_locale() so it is not included in the new ban for now, to minimize
the change.
Anyone who encounters the new error message should either create a new
duplicated locale with an ASCII-only name using Windows Locale Builder,
or consider using BCP 47 names like "tr-TR". Users already couldn't
initialize a cluster with "Turkish_Türkiye.1254" on PostgreSQL 16+, but
the new failure mode is an error message that explains why, instead of a
crash.
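A sketch of the user-visible change on Windows (database name hypothetical):

    CREATE DATABASE turkdb TEMPLATE template0 LOCALE 'Turkish_Türkiye.1254'; -- now fails with an explanatory error
    CREATE DATABASE turkdb TEMPLATE template0 LOCALE 'tr-TR';                -- BCP 47 name, accepted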
Back-patch to 16, where bf03cfd1 landed. Older versions are affected
in theory too, but only 16 and later are causing crash reports.
Reviewed-by: Andrew Dunstan <andrew@dunslane.net> (the idea, not the patch)
Reported-by: Haifang Wang (Centific Technologies Inc) <v-haiwang@microsoft.com>
Discussion: https://postgr.es/m/PH8PR21MB3902F334A3174C54058F792CE5182%40PH8PR21MB3902.namprd21.prod.outlook.com
Dean Rasheed [Thu, 3 Oct 2024 12:45:37 +0000 (13:45 +0100)]
Fix wrong varnullingrels error for MERGE WHEN NOT MATCHED BY SOURCE.
If a MERGE command contains WHEN NOT MATCHED BY SOURCE actions, the
source relation appears on the outer side of the join. Thus, any Vars
referring to the source in the merge join condition, actions, and
RETURNING list should be marked as nullable by the join, since they
are used in the ModifyTable node above the join. Note that this only
applies to the copy of join condition used in the executor to
distinguish MATCHED from NOT MATCHED BY SOURCE cases. Vars in the
original join condition, inside the join node itself, should not be
marked.
Failure to correctly mark these Vars led to a "wrong varnullingrels"
error in the final stage of query planning, in some circumstances. We
happened to get away without this in all previous tests, since they
all involved a ModifyTable node directly on top of the join node, so
that the top plan targetlist coincided with the output of the join,
and the varnullingrels check was more lax. However, if another plan
node, such as a one-time filter Result node, gets inserted between the
ModifyTable node and the join node, then a stricter check is applied,
which fails.
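The command shape involved looks roughly like this (tables hypothetical):

    MERGE INTO tgt USING src ON tgt.id = src.id
    WHEN MATCHED THEN UPDATE SET val = src.val
    WHEN NOT MATCHED BY SOURCE THEN DELETE;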
Per bug #18634 from Alexander Lakhin. Thanks to Tom Lane and Richard
Guo for review and analysis.
Back-patch to v17, where WHEN NOT MATCHED BY SOURCE support was added
to MERGE.
Dean Rasheed [Thu, 3 Oct 2024 11:50:38 +0000 (12:50 +0100)]
Fix incorrect non-strict join recheck in MERGE WHEN NOT MATCHED BY SOURCE.
If a MERGE command contains WHEN NOT MATCHED BY SOURCE actions, the
merge join condition is used by the executor to distinguish MATCHED
from NOT MATCHED BY SOURCE cases. However, this qual is executed using
the output from the join subplan node, which nulls the output from the
source relation in the not matched case, and so the result may be
incorrect if the join condition is "non-strict" -- for example,
something like "src.col IS NOT DISTINCT FROM tgt.col".
Fix this by enhancing the join recheck condition with an additional
"src IS NOT NULL" check, so that it does the right thing when
evaluated using the output from the join subplan.
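For example, with a non-strict join condition like the one mentioned
above (tables hypothetical):

    MERGE INTO tgt USING src ON src.col IS NOT DISTINCT FROM tgt.col
    WHEN MATCHED THEN UPDATE SET val = src.val
    WHEN NOT MATCHED BY SOURCE THEN DELETE;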
Noted by Tom Lane while investigating bug #18634 from Alexander
Lakhin.
Back-patch to v17, where WHEN NOT MATCHED BY SOURCE support was added
to MERGE.
Tom Lane [Wed, 2 Oct 2024 21:30:36 +0000 (17:30 -0400)]
Parse libpq's "keepalives" option more like other integer options.
Use pqParseIntParam (nee parse_int_param) instead of using strtol
directly. This allows trailing whitespace, which the previous coding
didn't, and makes the spelling of the error message consistent with
other similar cases.
This seems to be an oversight in commit e7a221797, which introduced
parse_int_param. That fixed places that were using atoi(), but missed
this place which was randomly using strtol() instead.
Ordinarily I'd consider this minor cleanup not worth back-patching.
However, it seems that ecpg assumes it can add trailing whitespace
to URL parameters, so that use of the keepalives option fails in
that context. Perhaps that's worth improving as a separate matter.
In the meantime, back-patch this to all supported branches.
Michael Paquier [Wed, 2 Oct 2024 02:12:45 +0000 (11:12 +0900)]
doc: Clarify name of files generated by pg_waldump --save-fullpage
The fork name is always separated with the block number by an underscore
in the names of the files generated, but the docs stuck them together
without a separator, which was confusing.
Author: Christoph Berg
Discussion: https://postgr.es/m/ZvxtSLiix9eceMRM@msg.df7cb.de
Backpatch-through: 16
Michael Paquier [Tue, 1 Oct 2024 06:44:07 +0000 (15:44 +0900)]
Fix race condition in COMMIT PREPARED causing orphaned 2PC files
COMMIT PREPARED removes on-disk 2PC files near its end, but the state
tracking whether a file is on disk was read from shared memory without
holding the two-phase state lock.
Because of that, there was a small window where a second backend doing a
PREPARE TRANSACTION could reuse the GlobalTransaction put back into the
2PC free list by the COMMIT PREPARED, overwriting the "ondisk" flag read
afterwards by the COMMIT PREPARED to decide if its on-disk two-phase
state file should be removed, preventing the file deletion.
This commit fixes this issue so that the "ondisk" flag in the
GlobalTransaction is read while holding the two-phase state lock, not
from shared memory after its entry has been added to the free list.
Orphaned two-phase state files flushed to disk after a checkpoint are
discarded at the beginning of recovery. However, a truncation of
pg_xact/ would make the startup process issue a FATAL when it cannot
read the SLRU page holding the state of the transaction whose 2PC file
was orphaned, which is a necessary step to decide if the 2PC file should
be removed or not. Manually removing the file would be necessary in
this case.
Issue introduced by effe7d9552dd, so backpatch all the way down.
Tom Lane [Mon, 30 Sep 2024 16:06:54 +0000 (12:06 -0400)]
Remove incorrect entries in pg_walsummary's getopt_long call.
For some reason this listed "-f" and "-w" as valid switches, though
the code doesn't implement any such thing nor do the docs mention
them. The effect of this was that if you tried to use one of these
switches, you'd get an unhelpful error message.
reindexdb: Skip reindexing temporary tables and indexes.
Reindexing temp tables or indexes of other sessions is not allowed.
However, reindexdb in parallel mode previously listed them as
the objects to process, leading to failures.
This commit ensures reindexdb in parallel mode skips temporary tables
and indexes by adding a condition based on the relpersistence column
in pg_class to the object listing queries, preventing these issues.
Note that this commit does not affect reindexdb when temporary tables
or indexes are explicitly specified using the -t or -i options;
reindexdb in that case still does not skip them and can cause an error.
Back-patch to v13 where parallel mode was introduced in reindexdb.
Author: Fujii Masao
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/5f37ee56-14fb-44fe-9150-9eb97e10538b@oss.nttdata.com
Remove NULL dereference from RenameRelationInternal().
Defect in last week's commit aac2c9b4fde889d13f859c233c2523345e72d32b,
per Coverity. Reaching this would need catalog corruption. Back-patch
to v12, like that commit.
doc: Note that CREATE MATERIALIZED VIEW restricts search_path.
Since v17, CREATE MATERIALIZED VIEW has set search_path to
"pg_catalog, pg_temp" while running the query. The docs for the
other commands that restrict search_path mention it, but the page
for CREATE MATERIALIZED VIEW does not. Fix that.
Author: Yugo Nagata
Reviewed-by: Jeff Davis
Discussion: https://postgr.es/m/20240805160502.d2a4975802a832b1e04afb80%40sraoss.co.jp
Backpatch-through: 17
Michael Paquier [Fri, 27 Sep 2024 00:40:14 +0000 (09:40 +0900)]
Fix incorrect memory access in VACUUM FULL with invalid toast indexes
An invalid toast index is skipped in reindex_relation(). These would be
remnants of a failed REINDEX CONCURRENTLY and they should never be
rebuilt, as there can only be one valid toast index at a time.
REINDEX_REL_SUPPRESS_INDEX_USE, used by CLUSTER and VACUUM FULL, needs
to maintain a list of the indexes being processed. The list of indexes
is retrieved from the relation cache, and includes invalid indexes. The
code missed that invalid toast indexes are ignored in
reindex_relation() (as rebuilding one leads to a hard failure in
reindex_index()), so they were left in the reindex pending list, making
the list inconsistent when rechecked. The incorrect memory access was
happening when scanning pg_class for the refresh of
pg_database.datfrozenxid.
This issue exists since REINDEX CONCURRENTLY exists, where invalid toast
indexes can exist, so backpatch all the way down.
Reported-by: Alexander Lakhin
Author: Tender Wang
Discussion: https://postgr.es/m/18630-9aed99c38830657d@postgresql.org
Backpatch-through: 12
Michael Paquier [Wed, 25 Sep 2024 05:44:50 +0000 (14:44 +0900)]
vacuumdb: Skip temporary tables in query to build list of relations
Running vacuumdb with a non-superuser while another user has created a
temporary table would lead to a mid-flight permission failure,
interrupting the operation. vacuum_rel() skips temporary relations of
other backends, and it makes no sense for vacuumdb to know about these
relations, so let's switch it to ignore temporary relations entirely.
Adding a qual in the query based on relpersistence simplifies the
generation of its WHERE clause in vacuum_one_database(), allowing the
removal of "has_where".
For inplace update durability, make heap_update() callers wait.
The previous commit fixed some ways of losing an inplace update. It
remained possible to lose one when a backend working toward a
heap_update() copied a tuple into memory just before inplace update of
that tuple. In catalogs eligible for inplace update, use LOCKTAG_TUPLE
to govern admission to the steps of copying an old tuple, modifying it,
and issuing heap_update(). This includes MERGE commands. To avoid
changing most of the pg_class DDL, don't require LOCKTAG_TUPLE when
holding a relation lock sufficient to exclude inplace updaters.
Back-patch to v12 (all supported versions). In v13 and v12, "UPDATE
pg_class" or "UPDATE pg_database" can still lose an inplace update. The
v14+ UPDATE fix needs commit 86dc90056dfdbd9d1b891718d2e5614e3e432f35,
and it wasn't worth reimplementing that fix without such infrastructure.
Reviewed by Nitin Motiani and (in earlier versions) Heikki Linnakangas.
Fix data loss at inplace update after heap_update().
As previously-added tests demonstrated, heap_inplace_update() could
instead update an unrelated tuple of the same catalog. It could lose
the update. Losing relhasindex=t was a source of index corruption.
Inplace-updating commands like VACUUM will now wait for heap_update()
commands like GRANT TABLE and GRANT DATABASE. That isn't ideal, but a
long-running GRANT already hurts VACUUM progress more just by keeping an
XID running. The VACUUM will behave like a DELETE or UPDATE waiting for
the uncommitted change.
For implementation details, start at the systable_inplace_update_begin()
header comment and README.tuplock. Back-patch to v12 (all supported
versions). In back branches, retain a deprecated heap_inplace_update(),
for extensions.
Reported by Smolkin Grigory. Reviewed by Nitin Motiani, (in earlier
versions) Heikki Linnakangas, and (in earlier versions) Alexander
Lakhin.
Warn if LOCKTAG_TUPLE is held at commit, under debug_assertions.
The current use always releases this locktag. A planned use will
continue that intent. It will involve more areas of code, making unlock
omissions easier. Warn under debug_assertions, like we do for various
resource leaks. Back-patch to v12 (all supported versions), the plan
for the commit of the new use.
Tom Lane [Tue, 24 Sep 2024 21:21:38 +0000 (17:21 -0400)]
Fix psql describe commands' handling of ACL columns for old servers.
Commit d1379ebf4 carelessly broke printACLColumn for pre-9.4 servers,
by using the cardinality() function which we introduced in 9.4.
We expect psql's describe-related commands to work back to 9.2, so
this is bad. Use the longstanding array_length() function instead.
Per report from Christoph Berg. Back-patch to v17.
Peter Geoghegan [Fri, 20 Sep 2024 18:06:30 +0000 (14:06 -0400)]
Fix nbtree pgstats accounting with parallel scans.
Commit 5bf748b8, which enhanced nbtree ScalarArrayOp execution, made
parallel index scans work with the new design for arrays via explicit
scheduling of primitive index scans. Under this scheme a parallel index
scan with array keys will perform the same number of index descents as
an equivalent serial index scan (barring corner cases where an
individual parallel worker discovers that it can advance the scan's
array keys without anybody needing to perform another descent of the
index to get to the relevant page on the leaf level).
Despite all this, the pgstats accounting wasn't updated; it continued to
increment the total number of index scans for the rel once per _bt_first
call, no matter the details. As a result, the number of (primitive)
index scans could be over-counted during parallel scans.
To fix, delay incrementing the count of index scans until after we've
established that another descent of the index (using either _bt_search
or _bt_endpoint) is required. That way pg_stat_user_tables.idx_scan
always advances in the same way, regardless of whether or not the scan
makes use of parallelism.
Oversight in commit 5bf748b8, which enhanced nbtree ScalarArrayOp
execution.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Tomas Vondra <tomas@vondra.me>
Discussion: https://postgr.es/m/CAH2-Wz=E7XrkvscBN0U6V81NK3Q-dQOmivvbEsjG-zwEfDdFpg@mail.gmail.com
Discussion: https://postgr.es/m/CAH2-WzkRqvaqR2CTNqTZP0z6FuL4-3ED6eQB0yx38XBNj1v-4Q@mail.gmail.com
Backpatch: 17-, where nbtree SAOP execution was enhanced.
Michael Paquier [Thu, 19 Sep 2024 07:25:07 +0000 (16:25 +0900)]
psql: Fix memory leak with repeated calls of \bind
Calling \bind repeatedly would cause the memory allocated for the list
of bind parameters to be leaked after each call, as the list is reset
when beginning a single call.
This issue is fixed by making the cleanup of the bind parameter list
more aggressive, refactoring it into a single routine called after
processing a query and before running an individual \bind.
HEAD required more surgery and has been fixed by 87eeadaea143. Issue
introduced by 5b66de3433e2.
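The leak could be observed with repeated invocations such as (a sketch):

    SELECT $1::int + $2::int \bind 1 2 \g
    SELECT $1::int + $2::int \bind 3 4 \g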
docs: Improve the description of num_timed column in pg_stat_checkpointer.
The previous documentation stated that num_timed reflects the number of
scheduled checkpoints performed. However, checkpoints may be skipped
if the server has been idle, and num_timed counts both skipped and completed
checkpoints. This commit clarifies the description to make it clear that
the counter includes both skipped and completed checkpoints.
Back-patch to v17 where pg_stat_checkpointer was added.
Author: Fujii Masao
Reviewed-by: Alexander Korotkov
Discussion: https://postgr.es/m/9ea77f40-818d-4841-9dee-158ac8f6e690@oss.nttdata.com
Don't enter parallel mode when holding interrupts.
Doing so caused the leader to hang in wait_event=ParallelFinish, which
required an immediate shutdown to resolve. Back-patch to v12 (all
supported versions).
Michael Paquier [Wed, 18 Sep 2024 00:59:14 +0000 (09:59 +0900)]
Add missing query ID reporting in extended query protocol
This commit adds query ID reports for two code paths when processing
extended query protocol messages:
- When receiving a bind message, setting it to the first Query retrieved
from the cached plan.
- When receiving an execute message, setting it to the first PlannedStmt
stored in a portal.
An advantage of this method is that it is able to cover all the types
of portals handled in the extended query protocol, particularly these
two cases where the report done in ExecutorStart() is not enough (nor
would an addition in ExecutorRun() be, for the second point):
- Multiple execute messages, with multiple ExecutorRun().
- Portal with execute/fetch messages, like a query with a RETURNING
clause and a fetch size that stores the tuples in a first execute
message going through ExecutorStart() and ExecutorRun(), followed by one
or more execute messages doing only fetches from the tuplestore created
in the first message. This corresponds to the case where
execute_is_fetch is set, for example.
Note that the query ID reporting done in ExecutorStart() is still
necessary, as an EXECUTE requires it. Query ID reporting is optimistic
and more calls to pgstat_report_query_id() don't matter as the first
report takes priority except if the report is forced. The comment in
ExecutorStart() is adjusted to reflect better the reality with the
extended query protocol.
The test added in pg_stat_statements is a courtesy of Robert Haas. This
uses psql's \bind metacommand, hence this part is backpatched down to
v16.
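A sketch of how the report becomes visible (assuming compute_query_id
is enabled):

    SET compute_query_id = on;
    SELECT pg_sleep($1) \bind 10 \g
    -- meanwhile, from another session:
    SELECT query_id FROM pg_stat_activity WHERE backend_type = 'client backend';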
Reported-by: Kaido Vaikla, Erik Wienhold
Author: Sami Imseih Reviewed-by: Jian He, Andrei Lepikhov, Michael Paquier
Discussion: https://postgr.es/m/CA+427g8DiW3aZ6pOpVgkPbqK97ouBdf18VLiHFesea2jUk3XoQ@mail.gmail.com
Discussion: https://postgr.es/m/CA+TgmoZxtnf_jZ=VqBSyaU8hfUkkwoJCJ6ufy4LGpXaunKrjrg@mail.gmail.com
Discussion: https://postgr.es/m/1391613709.939460.1684777418070@office.mailbox.org
Backpatch-through: 14
Thomas Munro [Tue, 17 Sep 2024 21:20:59 +0000 (09:20 +1200)]
Allow ReadStream to be consumed as raw block numbers.
Commits 041b9680 and 6377e12a changed the interface of
scan_analyze_next_block() to take a ReadStream instead of a BlockNumber
and a BufferAccessStrategy, and to return a value to indicate when the
stream has run out of blocks.
This caused integration problems for at least one known extension that
uses specially encoded BlockNumber values that map to different
underlying storage, because acquire_sample_rows() sets up the stream so
that read_stream_next_buffer() reads blocks from the main fork of the
relation's SMgrRelation.
Provide read_stream_next_block(), as a way for such an extension to
access the stream of raw BlockNumbers directly and forward them to its
own ReadBuffer() calls after decoding, as it could in earlier releases.
The new function returns the BlockNumber and BufferAccessStrategy that
were previously passed directly to scan_analyze_next_block().
Alternatively, an extension could wrap the stream of BlockNumbers in
another ReadStream with a callback that performs any decoding required
to arrive at real storage manager BlockNumber values, so that it could
benefit from the I/O combining and concurrency provided by
read_stream.c.
Another class of table access method that does nothing in
scan_analyze_next_block() because it is not block-oriented could use
this function to control the number of block sampling loops. It could
match the previous behavior with "return read_stream_next_block(stream,
&bas) != InvalidBlockNumber".
Ongoing work is expected to provide better ANALYZE support for table
access methods that don't behave like heapam with respect to storage
blocks, but that will be for future releases.
Tom Lane [Tue, 17 Sep 2024 19:53:26 +0000 (15:53 -0400)]
Repair pg_upgrade for identity sequences with non-default persistence.
Since we introduced unlogged sequences in v15, identity sequences
have defaulted to having the same persistence as their owning table.
However, it is possible to change that with ALTER SEQUENCE, and
pg_dump tries to preserve the logged-ness of sequences when it doesn't
match (as indeed it wouldn't for an unlogged table from before v15).
The fly in the ointment is that ALTER SEQUENCE SET [UN]LOGGED fails
in binary-upgrade mode, because it needs to assign a new relfilenode
which we cannot permit in that mode. Thus, trying to pg_upgrade a
database containing a mismatching identity sequence failed.
To fix, add syntax to ADD/ALTER COLUMN GENERATED AS IDENTITY to allow
the sequence's persistence to be set correctly at creation, and use
that instead of ALTER SEQUENCE SET [UN]LOGGED in pg_dump. (I tried to
make SET [UN]LOGGED work without any pg_dump modifications, but that
seems too fragile to be a desirable answer. This way should be
markedly faster anyhow.)
In passing, document the previously-undocumented SEQUENCE NAME option
that pg_dump also relies on for identity sequences; I see no value
in trying to pretend it doesn't exist.
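A hedged sketch of what the new syntax allows (table and sequence names
hypothetical):

    ALTER TABLE unlogged_tab
      ALTER COLUMN id ADD GENERATED ALWAYS AS IDENTITY
        (SEQUENCE NAME tab_id_seq UNLOGGED);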
Per bug #18618 from Anthony Hsu.
Back-patch to v15 where we invented this stuff.
Peter Geoghegan [Tue, 17 Sep 2024 15:10:33 +0000 (11:10 -0400)]
Avoid parallel nbtree index scan hangs with SAOPs.
Commit 5bf748b8, which enhanced nbtree ScalarArrayOp execution, made
parallel index scans work with the new design for arrays via explicit
scheduling of primitive index scans. A backend that successfully
scheduled the scan's next primitive index scan saved its backend local
array keys in shared memory. Any backend could pick up the scheduled
primitive scan within _bt_first. This scheme decouples scheduling a
primitive scan from starting the scan (by performing another descent of
the index via a _bt_search call from _bt_first) to make things robust.
The scheme had a deadlock hazard, at least when the leader process
participated in the scan. _bt_parallel_seize had a code path that made
backends that were not in an immediate position to start a scheduled
primitive index scan wait for some other backend to do so instead.
Under the right circumstances, the leader process could wait here
forever: the leader would wait for any other backend to start the
primitive scan, while every worker was busy waiting on the leader to
consume tuples from the scan's tuple queue.
To fix, don't wait for a scheduled primitive index scan to be started by
some other eligible backend from within _bt_parallel_seize (when the
calling backend isn't in a position to do so itself). Return false
instead, while recording that the scan has a scheduled primitive index
scan in backend local state. This leaves the backend in the same state
as the existing case where a backend schedules (or tries to schedule)
another primitive index scan from within _bt_advance_array_keys, before
calling _bt_parallel_seize. _bt_parallel_seize already handles that
case by returning false without waiting, and without unsetting the
backend local state. Leaving the backend in this state enables it to
start a previously scheduled primitive index scan once it gets back to
_bt_first.
Oversight in commit 5bf748b8, which enhanced nbtree ScalarArrayOp
execution.
Matthias van de Meent, with tweaks by me.
Author: Matthias van de Meent <boekewurm+postgres@gmail.com>
Reported-By: Tomas Vondra <tomas@vondra.me>
Reviewed-By: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-WzmMGaPa32u9x_FvEbPTUkP5e95i=QxR8054nvCRydP-sw@mail.gmail.com
Backpatch: 17-, where nbtree SAOP execution was enhanced.
Tom Lane [Sun, 15 Sep 2024 17:33:09 +0000 (13:33 -0400)]
Replace usages of xmlXPathCompile() with xmlXPathCtxtCompile().
In existing releases of libxml2, xmlXPathCompile can be driven
to stack overflow because it fails to protect itself against
too-deeply-nested input. While there is an upstream fix as of
yesterday, it will take years for that to propagate into all
shipping versions. In the meantime, we can protect our own
usages basically for free by calling xmlXPathCtxtCompile instead.
(The actual bug is that libxml2 keeps its nesting counter in the
xmlXPathContext, and its parsing code was willing to just skip
counting nesting levels if it didn't have a context. So if we supply
a context, all is well. It seems odd actually that it works at all
to not supply a context, because this means that XPath parsing does
not have access to XML namespace info. Apparently libxml2 never
checks namespaces until runtime? Anyway, this seems like good
future-proofing even if its only immediate effect is to dodge a bug.)
Sadly, this hack only offers protection with libxml2 2.9.11 and newer.
Before that there are multiple similar problems, so if you are
processing untrusted XML it behooves you to get a newer version.
But we have some pretty old libxml2 in the buildfarm, so it seems
impractical to add a regression test to verify this fix.
Per bug #18617 from Jingzhou Fu. Back-patch to all supported
versions.
Tom Lane [Sat, 14 Sep 2024 21:55:02 +0000 (17:55 -0400)]
Run regression tests with timezone America/Los_Angeles.
Historically we've used timezone "PST8PDT", but the recent release
2024b of tzdb changes the definition of that zone in a way that
breaks many test cases concerned with dates before 1970. Although
we've not yet adopted 2024b into our own tree, this is already
problematic for people using --with-system-tzdata if their platform
has already adopted 2024b. To work with both older and newer
versions of tzdb, switch to using "America/Los_Angeles", accepting
the ensuing changes in regression test results.
Andrew Dunstan [Sat, 14 Sep 2024 14:26:25 +0000 (10:26 -0400)]
Improve meson's detection of perl build flags
The current method of detecting perl build flags breaks if the path to
perl contains a space. This change makes two improvements. First,
instead of getting a list of ldflags and ccdlflags and then trying to
filter those out of the reported ldopts, we tell perl to suppress
reporting those in the first instance. Second, it tells perl to parse
those and output them, one per line. Thus any space on the option in a
file name, for example, is preserved.
Andrew Dunstan [Sat, 14 Sep 2024 12:37:08 +0000 (08:37 -0400)]
Only define NO_THREAD_SAFE_LOCALE for MSVC plperl when required
Latest versions of Strawberry Perl define USE_THREAD_SAFE_LOCALE, and we
therefore get a handshake error when building against such instances.
The solution is to perform a test to see if USE_THREAD_SAFE_LOCALE is
defined and only define NO_THREAD_SAFE_LOCALE if it isn't.
Backpatch the meson.build fix back to release 16 and apply the same
logic to Mkvcbuild.pm in releases 12 through 16.
Original report of the issue from Muralikrishna Bandaru.
Tom Lane [Fri, 13 Sep 2024 20:16:47 +0000 (16:16 -0400)]
Allow _h_indexbuild() to be interrupted.
When we are building a hash index that is large enough to need
pre-sorting (larger than either maintenance_work_mem or NBuffers),
the initial sorting phase is interruptible, but the insertion
phase wasn't. Add the missing CHECK_FOR_INTERRUPTS().
Per bug #18616 from Alexander Lakhin. Back-patch to all
supported branches.
I managed to break this test in two different ways in commit 05036a3155.
First, the output of the new call to tuple_data_split() on the test
sequence is dependent on endianness. This is fixed by setting a
special start value for the test sequence that produces the same
output regardless of the endianness of the machine.
Second, on versions older than v15, the new test case fails under
"force_parallel_mode = regress" with the following error:
ERROR: cannot access temporary tables during a parallel operation
This is because pageinspect's disk-accessing functions are
incorrectly marked PARALLEL SAFE on versions older than v15 (see
commit aeaaf520f4 for details). This one is fixed by changing the
test sequence to be permanent. The only reason it was previously
marked temporary was to avoid needing a DROP SEQUENCE command at
the end of the test. Unlike some other tests in this file, the use
of a permanent sequence here shouldn't result in any test
instability like what was fixed by commit e2933a6e11.
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/ZuOKOut5hhDlf_bP%40nathan
Backpatch-through: 12
Amit Langote [Fri, 13 Sep 2024 07:08:13 +0000 (16:08 +0900)]
SQL/JSON: Update example in JSON_QUERY() documentation
Commit 2c27346ed684 fixed the behavior of JSON_QUERY() when WITH
CONDITIONAL WRAPPER is used, but the documentation example wasn't
updated to reflect this change. This commit updates the example to
show the correct result.
Reintroduce support for sequences in pgstattuple and pageinspect.
Commit 4b82664156 restricted a number of functions provided by
contrib modules to only relations that use the "heap" table access
method. Sequences always use this table access method, but they do
not advertise as such in the pg_class system catalog, so the
aforementioned commit also (presumably unintentionally) removed
support for sequences from some of these functions. This commit
reintroduces said support for sequences to these functions and adds
a couple of relevant tests.
Co-authored-by: Ayush Vatsa
Reviewed-by: Robert Haas, Michael Paquier, Matthias van de Meent
Discussion: https://postgr.es/m/CACX%2BKaP3i%2Bi9tdPLjF5JCHVv93xobEdcd_eB%2B638VDvZ3i%3DcQA%40mail.gmail.com
Backpatch-through: 12
Tom Lane [Thu, 12 Sep 2024 18:30:29 +0000 (14:30 -0400)]
Make jsonpath .string() be immutable for datetimes.
Discussion of commit ed055d249 revealed that we don't actually
want jsonpath's .string() method to depend on DateStyle, nor
TimeZone either, because the non-"_tz" jsonpath functions are
supposed to be immutable. Potentially we could allow a TimeZone
dependency in the "_tz" variants, but it seems better to just
uniformly define this method as returning the same string that
jsonb text output would do. That's easier to implement too,
saving a couple dozen lines.
Patch by me, per complaint from Peter Eisentraut. Back-patch
to v17 where this feature came in (in 66ea94e8e). Also
back-patch ed055d249 to provide test cases.
David Rowley [Thu, 12 Sep 2024 10:37:27 +0000 (22:37 +1200)]
Doc: alphabetize aggregate function table
A few recent JSON aggregates have been added without much consideration
to the existing order. Put these back in alphabetical order (with the
exception of the JSONB variant of each JSON aggregate).
Author: Wolfgang Walther <walther@technowledgy.de>
Reviewed-by: Marlene Reiterer <marlene.reiterer.03@gmail.com>
Discussion: https://postgr.es/m/6a7b910c-3feb-4006-b817-9b4759cb6bb6%40technowledgy.de
Backpatch-through: 16, where these aggregates were added
Amit Langote [Thu, 12 Sep 2024 00:36:31 +0000 (09:36 +0900)]
SQL/JSON: Fix JSON_QUERY(... WITH CONDITIONAL WRAPPER)
Currently, when WITH CONDITIONAL WRAPPER is specified, array wrappers
are applied even to a single SQL/JSON item if it is a scalar JSON
value, but this behavior does not comply with the standard.
To fix, apply wrappers only when there are multiple SQL/JSON items
in the result.
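For example:

    SELECT JSON_QUERY('{"a": 1}'::jsonb, '$.a' WITH CONDITIONAL WRAPPER);  -- now 1, no longer [1]
    SELECT JSON_QUERY('[1, 2]'::jsonb, '$[*]' WITH CONDITIONAL WRAPPER);   -- [1, 2], still wrapped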
Reported-by: Peter Eisentraut <peter@eisentraut.org>
Author: Peter Eisentraut <peter@eisentraut.org>
Author: Amit Langote <amitlangote09@gmail.com>
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Discussion: https://postgr.es/m/8022e067-818b-45d3-8fab-6e0d94d03626%40eisentraut.org
Backpatch-through: 17
Tom Lane [Wed, 11 Sep 2024 15:41:47 +0000 (11:41 -0400)]
Remove incorrect Assert.
check_agglevels_and_constraints() asserted that if we find an
aggregate function in an EXPR_KIND_FROM_SUBSELECT expression, the
expression must be in a LATERAL subquery. Alexander Lakhin found a
case where that's not so: because of the odd scoping rules for NEW/OLD
within a rule, a reference to NEW/OLD could cause an aggregate to be
considered top-level even though it's in an unmarked sub-select.
The error message that would be thrown seems sufficiently on-point,
so just remove the Assert. (Hence, this is not a bug for production
builds.)
This Assert was added by me in commit eaccfded9 (9.3 era). It looks
like I put it in to cross-check that the new logic for detecting
misplaced aggregates (using agglevelsup) caught the same cases that a
previous check on p_lateral_active did. So there might have been some
related misbehavior before eaccfded9 ... but that's very ancient
history by now, so I didn't dig any deeper.
Per bug #18608 from Alexander Lakhin. Back-patch to all supported
branches.
Tomas Vondra [Wed, 11 Sep 2024 11:21:05 +0000 (13:21 +0200)]
Fix unique key checks in JSON object constructors
When building a JSON object, the code builds a hash table of keys, to
allow checking if the keys are unique. The uniqueness check and adding
the new key happens in json_unique_check_key(), but this assumes the
pointer to the key remains valid.
Unfortunately, two places passed pointers to keys in a buffer, while
also appending more data (additional key/value pairs) to the buffer.
With enough data the buffer is resized by enlargeStringInfo(), which
calls repalloc(), invalidating the earlier key pointers.
Due to this the uniqueness check may fail with both false negatives and
false positives, producing JSON objects with duplicate keys or failing
to produce a perfectly valid JSON object.
This affects multiple functions that enforce uniqueness of keys, all
introduced in PG16 with the new SQL/JSON functionality.
Existing regression tests did not detect the issue, simply because the
initial buffer size is 1024 and the objects were small enough not to
require the repalloc.
With a sufficiently large object, AddressSanitizer reported the access
to invalid memory immediately. So would valgrind, of course.
Fixed by copying the key into the hash table memory context, and adding
regression tests with enough data to repalloc the buffer. Backpatch to
16, where the functions were introduced.
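For instance, the affected uniqueness-enforcing constructors include
shapes like:

    SELECT JSON_OBJECT('a': 1, 'b': 2 WITH UNIQUE KEYS);  -- ok
    SELECT JSON_OBJECT('a': 1, 'a': 2 WITH UNIQUE KEYS);  -- rejected: duplicate key
    -- the bug needed objects large enough to force the buffer repalloc (> 1kB)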
Reported by Alexander Lakhin. Investigation and initial fix by Junwang
Zhao, with various improvements and tests by me.
Reported-by: Alexander Lakhin
Author: Junwang Zhao, Tomas Vondra
Backpatch-through: 16
Discussion: https://postgr.es/m/18598-3279ed972a2347c7@postgresql.org
Discussion: https://postgr.es/m/CAEG8a3JjH0ReJF2_O7-8LuEbO69BxPhYeXs95_x7+H9AMWF1gw@mail.gmail.com
Tom Lane [Tue, 10 Sep 2024 20:20:31 +0000 (16:20 -0400)]
Fix some whitespace issues in XMLSERIALIZE(... INDENT).
We must drop whitespace while parsing the input, else libxml2
will include "blank" nodes that interfere with the desired
indentation behavior. The end result is that we didn't indent
nodes separated by whitespace.
Also, it seems that libxml2 may add a trailing newline when working
in DOCUMENT mode. This is semantically insignificant, so strip it.
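For example:

    SELECT xmlserialize(DOCUMENT '<a> <b>x</b> </a>'::xml AS text INDENT);
    -- whitespace between nodes no longer suppresses indentation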
This is in the gray area between being a bug fix and a definition
change. However, the INDENT option is still pretty new (since v16),
so I think we can get away with changing this in stable branches.
Hence, back-patch to v16.
Amit Langote [Mon, 9 Sep 2024 02:55:38 +0000 (11:55 +0900)]
SQL/JSON: Avoid initializing unnecessary ON ERROR / ON EMPTY steps
When the ON ERROR / ON EMPTY behavior is to return NULL, returning
NULL directly from ExecEvalJsonExprPath() suffices. Therefore, there's
no need to create separate steps to check the error/empty flag or
those to evaluate the constant NULL expression. This speeds up
common cases because the default ON ERROR / ON EMPTY behavior for
JSON_QUERY() and JSON_VALUE() is to return NULL. However, these steps
are necessary if the RETURNING type is a domain, as constraints on the
domain may need to be checked.
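For example, the common default case that is now short-circuited:

    SELECT JSON_QUERY('{"a": 1}'::jsonb, '$.b');  -- NULL, the default ON EMPTY behavior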
Reported-by: Jian He <jian.universality@gmail.com>
Author: Jian He <jian.universality@gmail.com>
Author: Amit Langote <amitlangote09@gmail.com>
Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
Backpatch-through: 17
Michael Paquier [Mon, 9 Sep 2024 04:49:59 +0000 (13:49 +0900)]
Fix waits of REINDEX CONCURRENTLY for indexes with predicates or expressions
As introduced by f9900df5f94, a REINDEX CONCURRENTLY job done for an
index with predicates or expressions would set PROC_IN_SAFE_IC in its
MyProc->statusFlags, causing it to be ignored by other concurrent
operations.
Such concurrent index rebuilds should never be ignored, as a predicate
or an expression could call a user-defined function that accesses a
different table than the table where the index is rebuilt.
A test that uses injection points is added, backpatched down to 17.
Michail has proposed a different test, but I have added something
simpler with more coverage.
Tom Lane [Fri, 6 Sep 2024 15:57:57 +0000 (11:57 -0400)]
Fix incorrect pg_stat_io output on 32-bit machines.
pg_stat_get_io() applied TimestampTzGetDatum twice to the
stat_reset_timestamp value. On 64-bit builds that's harmless because
TimestampTzGetDatum is a no-op, but on 32-bit builds it results in
displaying garbage in the stats_reset column of the pg_stat_io view.
Bug dates to commit a9c70b46d which introduced pg_stat_io, so
back-patch to v16 where that came in.
Amit Langote [Fri, 6 Sep 2024 04:25:14 +0000 (13:25 +0900)]
SQL/JSON: Fix default ON ERROR behavior for JSON_TABLE
Use EMPTY ARRAY instead of EMPTY.
This change does not affect the runtime behavior of JSON_TABLE(),
which continues to return an empty relation ON ERROR. It only alters
whether the default ON ERROR behavior is shown in the deparsed output.
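For illustration (a sketch): the deparsed form of a JSON_TABLE() ON
ERROR default, e.g. as seen via pg_get_viewdef() on a view using it,
now spells the behavior EMPTY ARRAY ON ERROR rather than EMPTY ON ERROR:

    CREATE VIEW jt_view AS
      SELECT * FROM JSON_TABLE('[]'::jsonb, '$[*]' COLUMNS (a int) EMPTY ON ERROR);
    SELECT pg_get_viewdef('jt_view');  -- deparses the behavior as EMPTY ARRAY ON ERROR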
Reported-by: Jian He <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
Backpatch-through: 17
Amit Langote [Fri, 6 Sep 2024 04:25:02 +0000 (13:25 +0900)]
SQL/JSON: Fix JSON_TABLE() column deparsing
The deparsing code in get_json_expr_options() unnecessarily emitted
the default column-specific ON ERROR / EMPTY behavior when the
top-level ON ERROR behavior in JSON_TABLE was set to ERROR. Fix that
by not overriding the column-specific default, determined based on
the column's JsonExprOp in get_json_table_columns(), with
JSON_BEHAVIOR_ERROR when that is the top-level ON ERROR behavior.
Note that this only removes redundancy; the current deparsing output
is not incorrect, just redundant.
Reviewed-by: Jian He <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
Backpatch-through: 17
Amit Langote [Fri, 6 Sep 2024 03:04:29 +0000 (12:04 +0900)]
SQL/JSON: Avoid initializing unnecessary ON ERROR / ON EMPTY steps
When the ON ERROR / ON EMPTY behavior is to return NULL, returning
NULL directly from ExecEvalJsonExprPath() suffices. Therefore, there's
no need to create separate steps to check the error/empty flag or
those to evaluate the constant NULL expression. This speeds up
common cases because the default ON ERROR / ON EMPTY behavior for
JSON_QUERY() and JSON_VALUE() is to return NULL. However, these steps
are necessary if the RETURNING type is a domain, as constraints on the
domain may need to be checked.
Reported-by: Jian He <jian.universality@gmail.com>
Author: Jian He <jian.universality@gmail.com>
Author: Amit Langote <amitlangote09@gmail.com>
Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
Backpatch-through: 17
Amit Langote [Fri, 6 Sep 2024 01:13:03 +0000 (10:13 +0900)]
SQL/JSON: Fix default ON ERROR behavior for JSON_TABLE
Use EMPTY ARRAY instead of EMPTY.
This change does not affect the runtime behavior of JSON_TABLE(),
which continues to return an empty relation ON ERROR. It only alters
whether the default ON ERROR behavior is shown in the deparsed output.
Reported-by: Jian He <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
Backpatch-through: 17
Amit Langote [Fri, 6 Sep 2024 01:12:16 +0000 (10:12 +0900)]
SQL/JSON: Fix JSON_TABLE() column deparsing
The deparsing code in get_json_expr_options() unnecessarily emitted
the default column-specific ON ERROR / EMPTY behavior when the
top-level ON ERROR behavior in JSON_TABLE was set to ERROR. Fix that
by not overriding the column-specific default, determined based on
the column's JsonExprOp in get_json_table_columns(), with
JSON_BEHAVIOR_ERROR when that is the top-level ON ERROR behavior.
Note that this only removes redundancy; the current deparsing output
is not incorrect, just redundant.
Reviewed-by: Jian He <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
Backpatch-through: 17