<para>
When changing this value, consider also adjusting
<xref linkend="guc-max-parallel-workers"/>,
+ <xref linkend="guc-autovacuum-max-parallel-workers"/>,
<xref linkend="guc-max-parallel-maintenance-workers"/>, and
<xref linkend="guc-max-parallel-workers-per-gather"/>.
</para>
</listitem>
</varlistentry>
+ <varlistentry id="guc-autovacuum-max-parallel-workers" xreflabel="autovacuum_max_parallel_workers">
+ <term><varname>autovacuum_max_parallel_workers</varname> (<type>integer</type>)
+ <indexterm>
+ <primary><varname>autovacuum_max_parallel_workers</varname></primary>
+ <secondary>configuration parameter</secondary>
+ </indexterm>
+ </term>
+ <listitem>
+ <para>
+ Sets the maximum number of parallel workers that can be used by a
+ single autovacuum worker to process indexes. This limit applies
+ specifically to the index vacuuming and index cleanup phases (for the
+ details of each vacuum phase, please refer to <xref linkend="vacuum-phases"/>).
+ The actual number of parallel workers is further limited by
+ <xref linkend="guc-max-parallel-workers"/>. This is the
+ per-autovacuum worker equivalent of the <literal>PARALLEL</literal>
+ option of the <link linkend="sql-vacuum"><command>VACUUM</command></link>
+ command. Setting this value to 0 disables parallel vacuum during autovacuum.
+ The default is 0.
+ </para>
+ </listitem>
+ </varlistentry>
+
</variablelist>
</sect2>
per-table <literal>autovacuum_vacuum_cost_delay</literal> or
<literal>autovacuum_vacuum_cost_limit</literal> storage parameters have been set
are not considered in the balancing algorithm.
+ Parallel workers launched for <xref linkend="parallel-vacuum"/> use
+ the same cost delay parameters as the leader worker. If any of these
+ parameters change, the leader propagates the new
+ values to all of its parallel workers.
</para>
<para>
</para>
</sect3>
</sect2>
+
+ <sect2 id="parallel-vacuum" xreflabel="Parallel Vacuum">
+ <title>Parallel Vacuum</title>
+
+ <para>
+ <command>VACUUM</command> can perform index vacuuming and index cleanup
+ phases in parallel using background workers (for the details of each
+ vacuum phase, please refer to <xref linkend="vacuum-phases"/>). The
+ degree of parallelism is determined by the number of indexes on the
+ relation that support parallel vacuum. For manual <command>VACUUM</command>,
+ this is limited by the <literal>PARALLEL</literal> option, which is
+ further capped by <xref linkend="guc-max-parallel-maintenance-workers"/>.
+ For autovacuum, it is limited by the table's
+ <xref linkend="reloption-autovacuum-parallel-workers"/> storage parameter,
+ if set, which is in turn capped by the
+ <xref linkend="guc-autovacuum-max-parallel-workers"/> parameter. Please
+ note that it is not guaranteed that the calculated number of parallel
+ workers will be used during execution. It is possible for a vacuum to
+ run with fewer workers than specified, or even with no workers at all.
+ </para>
+
+ <para>
+ An index can participate in parallel vacuum if and only if the size of the
+ index is more than <xref linkend="guc-min-parallel-index-scan-size"/>.
+ Only one worker can be used per index. So parallel workers are launched
+ only when there are at least <literal>2</literal> indexes in the table.
+ Workers for vacuum are launched before the start of each phase and exit at
+ the end of the phase. These behaviors might change in a future release.
+ </para>
+ </sect2>
</sect1>
</listitem>
</varlistentry>
+ <varlistentry id="reloption-autovacuum-parallel-workers" xreflabel="autovacuum_parallel_workers">
+ <term><literal>autovacuum_parallel_workers</literal> (<type>integer</type>)
+ <indexterm>
+ <primary><varname>autovacuum_parallel_workers</varname> storage parameter</primary>
+ </indexterm>
+ </term>
+ <listitem>
+ <para>
+ Per-table value for the <xref linkend="guc-autovacuum-max-parallel-workers"/>
+ parameter. If -1 is specified, the value of
+ <varname>autovacuum_max_parallel_workers</varname> is used. If set to 0,
+ parallel vacuum is disabled for this table. The default value is -1.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="reloption-autovacuum-vacuum-threshold" xreflabel="autovacuum_vacuum_threshold">
<term><literal>autovacuum_vacuum_threshold</literal>, <literal>toast.autovacuum_vacuum_threshold</literal> (<type>integer</type>)
<indexterm>
is not obtained. However, extra space is not returned to the operating
system (in most cases); it's just kept available for re-use within the
same table. It also allows us to leverage multiple CPUs in order to process
- indexes. This feature is known as <firstterm>parallel vacuum</firstterm>.
+ indexes. This feature is known as <firstterm>parallel vacuum</firstterm>
+ (see <xref linkend="parallel-vacuum"/>).
To disable this feature, one can use <literal>PARALLEL</literal> option and
specify parallel workers as zero. <command>VACUUM FULL</command> rewrites
the entire contents of the table into a new disk file with no extra space,
<term><literal>PARALLEL</literal></term>
<listitem>
<para>
- Perform index vacuum and index cleanup phases of <command>VACUUM</command>
- in parallel using <replaceable class="parameter">integer</replaceable>
- background workers (for the details of each vacuum phase, please
- refer to <xref linkend="vacuum-phases"/>). The number of workers used
- to perform the operation is equal to the number of indexes on the
- relation that support parallel vacuum which is limited by the number of
- workers specified with <literal>PARALLEL</literal> option if any which is
- further limited by <xref linkend="guc-max-parallel-maintenance-workers"/>.
- An index can participate in parallel vacuum if and only if the size of the
- index is more than <xref linkend="guc-min-parallel-index-scan-size"/>.
- Please note that it is not guaranteed that the number of parallel workers
- specified in <replaceable class="parameter">integer</replaceable> will be
- used during execution. It is possible for a vacuum to run with fewer
- workers than specified, or even with no workers at all. Only one worker
- can be used per index. So parallel workers are launched only when there
- are at least <literal>2</literal> indexes in the table. Workers for
- vacuum are launched before the start of each phase and exit at the end of
- the phase. These behaviors might change in a future release. This
+ Specifies the maximum number of parallel workers that can be used
+ for <xref linkend="parallel-vacuum"/>, which is further limited
+ by <xref linkend="guc-max-parallel-maintenance-workers"/>. This
option can't be used with the <literal>FULL</literal> option.
</para>
</listitem>
},
SPGIST_DEFAULT_FILLFACTOR, SPGIST_MIN_FILLFACTOR, 100
},
+ {
+ {
+ "autovacuum_parallel_workers",
+ "Maximum number of parallel autovacuum workers that can be used for processing this table.",
+ RELOPT_KIND_HEAP,
+ ShareUpdateExclusiveLock
+ },
+ -1, -1, 1024
+ },
{
{
"autovacuum_vacuum_threshold",
{"fillfactor", RELOPT_TYPE_INT, offsetof(StdRdOptions, fillfactor)},
{"autovacuum_enabled", RELOPT_TYPE_BOOL,
offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, enabled)},
+ {"autovacuum_parallel_workers", RELOPT_TYPE_INT,
+ offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, autovacuum_parallel_workers)},
{"autovacuum_vacuum_threshold", RELOPT_TYPE_INT,
offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, vacuum_threshold)},
{"autovacuum_vacuum_max_threshold", RELOPT_TYPE_INT,
#include "storage/latch.h"
#include "storage/lmgr.h"
#include "storage/read_stream.h"
+#include "utils/injection_point.h"
#include "utils/lsyscache.h"
#include "utils/pg_rusage.h"
#include "utils/timestamp.h"
lazy_check_wraparound_failsafe(vacrel);
dead_items_alloc(vacrel, params->nworkers);
+#ifdef USE_INJECTION_POINTS
+
+ /*
+ * Used by tests to pause before parallel vacuum is launched, allowing
+ * test code to modify configuration that the leader then propagates to
+ * workers.
+ */
+ if (AmAutoVacuumWorkerProcess() && ParallelVacuumIsActive(vacrel))
+ INJECTION_POINT("autovacuum-start-parallel-vacuum", NULL);
+#endif
+
/*
* Call lazy_scan_heap to perform all required heap pruning, index
* vacuuming, and heap vacuuming (plus related processing)
/* Always check for interrupts */
CHECK_FOR_INTERRUPTS();
- if (InterruptPending ||
- (!VacuumCostActive && !ConfigReloadPending))
+ if (InterruptPending)
+ return;
+
+ if (IsParallelWorker())
+ {
+ /*
+ * Update cost-based vacuum delay parameters for a parallel autovacuum
+ * worker if any changes are detected. This might enable cost-based
+ * delay, so it must be called before the VacuumCostActive check.
+ */
+ parallel_vacuum_update_shared_delay_params();
+ }
+
+ if (!VacuumCostActive && !ConfigReloadPending)
return;
/*
ConfigReloadPending = false;
ProcessConfigFile(PGC_SIGHUP);
VacuumUpdateCosts();
+
+ /*
+ * Propagate cost-based vacuum delay parameters to shared memory if
+ * any of them have changed during the config reload.
+ */
+ parallel_vacuum_propagate_shared_delay_params();
}
/*
/*-------------------------------------------------------------------------
*
* vacuumparallel.c
- * Support routines for parallel vacuum execution.
+ * Support routines for parallel vacuum and autovacuum execution. In the
+ * comments below, the word "vacuum" refers to both vacuum and
+ * autovacuum.
*
* This file contains routines that are intended to support setting up, using,
* and tearing down a ParallelVacuumState.
* the parallel context is re-initialized so that the same DSM can be used for
* multiple passes of index bulk-deletion and index cleanup.
*
+ * For parallel autovacuum, we need to propagate cost-based vacuum delay
+ * parameters from the leader to its workers, as the leader's parameters can
+ * change even while processing a table (e.g., due to a config reload).
+ * The PVSharedCostParams struct manages these parameters using a
+ * generation counter. Each parallel worker polls this shared state and
+ * refreshes its local delay parameters whenever a change is detected.
+ *
* Portions Copyright (c) 1996-2026, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
#define PARALLEL_VACUUM_KEY_WAL_USAGE 4
#define PARALLEL_VACUUM_KEY_INDEX_STATS 5
+/*
+ * Struct for cost-based vacuum delay parameters shared between an
+ * autovacuum worker and its parallel vacuum workers.
+ */
+typedef struct PVSharedCostParams
+{
+ /*
+ * The generation counter is incremented by the leader process each time
+ * it updates the shared cost-based vacuum delay parameters. Parallel
+ * vacuum workers compare it with their local generation,
+ * shared_params_generation_local, to detect whether they need to refresh
+ * their local parameters. The generation starts from 1 so that a freshly
+ * started worker (whose local copy is 0) will always load the initial
+ * parameters on its first check.
+ */
+ pg_atomic_uint32 generation;
+
+ slock_t mutex; /* protects all fields below */
+
+ /* Parameters to share with parallel workers */
+ double cost_delay;
+ int cost_limit;
+ int cost_page_dirty;
+ int cost_page_hit;
+ int cost_page_miss;
+} PVSharedCostParams;
+
/*
* Shared information among parallel workers. So this is allocated in the DSM
* segment.
/* Statistics of shared dead items */
VacDeadItemsInfo dead_items_info;
+
+ /*
+ * If 'true' then we are running parallel autovacuum. Otherwise, we are
+ * running parallel maintenance VACUUM.
+ */
+ bool is_autovacuum;
+
+ /*
+ * Cost-based vacuum delay parameters shared between the autovacuum leader
+ * and its parallel workers.
+ */
+ PVSharedCostParams cost_params;
} PVShared;
/* Status used during parallel index vacuum or cleanup */
PVIndVacStatus status;
};
+static PVSharedCostParams *pv_shared_cost_params = NULL;
+
+/*
+ * Worker-local copy of the last cost-parameter generation this worker has
+ * applied. Initialized to 0; since the leader initializes the shared
+ * generation counter to 1, the first call to
+ * parallel_vacuum_update_shared_delay_params() will always detect a
+ * mismatch and read the initial parameters from shared memory.
+ */
+static uint32 shared_params_generation_local = 0;
+
static int parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
bool *will_parallel_vacuum);
static void parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scans,
static bool parallel_vacuum_index_is_parallel_safe(Relation indrel, int num_index_scans,
bool vacuum);
static void parallel_vacuum_error_callback(void *arg);
+static inline void parallel_vacuum_set_cost_parameters(PVSharedCostParams *params);
+static void parallel_vacuum_dsm_detach(dsm_segment *seg, Datum arg);
/*
* Try to enter parallel mode and create a parallel context. Then initialize
shared->queryid = pgstat_get_my_query_id();
shared->maintenance_work_mem_worker =
(nindexes_mwm > 0) ?
- maintenance_work_mem / Min(parallel_workers, nindexes_mwm) :
- maintenance_work_mem;
+ vac_work_mem / Min(parallel_workers, nindexes_mwm) :
+ vac_work_mem;
+
shared->dead_items_info.max_bytes = vac_work_mem * (size_t) 1024;
/* Prepare DSA space for dead items */
pg_atomic_init_u32(&(shared->active_nworkers), 0);
pg_atomic_init_u32(&(shared->idx), 0);
+ shared->is_autovacuum = AmAutoVacuumWorkerProcess();
+
+ /*
+ * Initialize shared cost-based vacuum delay parameters if it's for
+ * autovacuum.
+ */
+ if (shared->is_autovacuum)
+ {
+ parallel_vacuum_set_cost_parameters(&shared->cost_params);
+ pg_atomic_init_u32(&shared->cost_params.generation, 1);
+ SpinLockInit(&shared->cost_params.mutex);
+
+ pv_shared_cost_params = &(shared->cost_params);
+ on_dsm_detach(pcxt->seg, parallel_vacuum_dsm_detach, (Datum) 0);
+ }
+
shm_toc_insert(pcxt->toc, PARALLEL_VACUUM_KEY_SHARED, shared);
pvs->shared = shared;
DestroyParallelContext(pvs->pcxt);
ExitParallelMode();
+ if (AmAutoVacuumWorkerProcess())
+ pv_shared_cost_params = NULL;
+
pfree(pvs->will_parallel_vacuum);
pfree(pvs);
}
+/*
+ * DSM detach callback. This is invoked when an autovacuum worker detaches
+ * from the DSM segment holding PVShared. It ensures that the local pointer
+ * to the shared state is reset even if parallel vacuum raises an error and
+ * doesn't call parallel_vacuum_end().
+ */
+static void
+parallel_vacuum_dsm_detach(dsm_segment *seg, Datum arg)
+{
+ Assert(AmAutoVacuumWorkerProcess());
+ pv_shared_cost_params = NULL;
+}
+
/*
* Returns the dead items space and dead items information.
*/
parallel_vacuum_process_all_indexes(pvs, num_index_scans, false, wstats);
}
+/*
+ * Fill in the given structure with cost-based vacuum delay parameter values.
+ */
+static inline void
+parallel_vacuum_set_cost_parameters(PVSharedCostParams *params)
+{
+ params->cost_delay = vacuum_cost_delay;
+ params->cost_limit = vacuum_cost_limit;
+ params->cost_page_dirty = VacuumCostPageDirty;
+ params->cost_page_hit = VacuumCostPageHit;
+ params->cost_page_miss = VacuumCostPageMiss;
+}
+
+/*
+ * Updates the cost-based vacuum delay parameters for parallel autovacuum
+ * workers.
+ *
+ * For non-autovacuum parallel workers, this function will have no effect.
+ */
+void
+parallel_vacuum_update_shared_delay_params(void)
+{
+ uint32 params_generation;
+
+ Assert(IsParallelWorker());
+
+ /* Quick return if the worker does not belong to a parallel autovacuum */
+ if (pv_shared_cost_params == NULL)
+ return;
+
+ params_generation = pg_atomic_read_u32(&pv_shared_cost_params->generation);
+ Assert(shared_params_generation_local <= params_generation);
+
+ /* Return if the parameters have not changed in the leader */
+ if (params_generation == shared_params_generation_local)
+ return;
+
+ SpinLockAcquire(&pv_shared_cost_params->mutex);
+ VacuumCostDelay = pv_shared_cost_params->cost_delay;
+ VacuumCostLimit = pv_shared_cost_params->cost_limit;
+ VacuumCostPageDirty = pv_shared_cost_params->cost_page_dirty;
+ VacuumCostPageHit = pv_shared_cost_params->cost_page_hit;
+ VacuumCostPageMiss = pv_shared_cost_params->cost_page_miss;
+ SpinLockRelease(&pv_shared_cost_params->mutex);
+
+ VacuumUpdateCosts();
+
+ shared_params_generation_local = params_generation;
+
+ elog(DEBUG2,
+ "parallel autovacuum worker updated cost params: cost_limit=%d, cost_delay=%g, cost_page_miss=%d, cost_page_dirty=%d, cost_page_hit=%d",
+ vacuum_cost_limit,
+ vacuum_cost_delay,
+ VacuumCostPageMiss,
+ VacuumCostPageDirty,
+ VacuumCostPageHit);
+}
+
+/*
+ * Store the cost-based vacuum delay parameters in shared memory so that
+ * parallel vacuum workers can consume them (see
+ * parallel_vacuum_update_shared_delay_params()).
+ */
+void
+parallel_vacuum_propagate_shared_delay_params(void)
+{
+ Assert(AmAutoVacuumWorkerProcess());
+
+ /*
+ * Quick return if the leader process is not sharing the delay parameters.
+ */
+ if (pv_shared_cost_params == NULL)
+ return;
+
+ /*
+ * Check if any delay parameters have changed. We can read them without
+ * locks as only the leader can modify them.
+ */
+ if (vacuum_cost_delay == pv_shared_cost_params->cost_delay &&
+ vacuum_cost_limit == pv_shared_cost_params->cost_limit &&
+ VacuumCostPageDirty == pv_shared_cost_params->cost_page_dirty &&
+ VacuumCostPageHit == pv_shared_cost_params->cost_page_hit &&
+ VacuumCostPageMiss == pv_shared_cost_params->cost_page_miss)
+ return;
+
+ /* Update the shared delay parameters */
+ SpinLockAcquire(&pv_shared_cost_params->mutex);
+ parallel_vacuum_set_cost_parameters(pv_shared_cost_params);
+ SpinLockRelease(&pv_shared_cost_params->mutex);
+
+ /*
+ * Increment the parameter generation so that parallel workers know they
+ * should re-read the shared cost parameters.
+ */
+ pg_atomic_fetch_add_u32(&pv_shared_cost_params->generation, 1);
+}
+
/*
* Compute the number of parallel worker processes to request. Both index
* vacuum and index cleanup can be executed with parallel workers.
int nindexes_parallel_bulkdel = 0;
int nindexes_parallel_cleanup = 0;
int parallel_workers;
+ int max_workers;
+
+ max_workers = AmAutoVacuumWorkerProcess() ?
+ autovacuum_max_parallel_workers :
+ max_parallel_maintenance_workers;
/*
* We don't allow performing parallel operation in standalone backend or
* when parallelism is disabled.
*/
- if (!IsUnderPostmaster || max_parallel_maintenance_workers == 0)
+ if (!IsUnderPostmaster || max_workers == 0)
return 0;
/*
parallel_workers = (nrequested > 0) ?
Min(nrequested, nindexes_parallel) : nindexes_parallel;
- /* Cap by max_parallel_maintenance_workers */
- parallel_workers = Min(parallel_workers, max_parallel_maintenance_workers);
+ /* Cap by GUC variable */
+ parallel_workers = Min(parallel_workers, max_workers);
return parallel_workers;
}
shared->dead_items_handle);
/* Set cost-based vacuum delay */
- VacuumUpdateCosts();
+ if (shared->is_autovacuum)
+ {
+ /*
+ * Parallel autovacuum workers initialize cost-based delay parameters
+ * from the leader's shared state rather than GUC defaults, because
+ * the leader may have applied per-table or autovacuum-specific
+ * overrides. pv_shared_cost_params must be set before calling
+ * parallel_vacuum_update_shared_delay_params().
+ */
+ pv_shared_cost_params = &(shared->cost_params);
+ parallel_vacuum_update_shared_delay_params();
+ }
+ else
+ VacuumUpdateCosts();
+
VacuumCostBalance = 0;
VacuumCostBalanceLocal = 0;
VacuumSharedCostBalance = &(shared->cost_balance);
vac_close_indexes(nindexes, indrels, RowExclusiveLock);
table_close(rel, ShareUpdateExclusiveLock);
FreeAccessStrategy(pvs.bstrategy);
+
+ if (shared->is_autovacuum)
+ pv_shared_cost_params = NULL;
}
/*
}
else
{
- /* Must be explicit VACUUM or ANALYZE */
+ /* Must be an explicit VACUUM or ANALYZE, or a parallel autovacuum worker */
vacuum_cost_delay = VacuumCostDelay;
vacuum_cost_limit = VacuumCostLimit;
}
*/
tab->at_params.index_cleanup = VACOPTVALUE_UNSPECIFIED;
tab->at_params.truncate = VACOPTVALUE_UNSPECIFIED;
- /* As of now, we don't support parallel vacuum for autovacuum */
- tab->at_params.nworkers = -1;
tab->at_params.freeze_min_age = freeze_min_age;
tab->at_params.freeze_table_age = freeze_table_age;
tab->at_params.multixact_freeze_min_age = multixact_freeze_min_age;
tab->at_params.log_analyze_min_duration = log_analyze_min_duration;
tab->at_params.toast_parent = InvalidOid;
+ /* Determine the number of parallel vacuum workers to use */
+ tab->at_params.nworkers = 0;
+ if (avopts)
+ {
+ if (avopts->autovacuum_parallel_workers == 0)
+ {
+ /*
+ * Disable parallel vacuum, if the reloption sets the parallel
+ * degree as zero.
+ */
+ tab->at_params.nworkers = -1;
+ }
+ else if (avopts->autovacuum_parallel_workers > 0)
+ tab->at_params.nworkers = avopts->autovacuum_parallel_workers;
+
+ /*
+ * autovacuum_parallel_workers == -1 falls through; keep nworkers = 0
+ * so that the parallel degree is chosen automatically.
+ */
+ }
+
/*
* Later, in vacuum_rel(), we check reloptions for any
* vacuum_max_eager_freeze_failure_rate override.
int MaxConnections = 100;
int max_worker_processes = 8;
int max_parallel_workers = 8;
+int autovacuum_max_parallel_workers = 0;
int MaxBackends = 0;
/* GUC parameters for vacuum */
*
* Also allow normal setting if the GUC is marked GUC_ALLOW_IN_PARALLEL.
*
- * Other changes might need to affect other workers, so forbid them.
+ * Other changes might need to affect other workers, so forbid them. Note
+ * that the parallel autovacuum leader is an exception, because changes to
+ * its cost-based delay parameters need to take effect in its parallel
+ * workers. These parameters are propagated to the workers during parallel
+ * vacuum (see vacuumparallel.c for details); all other changes affect
+ * only the leader itself.
*/
- if (IsInParallelMode() && changeVal && action != GUC_ACTION_SAVE &&
+ if (IsInParallelMode() && !AmAutoVacuumWorkerProcess() && changeVal &&
+ action != GUC_ACTION_SAVE &&
(record->flags & GUC_ALLOW_IN_PARALLEL) == 0)
{
ereport(elevel,
max => '10.0',
},
+{ name => 'autovacuum_max_parallel_workers', type => 'int', context => 'PGC_SIGHUP', group => 'VACUUM_AUTOVACUUM',
+ short_desc => 'Maximum number of parallel workers that can be used by a single autovacuum worker.',
+ variable => 'autovacuum_max_parallel_workers',
+ boot_val => '0',
+ min => '0',
+ max => 'MAX_PARALLEL_WORKER_LIMIT',
+},
+
{ name => 'autovacuum_max_workers', type => 'int', context => 'PGC_SIGHUP', group => 'VACUUM_AUTOVACUUM',
short_desc => 'Sets the maximum number of simultaneously running autovacuum worker processes.',
variable => 'autovacuum_max_workers',
#autovacuum_worker_slots = 16 # autovacuum worker slots to allocate
# (change requires restart)
#autovacuum_max_workers = 3 # max number of autovacuum subprocesses
+#autovacuum_max_parallel_workers = 0 # limited by max_parallel_workers
#autovacuum_naptime = 1min # time between autovacuum runs
#autovacuum_vacuum_threshold = 50 # min number of row updates before
# vacuum
"autovacuum_multixact_freeze_max_age",
"autovacuum_multixact_freeze_min_age",
"autovacuum_multixact_freeze_table_age",
+ "autovacuum_parallel_workers",
"autovacuum_vacuum_cost_delay",
"autovacuum_vacuum_cost_limit",
"autovacuum_vacuum_insert_scale_factor",
int num_index_scans,
bool estimated_count,
PVWorkerStats *wstats);
+extern void parallel_vacuum_update_shared_delay_params(void);
+extern void parallel_vacuum_propagate_shared_delay_params(void);
extern void parallel_vacuum_main(dsm_segment *seg, shm_toc *toc);
/* in commands/analyze.c */
extern PGDLLIMPORT int MaxConnections;
extern PGDLLIMPORT int max_worker_processes;
extern PGDLLIMPORT int max_parallel_workers;
+extern PGDLLIMPORT int autovacuum_max_parallel_workers;
extern PGDLLIMPORT int commit_timestamp_buffers;
extern PGDLLIMPORT int multixact_member_buffers;
typedef struct AutoVacOpts
{
bool enabled;
+
+ int autovacuum_parallel_workers;
int vacuum_threshold;
int vacuum_max_threshold;
int vacuum_ins_threshold;
plsample \
spgist_name_ops \
test_aio \
+ test_autovacuum \
test_binaryheap \
test_bitmapset \
test_bloomfilter \
subdir('spgist_name_ops')
subdir('ssl_passphrase_callback')
subdir('test_aio')
+subdir('test_autovacuum')
subdir('test_binaryheap')
subdir('test_bitmapset')
subdir('test_bloomfilter')
--- /dev/null
+# Generated subdirectories
+/tmp_check/
--- /dev/null
+# src/test/modules/test_autovacuum/Makefile
+
+PGFILEDESC = "test_autovacuum - test code for autovacuum"
+
+TAP_TESTS = 1
+
+EXTRA_INSTALL = src/test/modules/injection_points
+
+export enable_injection_points
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/test_autovacuum
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+endif
--- /dev/null
+# Copyright (c) 2024-2026, PostgreSQL Global Development Group
+
+tests += {
+ 'name': 'test_autovacuum',
+ 'sd': meson.current_source_dir(),
+ 'bd': meson.current_build_dir(),
+ 'tap': {
+ 'env': {
+ 'enable_injection_points': get_option('injection_points') ? 'yes' : 'no',
+ },
+ 'tests': [
+ 't/001_parallel_autovacuum.pl',
+ ],
+ },
+}
--- /dev/null
+# Copyright (c) 2026, PostgreSQL Global Development Group
+
+# Test parallel autovacuum behavior
+
+use strict;
+use warnings FATAL => 'all';
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+if ($ENV{enable_injection_points} ne 'yes')
+{
+ plan skip_all => 'Injection points not supported by this build';
+}
+
+# Before each test, disable autovacuum on the 'test_autovac' table and
+# generate some dead tuples in it. Returns the current autovacuum_count of
+# the test_autovac table.
+sub prepare_for_next_test
+{
+ my ($node, $test_number) = @_;
+
+ $node->safe_psql(
+ 'postgres', qq{
+ ALTER TABLE test_autovac SET (autovacuum_enabled = false);
+ UPDATE test_autovac SET col_1 = $test_number;
+ });
+
+ my $count = $node->safe_psql(
+ 'postgres', qq{
+ SELECT autovacuum_count FROM pg_stat_user_tables WHERE relname = 'test_autovac'
+ });
+
+ return $count;
+}
+
+# Wait for the table to be vacuumed by an autovacuum worker.
+sub wait_for_autovacuum_complete
+{
+ my ($node, $old_count) = @_;
+
+ $node->poll_query_until(
+ 'postgres', qq{
+ SELECT autovacuum_count > $old_count FROM pg_stat_user_tables WHERE relname = 'test_autovac'
+ });
+}
+
+my $node = PostgreSQL::Test::Cluster->new('main');
+$node->init;
+
+# Limit to one autovacuum worker and disable autovacuum logging globally
+# (enabled only on the test table) so that log checks below match only
+# activity on the expected table.
+$node->append_conf(
+ 'postgresql.conf', qq{
+autovacuum_max_workers = 1
+autovacuum_worker_slots = 1
+autovacuum_max_parallel_workers = 2
+max_worker_processes = 10
+max_parallel_workers = 10
+log_min_messages = debug2
+autovacuum_naptime = '1s'
+min_parallel_index_scan_size = 0
+log_autovacuum_min_duration = -1
+});
+$node->start;
+
+# Check if the extension injection_points is available, as it may be
+# possible that this script is run with installcheck, where the module
+# would not be installed by default.
+if (!$node->check_extension('injection_points'))
+{
+ plan skip_all => 'Extension injection_points not installed';
+}
+
+# Create all functions needed for testing
+$node->safe_psql(
+ 'postgres', qq{
+ CREATE EXTENSION injection_points;
+});
+
+my $indexes_num = 3;
+my $initial_rows_num = 10_000;
+my $autovacuum_parallel_workers = 2;
+
+# Create table and fill it with some data
+$node->safe_psql(
+ 'postgres', qq{
+ CREATE TABLE test_autovac (
+ id SERIAL PRIMARY KEY,
+ col_1 INTEGER, col_2 INTEGER, col_3 INTEGER, col_4 INTEGER
+ ) WITH (autovacuum_parallel_workers = $autovacuum_parallel_workers,
+ log_autovacuum_min_duration = 0);
+
+ INSERT INTO test_autovac
+ SELECT
+ g AS col1,
+ g + 1 AS col2,
+ g + 2 AS col3,
+ g + 3 AS col4
+ FROM generate_series(1, $initial_rows_num) AS g;
+});
+
+# Create specified number of b-tree indexes on the table
+$node->safe_psql(
+ 'postgres', qq{
+ DO \$\$
+ DECLARE
+ i INTEGER;
+ BEGIN
+ FOR i IN 1..$indexes_num LOOP
+ EXECUTE format('CREATE INDEX idx_col_\%s ON test_autovac (col_\%s);', i, i);
+ END LOOP;
+ END \$\$;
+});
+
+# Test 1 :
+# Our table has enough indexes and appropriate reloptions, so autovacuum must
+# be able to process it in parallel mode. Just check if it can do it.
+
+my $av_count = prepare_for_next_test($node, 1);
+my $log_offset = -s $node->logfile;
+
+$node->safe_psql(
+ 'postgres', qq{
+ ALTER TABLE test_autovac SET (autovacuum_enabled = true);
+});
+
+# Wait until the parallel autovacuum on table is completed. At the same time,
+# we check that the required number of parallel workers has been started.
+wait_for_autovacuum_complete($node, $av_count);
+ok( $node->log_contains(
+ qr/parallel workers: index vacuum: 2 planned, 2 launched in total/,
+ $log_offset),
+ 'autovacuum processed indexes with the expected number of parallel workers');
+
+# Test 2:
+# Check whether parallel autovacuum leader can propagate cost-based parameters
+# to the parallel workers.
+
+$av_count = prepare_for_next_test($node, 2);
+$log_offset = -s $node->logfile;
+
+$node->safe_psql(
+ 'postgres', qq{
+ SELECT injection_points_attach('autovacuum-start-parallel-vacuum', 'wait');
+
+ ALTER TABLE test_autovac SET (autovacuum_parallel_workers = 1, autovacuum_enabled = true);
+});
+
+# Wait until parallel autovacuum is initialized
+$node->wait_for_event('autovacuum worker',
+ 'autovacuum-start-parallel-vacuum');
+
+# Update the shared cost-based delay parameters.
+$node->safe_psql(
+ 'postgres', qq{
+ ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 500;
+ ALTER SYSTEM SET autovacuum_vacuum_cost_delay = 5;
+ ALTER SYSTEM SET vacuum_cost_page_miss = 10;
+ ALTER SYSTEM SET vacuum_cost_page_dirty = 10;
+ ALTER SYSTEM SET vacuum_cost_page_hit = 10;
+ SELECT pg_reload_conf();
+});
+
+# Resume the leader process. It picks up the new parameter values during the
+# heap scan (where vacuum_delay_point() is called), stores them in shared
+# memory, and then launches the parallel vacuum workers.
+$node->safe_psql(
+ 'postgres', qq{
+ SELECT injection_points_wakeup('autovacuum-start-parallel-vacuum');
+});
+
+# Check whether parallel worker successfully updated all parameters during
+# index processing.
+$node->wait_for_log(
+ qr/parallel autovacuum worker updated cost params: cost_limit=500, cost_delay=5, cost_page_miss=10, cost_page_dirty=10, cost_page_hit=10/,
+ $log_offset);
+
+wait_for_autovacuum_complete($node, $av_count);
+
+# Cleanup
+$node->safe_psql(
+ 'postgres', qq{
+ SELECT injection_points_detach('autovacuum-start-parallel-vacuum');
+});
+
+$node->stop;
+done_testing();
PVIndVacStatus
PVOID
PVShared
+PVSharedCostParams
PVWorkerUsage
PVWorkerStats
PX_Alias